Now that we have objects and multiple rays per pixel, we can make some realistic-looking materials. We will start with diffuse (matte) materials. A diffuse object that does not emit light merely takes on the color of its surroundings, and light reflecting off a diffuse surface has its direction randomized. In everyday life we constantly see objects that look dull and rough; they look that way because their uneven surfaces scatter incoming light with no consistent reflection direction. So if we send three rays into a crack between two diffuse surfaces, each will behave randomly in a different way:

[![](http://www.wjgbaby.com/wp-content/uploads/2018/05/18051604.png)](http://www.wjgbaby.com/wp-content/uploads/2018/05/18051604.png)

Rays may also be absorbed rather than reflected. The darker the surface, the more likely absorption is; that is why it looks dark. Let N be the surface normal, p the hit point, and s a random point. Pick s from inside the unit-radius sphere tangent to the surface at the hit point, and send a ray from the hit point p to the random point s; that ray is our reflected ray. Since the center of that tangent sphere sits at p + N, this amounts to s = p + N + (a random point inside the unit ball).

[![](http://www.wjgbaby.com/wp-content/uploads/2018/05/18051605.png)](http://www.wjgbaby.com/wp-content/uploads/2018/05/18051605.png)
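A minimal sketch of just this scatter step (the DiffuseScatter name is hypothetical and not part of the listing below; it assumes the vec3 and ray types and the RandomInUnitSphere() helper used there):

//Hypothetical helper: build the scattered ray for one diffuse bounce
ray DiffuseScatter(const vec3& p, const vec3& N)
{
    vec3 s = p + N + RandomInUnitSphere(); //random point inside the unit sphere tangent to the surface at p
    return ray(p, s - p);                  //scattered ray from the hit point toward s
}

The full listing below does exactly this inline in Color().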
cpp:

#include <fstream>
#include <cfloat>
#include <cstdlib>
#include <cmath>
#include <random>
#include "sphere.h"
#include "hitable_list.h"
#include "camera.h"

using namespace std;

//rejection-sample a random point inside the unit-radius sphere,
//used to build the randomly scattered (diffuse) ray
vec3 RandomInUnitSphere()
{
    vec3 p;
    do
    {
        //pick a point in the cube [-1,1]^3 and reject it until it falls inside the unit sphere
        p = 2.0f * vec3((rand() % 100 / float(100)), (rand() % 100 / float(100)), (rand() % 100 / float(100))) - vec3(1.0f, 1.0f, 1.0f);
    } while (dot(p, p) >= 1.0f);

    return p;
}

vec3 Color(const ray& r, hitable* world)
{
    hit_record rec;
    if (world->hit(r, 0.0, FLT_MAX, rec))
    {
        //target = p + N + random point in the unit ball; the scattered ray goes from p toward target
        vec3 target = rec.p + rec.normal + RandomInUnitSphere();
        //recurse; 50% of the energy is absorbed on each bounce
        return 0.5f * Color(ray(rec.p, target - rec.p), world);
    }
    else
    {
        vec3 unit_direction = unit_vector(r.direction());
        float t = 0.5f * (unit_direction.y() + 1.0f);
        //linear blend: blue at t=1, white at t=0, a mix in between
        //blended_value = (1-t)*start_value + t*end_value
        return (1.0f - t) * vec3(1.0f, 1.0f, 1.0f) + t * vec3(0.5f, 0.7f, 1.0f);
    }
}

int main()
{
    ofstream outfile;
    outfile.open("IMG01.ppm");

    int nx = 800;
    int ny = 400;
    //number of samples per pixel
    int ns = 100;
    outfile << "P3\n" << nx << " " << ny << "\n255\n";

    hitable* list[2];
    list[0] = new sphere(vec3(0.0f, 0.0f, -1.0f), 0.5f);
    list[1] = new sphere(vec3(0.0f, -100.5f, -1.0f), 100.0f);
    hitable* world = new hitable_list(list, 2);

    camera cam;

    //random offsets: each pixel's sample region is centered on the pixel and extends one unit outward
    default_random_engine reng;
    uniform_real_distribution<float> uni_dist(0.0f, 1.0f);

    for (int j = ny - 1; j >= 0; j--)
    {
        for (int i = 0; i < nx; i++)
        {
            vec3 col(0.0f, 0.0f, 0.0f);
            //take ns samples within each pixel's region
            for (int s = 0; s < ns; s++)
            {
                float u = float(i + uni_dist(reng)) / float(nx);
                float v = float(j + uni_dist(reng)) / float(ny);
                ray r = cam.getray(u, v);
                //vec3 p = r.point_at_parameter(2.0);
                //accumulate the color over this pixel's region ((u,v) to (u+1,v+1))
                col += Color(r, world);
            }
            //average the accumulated color
            col /= float(ns);
            //gamma correction
            col = vec3(sqrt(col[0]), sqrt(col[1]), sqrt(col[2]));
            int ir = int(255.99 * col[0]);
            int ig = int(255.99 * col[1]);
            int ib = int(255.99 * col[2]);
            outfile << ir << " " << ig << " " << ib << "\n";
        }
    }
    outfile.close();
    return 0;
}
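A quick way to see why the render is fairly dark: Color() multiplies by 0.5 on every bounce, so a ray that bounces n times before escaping to the sky contributes only 0.5^n of the background color. A minimal, self-contained sketch of that attenuation (the bounce counts and the white sky value are illustrative assumptions, not part of the listing above):

#include <cstdio>
#include <cmath>

int main()
{
    const float sky = 1.0f; //assume a white background sample
    for (int bounces = 0; bounces <= 4; bounces++)
    {
        //each diffuse bounce keeps 50% of the energy
        float contribution = std::pow(0.5f, bounces) * sky;
        std::printf("%d bounce(s): %.4f\n", bounces, contribution);
    }
    return 0;
}

After just four bounces a ray keeps only 1/16 of the sky's brightness, which is why multi-bounce regions such as the crease under the sphere come out so dark.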

Final result:

[![](http://www.wjgbaby.com/wp-content/uploads/2018/05/18051606.jpg)](http://www.wjgbaby.com/wp-content/uploads/2018/05/18051606.jpg)
Gamma correction:

//average the accumulated color
col /= float(ns);
//gamma correction
col = vec3(sqrt(col[0]), sqrt(col[1]), sqrt(col[2]));
int ir = int(255.99 * col[0]);
int ig = int(255.99 * col[1]);
int ib = int(255.99 * col[2]);
outfile << ir << " " << ig << " " << ib << "\n";
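To get a feel for what the square root does, consider a 50%-reflective gray (a minimal sketch; the 0.5 linear value is just an illustrative assumption): written out directly it becomes roughly 127/255, while mapping it through sqrt(0.5) ≈ 0.707 gives roughly 181/255, the light gray we would expect.

#include <cstdio>
#include <cmath>

int main()
{
    float linear = 0.5f;                 //assumed linear intensity of a 50% reflector
    float corrected = std::sqrt(linear); //"gamma 2": raise the color to the power 1/2
    std::printf("without gamma: %d\n", int(255.99f * linear));    //prints 127
    std::printf("with gamma 2:  %d\n", int(255.99f * corrected)); //prints 181
    return 0;
}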

The result without gamma correction looks like this: note the shadow under the sphere. The picture is very dark, but our spheres only absorb half the energy on each bounce, so they are 50% reflectors, and those shadows should be light gray. The author's note: "To a first approximation, we can use 'gamma 2' which means raising the color to the power 1/gamma, or in our simple case ½, which is just square-root."

Reference book: *Ray Tracing in One Weekend*

RTIOW series project: GitHub

RTIOW series notes:

- RTIOW-ch1: Output an image
- RTIOW-ch2: The vec3 class
- RTIOW-ch3: Rays, a simple camera, and background
- RTIOW-ch4: Adding a sphere
- RTIOW-ch5: Surface normals and multiple objects
- RTIOW-ch6: Antialiasing
- RTIOW-ch7: Diffuse Materials
- RTIOW-ch8: Metal
- RTIOW-ch9: Dielectrics
- RTIOW-ch10: Positionable camera
- RTIOW-ch11: Defocus Blur
- RTIOW-ch12: Where next