Pipeline and Shading

Introduction

GAMES101: Introduction to Modern Computer Graphics is taught by Prof. Lingqi Yan. In this assignment we take a further step in simulating modern graphics techniques, focusing on the Blinn-Phong shading model and texture mapping.

Assignment 3 of GAMES101 Spring 2021 is now released! You can submit it on SmartChair; note that the deadline is July 10, 2021.
A make-up submission channel will be opened for students midway through the course.

SmartChair link

Baidu Cloud link
Extraction code: r3pc

Please download from whichever link suits your network.

Overview

In this programming assignment, we take a further step in simulating modern graphics techniques. We added an object loader (for loading 3D models), a vertex shader, and a fragment shader to the code, and added support for texture mapping.

Your tasks in this assignment are:

  1. Modify the function rasterize_triangle(const Triangle& t) in rasterizer.cpp: implement interpolation analogous to Assignment 2 here, interpolating normals, colors, and texture colors.
  2. Modify the function get_projection_matrix() in main.cpp: fill in the projection matrix you implemented in the previous assignments; you can then run ./Rasterizer output.png normal to check the normal-vector result.
  3. Modify the function phong_fragment_shader() in main.cpp: implement the Blinn-Phong model to compute the fragment color.
  4. Modify the function texture_fragment_shader() in main.cpp: building on Blinn-Phong, treat the texture color as the kd in the formula to implement the texture fragment shader.
  5. Modify the function bump_fragment_shader() in main.cpp: building on Blinn-Phong, read the comments in that function carefully and implement bump mapping.
  6. Modify the function displacement_fragment_shader() in main.cpp: building on bump mapping, implement displacement mapping.

Getting Started

Building and Usage

On the course-provided virtual machine, after downloading the skeleton code for this assignment, build the program in the SoftwareRasterizer directory as follows:

mkdir build
cd ./build
cmake ..
make

This generates an executable named Rasterizer. When invoking it, the second argument is the output image filename, and the third argument can be one of the following:

  • texture: use the texture shader.

    Example: ./Rasterizer output.png texture

  • normal: use the normal shader.

    Example: ./Rasterizer output.png normal

  • phong: use the Blinn-Phong shader.

    Example: ./Rasterizer output.png phong

  • bump: use the bump shader.

    Example: ./Rasterizer output.png bump

  • displacement: use the displacement shader.

    Example: ./Rasterizer output.png displacement

    After you modify the code, you must run make again to see the new result.

Framework Code Notes

Compared with the previous assignment, we made the following changes to the framework:

  1. We introduced a third-party .obj file loading library to read more complex model files; it lives in OBJ_Loader.h. You do not need to understand its inner workings in detail; just know that it hands us a vector named TriangleList, in which every triangle has per-vertex normals and texture coordinates. The textures associated with the model are loaded as well. Note: if you want to try loading other models, you currently have to edit the model path by hand.
  2. We introduced a new Texture class to build a texture from an image, with an interface for looking up texture colors: Vector3f getColor(float u, float v)
  3. We created the Shader.hpp header and defined fragment_shader_payload, which holds the parameters a fragment shader may need. There are currently three fragment shaders in main.cpp, of which normal_fragment_shader is a sample shader that colors by normal; the other two are yours to implement.
  4. The main rendering pipeline starts at rasterizer::draw(std::vector<Triangle>& TriangleList). We again perform a series of transformations, which are normally the job of the vertex shader. After that, we call the function rasterize_triangle.
  5. The rasterize_triangle function is similar to what you implemented in Assignment 2. The difference is that the values to set are no longer constants: normals, colors, texture colors, and shading colors are interpolated with barycentric coordinates. Recall the [alpha, beta, gamma] we provided last time for computing the z value; this time you will apply it to interpolating the other attributes. You need to compute the interpolated values and write the color computed by the fragment shader into the framebuffer, which requires you to first fill the fragment shader payload with the interpolated results and then call the fragment shader to get the final color.

Running and Results

Once you have copied your code from the previous assignment into the right places and made the modifications described above **(please read the notes carefully)**, you can run the default normal shader and observe the following result:

After implementing the Blinn-Phong reflection model, the result should be:

After implementing textures, the result should be:

After implementing bump mapping, you will see the perturbed normals visualized:

After implementing displacement mapping, you will see the following result:

Frequently Asked Questions

  1. In the bump mapping part, h(u,v) = texture_color(u,v).norm(), where u,v are the tex_coords and w,h are the texture's width and height.
  2. In rasterizer.cpp, v = t.toVector4()
  3. eye_fov in get_projection_matrix should be converted to radians.
  4. The modified normal in bump and displacement still needs to be normalized.
  5. Eigen methods you may need: norm(), normalized(), cwiseProduct()
  6. When implementing h(u+1/w,v), write it as h(u+1.0/w,v).
  7. A proper bump map should be a single-channel grayscale image, but for the framework's simplicity this course uses an RGB image as the bump texture, so a rule is needed to project color onto grayscale; I merely "happened" to pick the norm. To make sure your results match mine, everyone is required to use the norm.
  8. The derivation behind the bump mapping & displacement mapping computations will be covered in detail later in the ray tracing part; for now, implement them according to the comments.

Grading and Submission

Grading

  • [5 pts] Correct submission format, including all required files; the code compiles and runs normally.

  • [10 pts] Attribute interpolation: correctly interpolate colors, normals, texture coordinates, and shading positions, and pass them to fragment_shader_payload.

  • [20 pts] Blinn-Phong reflection model: correctly implement the reflection model in phong_fragment_shader.

  • [5 pts] Texture mapping: copy the phong_fragment_shader code into texture_fragment_shader and correctly implement texture mapping on top of it.

  • [10 pts] Bump mapping and displacement mapping: correctly implement both bump mapping and displacement mapping.

  • [Bonus 3 pts] Try more models: find other usable .obj files, submit the rendered results, and save the models in the /models directory. These models must also contain vertex normal information.

  • [Bonus 5 pts] Bilinear texture interpolation: sample the texture with bilinear interpolation by implementing a new method Vector3f getColorBilinear(float u, float v) in the Texture class and calling it from the fragment shader. To make the effect of bilinear interpolation more visible, consider choosing a smaller texture image. Submit both the plain and the bilinear sampling results and compare them.

  • [-2 pts] Penalties:
    leaving files unrelated to the code, such as /build, /.vscode, or Assignment3.pdf, undeleted; README.md missing or not done as required;
    the /images directory missing or not done as required;
    code files and README.md not at the top level of the submitted folder.

Submission

  1. When you finish the assignment, clean up your project (delete /build, /.vscode, /Assignment3.pdf, etc.); your folder should contain CMakeLists.txt and all program files (whether modified or not);

  2. Also create an /images directory and save all result images there;

  3. Then add a README.md stating which of the seven scoring items above you completed (with a result image for each completed item), and briefly describing what you implemented in each function;

  4. Finally, zip the above, name it "Name Homework3.zip", and submit it to the SmartChair platform.

    Platform link: http://www.smartchair.org/GAMES101-Spring2021/.

Implementation

Skeleton Code

#include <iostream>
#include <opencv2/opencv.hpp>

#include "global.hpp"
#include "rasterizer.hpp"
#include "Triangle.hpp"
#include "Shader.hpp"
#include "Texture.hpp"
#include "OBJ_Loader.h"

Eigen::Matrix4f get_view_matrix(Eigen::Vector3f eye_pos)
{
Eigen::Matrix4f view = Eigen::Matrix4f::Identity();

Eigen::Matrix4f translate;
translate << 1,0,0,-eye_pos[0],
0,1,0,-eye_pos[1],
0,0,1,-eye_pos[2],
0,0,0,1;

view = translate*view;

return view;
}

Eigen::Matrix4f get_model_matrix(float angle)
{
Eigen::Matrix4f rotation;
angle = angle * MY_PI / 180.f;
rotation << cos(angle), 0, sin(angle), 0,
0, 1, 0, 0,
-sin(angle), 0, cos(angle), 0,
0, 0, 0, 1;

Eigen::Matrix4f scale;
scale << 2.5, 0, 0, 0,
0, 2.5, 0, 0,
0, 0, 2.5, 0,
0, 0, 0, 1;

Eigen::Matrix4f translate;
translate << 1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1;

return translate * rotation * scale;
}

Eigen::Matrix4f get_projection_matrix(float eye_fov, float aspect_ratio, float zNear, float zFar)
{
// Students will implement this function

}

Eigen::Vector3f vertex_shader(const vertex_shader_payload& payload)
{
return payload.position;
}

Eigen::Vector3f normal_fragment_shader(const fragment_shader_payload& payload)
{
Eigen::Vector3f return_color = (payload.normal.head<3>().normalized() + Eigen::Vector3f(1.0f, 1.0f, 1.0f)) / 2.f;
Eigen::Vector3f result;
result << return_color.x() * 255, return_color.y() * 255, return_color.z() * 255;
return result;
}

static Eigen::Vector3f reflect(const Eigen::Vector3f& vec, const Eigen::Vector3f& axis)
{
auto costheta = vec.dot(axis);
return (2 * costheta * axis - vec).normalized();
}

struct light
{
Eigen::Vector3f position;
Eigen::Vector3f intensity;
};

Eigen::Vector3f texture_fragment_shader(const fragment_shader_payload& payload)
{
Eigen::Vector3f return_color = {0, 0, 0};
if (payload.texture)
{
// TODO: Get the texture value at the texture coordinates of the current fragment
return_color = payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y());
}
Eigen::Vector3f texture_color;
texture_color << return_color.x(), return_color.y(), return_color.z();

Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
Eigen::Vector3f kd = texture_color / 255.f;
Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

auto l1 = light{{20, 20, 20}, {500, 500, 500}};
auto l2 = light{{-20, 20, 0}, {500, 500, 500}};

std::vector<light> lights = {l1, l2};
Eigen::Vector3f amb_light_intensity{10, 10, 10};
Eigen::Vector3f eye_pos{0, 0, 10};

float p = 150;

Eigen::Vector3f color = texture_color;
Eigen::Vector3f point = payload.view_pos;
Eigen::Vector3f normal = payload.normal;

Eigen::Vector3f result_color = {0, 0, 0};

for (auto& light : lights)
{
// TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular*
// components are. Then, accumulate that result on the *result_color* object.


}

return result_color * 255.f;
}

Eigen::Vector3f phong_fragment_shader(const fragment_shader_payload& payload)
{
Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
Eigen::Vector3f kd = payload.color;
Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

auto l1 = light{{20, 20, 20}, {500, 500, 500}};
auto l2 = light{{-20, 20, 0}, {500, 500, 500}};

std::vector<light> lights = {l1, l2};
Eigen::Vector3f amb_light_intensity{10, 10, 10};
Eigen::Vector3f eye_pos{0, 0, 10};

float p = 150;

Eigen::Vector3f color = payload.color;
Eigen::Vector3f point = payload.view_pos;
Eigen::Vector3f normal = payload.normal;

Eigen::Vector3f result_color = {0, 0, 0};
for (auto& light : lights)
{
// TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular*
// components are. Then, accumulate that result on the *result_color* object.


}

return result_color * 255.f;
}



Eigen::Vector3f displacement_fragment_shader(const fragment_shader_payload& payload)
{

Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
Eigen::Vector3f kd = payload.color;
Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

auto l1 = light{{20, 20, 20}, {500, 500, 500}};
auto l2 = light{{-20, 20, 0}, {500, 500, 500}};

std::vector<light> lights = {l1, l2};
Eigen::Vector3f amb_light_intensity{10, 10, 10};
Eigen::Vector3f eye_pos{0, 0, 10};

float p = 150;

Eigen::Vector3f color = payload.color;
Eigen::Vector3f point = payload.view_pos;
Eigen::Vector3f normal = payload.normal;

float kh = 0.2, kn = 0.1;

// TODO: Implement displacement mapping here
// Let n = normal = (x, y, z)
// Vector t = (x*y/sqrt(x*x+z*z),sqrt(x*x+z*z),z*y/sqrt(x*x+z*z))
// Vector b = n cross product t
// Matrix TBN = [t b n]
// dU = kh * kn * (h(u+1/w,v)-h(u,v))
// dV = kh * kn * (h(u,v+1/h)-h(u,v))
// Vector ln = (-dU, -dV, 1)
// Position p = p + kn * n * h(u,v)
// Normal n = normalize(TBN * ln)


Eigen::Vector3f result_color = {0, 0, 0};

for (auto& light : lights)
{
// TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular*
// components are. Then, accumulate that result on the *result_color* object.


}

return result_color * 255.f;
}


Eigen::Vector3f bump_fragment_shader(const fragment_shader_payload& payload)
{

Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
Eigen::Vector3f kd = payload.color;
Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

auto l1 = light{{20, 20, 20}, {500, 500, 500}};
auto l2 = light{{-20, 20, 0}, {500, 500, 500}};

std::vector<light> lights = {l1, l2};
Eigen::Vector3f amb_light_intensity{10, 10, 10};
Eigen::Vector3f eye_pos{0, 0, 10};

float p = 150;

Eigen::Vector3f color = payload.color;
Eigen::Vector3f point = payload.view_pos;
Eigen::Vector3f normal = payload.normal;


float kh = 0.2, kn = 0.1;

// TODO: Implement bump mapping here
// Let n = normal = (x, y, z)
// Vector t = (x*y/sqrt(x*x+z*z),sqrt(x*x+z*z),z*y/sqrt(x*x+z*z))
// Vector b = n cross product t
// Matrix TBN = [t b n]
// dU = kh * kn * (h(u+1/w,v)-h(u,v))
// dV = kh * kn * (h(u,v+1/h)-h(u,v))
// Vector ln = (-dU, -dV, 1)
// Normal n = normalize(TBN * ln)


Eigen::Vector3f result_color = {0, 0, 0};
result_color = normal;

return result_color * 255.f;
}

int main(int argc, const char** argv)
{
std::vector<Triangle*> TriangleList;

float angle = 140.0;
bool command_line = false;

std::string filename = "output.png";
objl::Loader Loader;
// std::string obj_path = "../models/spot/";
std::string obj_path = "/home/cs18/Documents/share/Assignment3/Assignment3/models/spot/";

// Load .obj File
// bool loadout = Loader.LoadFile("../models/spot/spot_triangulated_good.obj");
bool loadout = Loader.LoadFile("/home/cs18/Documents/share/Assignment3/Assignment3/models/spot/spot_triangulated_good.obj");
for(auto mesh:Loader.LoadedMeshes)
{
for(int i=0;i<mesh.Vertices.size();i+=3)
{
Triangle* t = new Triangle();
for(int j=0;j<3;j++)
{
t->setVertex(j,Vector4f(mesh.Vertices[i+j].Position.X,mesh.Vertices[i+j].Position.Y,mesh.Vertices[i+j].Position.Z,1.0));
t->setNormal(j,Vector3f(mesh.Vertices[i+j].Normal.X,mesh.Vertices[i+j].Normal.Y,mesh.Vertices[i+j].Normal.Z));
t->setTexCoord(j,Vector2f(mesh.Vertices[i+j].TextureCoordinate.X, mesh.Vertices[i+j].TextureCoordinate.Y));
}
TriangleList.push_back(t);
}
}

rst::rasterizer r(700, 700);

auto texture_path = "hmap.jpg";
r.set_texture(Texture(obj_path + texture_path));

std::function<Eigen::Vector3f(fragment_shader_payload)> active_shader = phong_fragment_shader;

if (argc >= 2)
{
command_line = true;
filename = std::string(argv[1]);

if (argc == 3 && std::string(argv[2]) == "texture")
{
std::cout << "Rasterizing using the texture shader\n";
active_shader = texture_fragment_shader;
texture_path = "spot_texture.png";
r.set_texture(Texture(obj_path + texture_path));
}
else if (argc == 3 && std::string(argv[2]) == "normal")
{
std::cout << "Rasterizing using the normal shader\n";
active_shader = normal_fragment_shader;
}
else if (argc == 3 && std::string(argv[2]) == "phong")
{
std::cout << "Rasterizing using the phong shader\n";
active_shader = phong_fragment_shader;
}
else if (argc == 3 && std::string(argv[2]) == "bump")
{
std::cout << "Rasterizing using the bump shader\n";
active_shader = bump_fragment_shader;
}
else if (argc == 3 && std::string(argv[2]) == "displacement")
{
std::cout << "Rasterizing using the displacement shader\n";
active_shader = displacement_fragment_shader;
}
}

Eigen::Vector3f eye_pos = {0,0,10};

r.set_vertex_shader(vertex_shader);
r.set_fragment_shader(active_shader);

int key = 0;
int frame_count = 0;

if (command_line)
{
r.clear(rst::Buffers::Color | rst::Buffers::Depth);
r.set_model(get_model_matrix(angle));
r.set_view(get_view_matrix(eye_pos));
r.set_projection(get_projection_matrix(45.0, 1, 0.1, 50));

r.draw(TriangleList);
cv::Mat image(700, 700, CV_32FC3, r.frame_buffer().data());
image.convertTo(image, CV_8UC3, 1.0f);
cv::cvtColor(image, image, cv::COLOR_RGB2BGR);

cv::imwrite(filename, image);

return 0;
}

while(key != 27)
{
r.clear(rst::Buffers::Color | rst::Buffers::Depth);

r.set_model(get_model_matrix(angle));
r.set_view(get_view_matrix(eye_pos));
r.set_projection(get_projection_matrix(45.0, 1, 0.1, 50));

//r.draw(pos_id, ind_id, col_id, rst::Primitive::Triangle);
r.draw(TriangleList);
cv::Mat image(700, 700, CV_32FC3, r.frame_buffer().data());
image.convertTo(image, CV_8UC3, 1.0f);
cv::cvtColor(image, image, cv::COLOR_RGB2BGR);

cv::imshow("image", image);
cv::imwrite(filename, image);
key = cv::waitKey(10);

if (key == 'a' )
{
angle -= 0.1;
}
else if (key == 'd')
{
angle += 0.1;
}

}
return 0;
}

Projection Function

Fill in the get_projection_matrix() projection function in main.cpp:

Eigen::Matrix4f get_projection_matrix(float eye_fov, float aspect_ratio, float zNear, float zFar)
{
// Students will implement this function
Eigen::Matrix4f projection = Eigen::Matrix4f::Identity();

// TODO: Implement this function
// Create the projection matrix for the given parameters.
// Then return it.

// Matrix ortho

auto angle = static_cast<float>(eye_fov / 180.0f * MY_PI);

float top = zNear * std::tan(angle / 2);
float bot = -top;
float right = top * aspect_ratio;
float left = -right;
float near = -std::abs(zNear);
float far = -std::abs(zFar);

Eigen::Matrix4f ortho = Eigen::Matrix4f::Identity();
Eigen::Matrix4f scale;
Eigen::Matrix4f trans;

// scale: map the frustum cuboid's extents to the canonical [-1, 1] cube
scale << 2 / (right - left), 0, 0, 0,
0, 2 / (top - bot), 0, 0,
0, 0, 2 / (near - far), 0,
0, 0, 0, 1;

// trans: move the cuboid's center to the origin (applied first)
trans << 1, 0, 0, -(right + left) / 2,
0, 1, 0, -(top + bot) / 2,
0, 0, 1, -(near + far) / 2,
0, 0, 0, 1;

ortho = scale * trans;

// Matrix persp2ortho

float A = near + far;
float B = -near * far;

Eigen::Matrix4f persp2ortho = Eigen::Matrix4f::Identity();

persp2ortho << near, 0, 0, 0,
0, near, 0, 0,
0, 0, A, B,
0, 0, 1, 0;

Eigen::Matrix4f mirror;
mirror << 1, 0, 0, 0,
0, 1, 0, 0,
0, 0, -1, 0,
0, 0, 0, 1;

// Matrix presp
projection = mirror * ortho * persp2ortho;

return projection;
}

Interpolation Function

Compared with the previous assignment, the interpolation now additionally covers normals and textures:

//Screen space rasterization
void rst::rasterizer::rasterize_triangle(const Triangle& t, const std::array<Eigen::Vector3f, 3>& view_pos)
{
// TODO: From your HW3, get the triangle rasterization code.
// TODO: Inside your rasterization loop:
// * v[i].w() is the vertex view space depth value z.
// * Z is interpolated view space depth for the current pixel
// * zp is depth between zNear and zFar, used for z-buffer

// float Z = 1.0 / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w());
// float zp = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w() + gamma * v[2].z() / v[2].w();
// zp *= Z;

auto v = t.toVector4();

int min_x = std::min(std::min(v[0].x(), v[1].x()), v[2].x());
int min_y = std::min(std::min(v[0].y(), v[1].y()), v[2].y());
int max_x = std::max(std::max(v[0].x(), v[1].x()), v[2].x());
int max_y = std::max(std::max(v[0].y(), v[1].y()), v[2].y());

for (int x = min_x; x <= max_x; x++) {
for (int y = min_y; y <= max_y; y++) {
if (insideTriangle(x + 0.5, y + 0.5, t.v)) {
auto [alpha, beta, gamma] = computeBarycentric2D(x + 0.5, y + 0.5, t.v);

float Z = 1.0 / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w());
float zp = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w() + gamma * v[2].z() / v[2].w();
zp *= Z;

if (zp < depth_buf[get_index(x, y)]) {
depth_buf[get_index(x, y)] = zp;

auto interpolated_color = interpolate(alpha, beta, gamma, t.color[0], t.color[1], t.color[2], 1);
auto interpolated_normal = interpolate(alpha, beta, gamma, t.normal[0], t.normal[1], t.normal[2], 1);
auto interpolated_texcoords = interpolate(alpha, beta, gamma, t.tex_coords[0], t.tex_coords[1], t.tex_coords[2], 1);
auto interpolated_shadingcoords = interpolate(alpha, beta, gamma, view_pos[0], view_pos[1], view_pos[2], 1);

// Use: fragment_shader_payload payload( interpolated_color, interpolated_normal.normalized(), interpolated_texcoords, texture ? &*texture : nullptr);
// Use: payload.view_pos = interpolated_shadingcoords;
// Use: Instead of passing the triangle's color directly to the frame buffer, pass the color to the shaders first to get the final color;
// Use: auto pixel_color = fragment_shader(payload);
fragment_shader_payload payload(interpolated_color, interpolated_normal.normalized(), interpolated_texcoords, texture ? &*texture : nullptr);
payload.view_pos = interpolated_shadingcoords;
auto pixel_color = fragment_shader(payload);

set_pixel(Eigen::Vector2i(x, y), pixel_color);
}
}
}
}
}

Then run in the terminal:

mkdir build
cd ./build
cmake ..
make

./Rasterizer output.png normal

The Blinn-Phong Model

If the Blinn-Phong model still feels unclear, take a look at LearnOpenGL's Basic Lighting chapter; it explains things very clearly and pairs well with the course.

Ambient Lighting

The ambient term is a constant, computed as:

$$
L_a = k_a I_a
$$

Eigen::Vector3f ambient = ka.cwiseProduct(amb_light_intensity);
Diffuse Lighting

The diffuse term is computed as:

$$
L_d = k_d (I / r^2) \max(0, n \cdot l)
$$

Eigen::Vector3f diffuse = kd.cwiseProduct(light.intensity/r.dot(r)) * std::max(0.0f, n.dot(l));
Specular Lighting

$$
L_s = k_s (I / r^2) \max(0, n \cdot h)^p
$$

Eigen::Vector3f specular = ks.cwiseProduct(light.intensity/r.dot(r)) * std::pow(std::max(0.0f, n.dot(h)), p);

Don't forget the exponent p; I forgot it at first, which made the rendered model too bright and washed out the specular highlight.

The complete phong_fragment_shader() function is:

Eigen::Vector3f phong_fragment_shader(const fragment_shader_payload& payload)
{
Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
Eigen::Vector3f kd = payload.color;
Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

auto l1 = light{{20, 20, 20}, {500, 500, 500}};
auto l2 = light{{-20, 20, 0}, {500, 500, 500}};

std::vector<light> lights = {l1, l2};
Eigen::Vector3f amb_light_intensity{10, 10, 10};
Eigen::Vector3f eye_pos{0, 0, 10};

float p = 150;

Eigen::Vector3f color = payload.color;
Eigen::Vector3f point = payload.view_pos;
Eigen::Vector3f normal = payload.normal;

Eigen::Vector3f result_color = {0, 0, 0};
for (auto& light : lights)
{
// TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular*
// components are. Then, accumulate that result on the *result_color* object.
auto n = normal.normalized();
auto r = light.position - point;
auto l = (light.position - point).normalized();
auto v = (eye_pos - point).normalized();
auto h = (l + v).normalized();

Eigen::Vector3f ambient = ka.cwiseProduct(amb_light_intensity);
Eigen::Vector3f diffuse = kd.cwiseProduct(light.intensity/r.dot(r)) * std::max(0.0f, n.dot(l));
Eigen::Vector3f specular = ks.cwiseProduct(light.intensity/r.dot(r)) * std::pow(std::max(0.0f, n.dot(h)), p);

result_color += ambient + diffuse + specular;
}

return result_color * 255.f;
}

Then run in the terminal:

./Rasterizer output.png phong

Texture

texture_fragment_shader() is much the same; it only needs to look up the texture color at the fragment's texture coordinates first:

Eigen::Vector3f texture_fragment_shader(const fragment_shader_payload& payload)
{
Eigen::Vector3f return_color = {0, 0, 0};
if (payload.texture)
{
// TODO: Get the texture value at the texture coordinates of the current fragment
return_color = payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y());
}
Eigen::Vector3f texture_color;
texture_color << return_color.x(), return_color.y(), return_color.z();

Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
Eigen::Vector3f kd = texture_color / 255.f;
Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

auto l1 = light{{20, 20, 20}, {500, 500, 500}};
auto l2 = light{{-20, 20, 0}, {500, 500, 500}};

std::vector<light> lights = {l1, l2};
Eigen::Vector3f amb_light_intensity{10, 10, 10};
Eigen::Vector3f eye_pos{0, 0, 10};

float p = 150;

Eigen::Vector3f color = texture_color;
Eigen::Vector3f point = payload.view_pos;
Eigen::Vector3f normal = payload.normal;

Eigen::Vector3f result_color = {0, 0, 0};

for (auto& light : lights)
{
// TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular*
// components are. Then, accumulate that result on the *result_color* object.
auto n = normal.normalized();
auto r = light.position - point;
auto l = (light.position - point).normalized();
auto v = (eye_pos - point).normalized();
auto h = (l + v).normalized();

Eigen::Vector3f ambient = ka.cwiseProduct(amb_light_intensity);
Eigen::Vector3f diffuse = kd.cwiseProduct(light.intensity/r.dot(r)) * std::max(0.0f, n.dot(l));
Eigen::Vector3f specular = ks.cwiseProduct(light.intensity/r.dot(r)) * std::pow(std::max(0.0f, n.dot(h)), p);

result_color += ambient + diffuse + specular;
}

return result_color * 255.f;
}

Then run in the terminal:

./Rasterizer output.png texture

Bump mapping

Normal/bump mapping is covered rather quickly in the lecture; LearnOpenGL's Normal Mapping chapter is a good supplement.

kh * kn is an influence coefficient (a constant, with values defined above) describing how strongly the texture normal affects the actual object; it is the same thing as the c1c2 from the lecture.

Eigen::Vector3f bump_fragment_shader(const fragment_shader_payload& payload)
{

Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
Eigen::Vector3f kd = payload.color;
Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

auto l1 = light{{20, 20, 20}, {500, 500, 500}};
auto l2 = light{{-20, 20, 0}, {500, 500, 500}};

std::vector<light> lights = {l1, l2};
Eigen::Vector3f amb_light_intensity{10, 10, 10};
Eigen::Vector3f eye_pos{0, 0, 10};

float p = 150;

Eigen::Vector3f color = payload.color;
Eigen::Vector3f point = payload.view_pos;
Eigen::Vector3f normal = payload.normal;

float kh = 0.2, kn = 0.1;

// TODO: Implement bump mapping here
// Let n = normal = (x, y, z)
auto n = normal;
auto x = n.x();
auto y = n.y();
auto z = n.z();

// Vector t = (x*y/sqrt(x*x+z*z),sqrt(x*x+z*z),z*y/sqrt(x*x+z*z))
// Vector b = n cross product t
// Matrix TBN = [t b n]
Eigen::Vector3f t = {x * y / sqrt(x * x + z * z), sqrt(x * x + z * z), z * y / sqrt(x * x + z * z)};
Eigen::Vector3f b = n.cross(t);
Eigen::Matrix3f TBN;
TBN << t.x(), b.x(), n.x(),
t.y(), b.y(), n.y(),
t.z(), b.z(), n.z();

// dU = kh * kn * (h(u+1/w,v)-h(u,v))
// dV = kh * kn * (h(u,v+1/h)-h(u,v))
auto u = payload.tex_coords.x();
auto v = payload.tex_coords.y();
auto w = payload.texture->width;
auto h = payload.texture->height;

auto dU = kh * kn * (payload.texture->getColor(u + 1.0f / float(w), v).norm() - payload.texture->getColor(u, v).norm());
auto dV = kh * kn * (payload.texture->getColor(u, v + 1.0f / float(h)).norm() - payload.texture->getColor(u, v).norm());

// Vector ln = (-dU, -dV, 1)
// Normal n = normalize(TBN * ln)
Eigen::Vector3f ln = {-dU, -dV, 1.0f};
normal = (TBN * ln).normalized();

Eigen::Vector3f result_color = {0, 0, 0};
result_color = normal;

return result_color * 255.f;
}

norm() is a function in the Eigen library that computes a vector's norm: the square root of the sum of the squares of all its elements. A vector's norm measures its magnitude; a norm is essentially a distance, and it exists so that things can be compared. getColor() returns a vector of color values, where (color[0], color[1], color[2]) corresponds to RGB, while dU and dV are plain float values, not vectors. To obtain the real-valued height that h() represents, we use norm() to map the color vector to a real number.

Then run in the terminal:

./Rasterizer output.png bump

Displacement mapping

Compared with bump mapping, displacement mapping adds one step that modifies point:

point += kn * n * payload.texture->getColor(u, v).norm();

After computing the new normal from the bump texture's UV, the vertex must actually move: add the one extra line from the comments, then update the normal.

Eigen::Vector3f displacement_fragment_shader(const fragment_shader_payload& payload)
{

Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
Eigen::Vector3f kd = payload.color;
Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

auto l1 = light{{20, 20, 20}, {500, 500, 500}};
auto l2 = light{{-20, 20, 0}, {500, 500, 500}};

std::vector<light> lights = {l1, l2};
Eigen::Vector3f amb_light_intensity{10, 10, 10};
Eigen::Vector3f eye_pos{0, 0, 10};

float p = 150;

Eigen::Vector3f color = payload.color;
Eigen::Vector3f point = payload.view_pos;
Eigen::Vector3f normal = payload.normal;

float kh = 0.2, kn = 0.1;

// TODO: Implement displacement mapping here
// Let n = normal = (x, y, z)
auto n = normal;
auto x = normal.x();
auto y = normal.y();
auto z = normal.z();

// Vector t = (x*y/sqrt(x*x+z*z),sqrt(x*x+z*z),z*y/sqrt(x*x+z*z))
// Vector b = n cross product t
// Matrix TBN = [t b n]
Eigen::Vector3f t = {x * y / sqrt(x * x + z * z), sqrt(x * x + z * z), z * y / sqrt(x * x + z * z)};
Eigen::Vector3f b = n.cross(t);
Eigen::Matrix3f TBN;
TBN << t.x(), b.x(), n.x(),
t.y(), b.y(), n.y(),
t.z(), b.z(), n.z();

// dU = kh * kn * (h(u+1/w,v)-h(u,v))
// dV = kh * kn * (h(u,v+1/h)-h(u,v))
auto u = payload.tex_coords.x();
auto v = payload.tex_coords.y();
auto w = payload.texture->width;
auto h = payload.texture->height;

auto dU = kh * kn * (payload.texture->getColor(u + 1.0f / float(w), v).norm() - payload.texture->getColor(u, v).norm());
auto dV = kh * kn * (payload.texture->getColor(u, v + 1.0f / float(h)).norm() - payload.texture->getColor(u, v).norm());

// Vector ln = (-dU, -dV, 1)
// Position p = p + kn * n * h(u,v)
// Normal n = normalize(TBN * ln)
Eigen::Vector3f ln = {-dU, -dV, 1.0f};
point += kn * n * payload.texture->getColor(u, v).norm();
normal = (TBN * ln).normalized();

Eigen::Vector3f result_color = {0, 0, 0};

for (auto& light : lights)
{
// TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular*
// components are. Then, accumulate that result on the *result_color* object.

n = normal.normalized();
auto r = light.position - point;
auto v = (eye_pos - point).normalized();
auto l = (light.position - point).normalized();
auto h = (l + v).normalized();

auto ambient = ka.cwiseProduct(amb_light_intensity);
auto diffuse = kd.cwiseProduct(light.intensity / r.dot(r)) * std::max(0.0f, n.dot(l));
auto specular = ks.cwiseProduct(light.intensity / r.dot(r)) * std::pow(std::max(0.0f, n.dot(h)), p);

result_color += ambient + diffuse + specular;
}

return result_color * 255.f;
}

Then run in the terminal:

./Rasterizer output.png displacement

Troubleshooting

C++ Standard Setting

I didn't run into this problem myself, but while looking into other issues I saw classmates on the forum discussing that some newer language features could not be used.

If you use Visual Studio, choose a higher standard such as C++17/20 in the project settings; if, like me, you use the virtual machine, you can modify the CMake configuration instead:

cmake_minimum_required(VERSION 3.10)
project(Rasterizer)

find_package(OpenCV REQUIRED)

set(CMAKE_CXX_STANDARD 17)

include_directories(/usr/local/include ./include)

add_executable(Rasterizer main.cpp rasterizer.hpp rasterizer.cpp global.hpp Triangle.hpp Triangle.cpp Texture.hpp Texture.cpp Shader.hpp OBJ_Loader.h)
target_link_libraries(Rasterizer ${OpenCV_LIBRARIES})
#target_compile_options(Rasterizer PUBLIC -Wall -Wextra -pedantic)

Model and Texture Path Setup

The framework ships with relative paths; left unmodified, they produce the following error:

OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/imgproc/src/color.cpp, line 9716
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/imgproc/src/color.cpp:9716: error: (-215) scn == 3 || scn == 4 in function cvtColor

Changing the relative paths to absolute paths fixes it; don't forget the trailing /:

int main(int argc, const char** argv)
{
std::vector<Triangle*> TriangleList;

float angle = 140.0;
bool command_line = false;

std::string filename = "output.png";
objl::Loader Loader;
- std::string obj_path = "../models/spot/";
+ std::string obj_path = "/home/cs18/Documents/share/Assignment3/Assignment3/models/spot/";

// Load .obj File
- bool loadout = Loader.LoadFile("../models/spot/spot_triangulated_good.obj");
+ bool loadout = Loader.LoadFile("/home/cs18/Documents/share/Assignment3/Assignment3/models/spot/spot_triangulated_good.obj");
......
}

Inverted Model Problem

If you copy the projection function from the earlier assignments as-is, the model's rear ends up facing the camera:

Eigen::Matrix4f get_projection_matrix(float eye_fov, float aspect_ratio, float zNear, float zFar)
{
// Students will implement this function
Eigen::Matrix4f projection = Eigen::Matrix4f::Identity();

// TODO: Implement this function
// Create the projection matrix for the given parameters.
// Then return it.

// Matrix ortho

auto angle = static_cast<float>(eye_fov / 180.0f * MY_PI);

float top = zNear * std::tan(angle / 2);
float bot = -top;
float right = top * aspect_ratio;
float left = -right;
float near = -abs(zNear);
float far = -abs(zFar);

Eigen::Matrix4f ortho = Eigen::Matrix4f::Identity();
Eigen::Matrix4f trans(4,4);
Eigen::Matrix4f scale(4,4);

trans << 2 / (right - left), 0, 0, 0,
0, 2 / (top - bot), 0, 0,
0, 0, 2 / (near - far), 0,
0, 0, 0, 1;

scale << 1, 0, 0, -(right + left) / 2,
0, 1, 0, -(top + bot) / 2,
0, 0, 1, -(near + far) / 2,
0, 0, 0, 1;

ortho = scale * trans;

// Matrix persp2ortho

float A = near + far;
float B = -near * far;

Eigen::Matrix4f persp2ortho = Eigen::Matrix4f::Identity();

persp2ortho << near, 0, 0, 0,
0, near, 0, 0,
0, 0, A, B,
0, 0, 1, 0;

// Matrix presp
projection = ortho * persp2ortho;

return projection;
}

For this I followed the solution from GAMES101作业3-遇到的各种问题及解决方法, adding a mirror matrix that flips the Z axis:

Eigen::Matrix4f get_projection_matrix(float eye_fov, float aspect_ratio, float zNear, float zFar)
{
// Students will implement this function
Eigen::Matrix4f projection = Eigen::Matrix4f::Identity();

// TODO: Implement this function
// Create the projection matrix for the given parameters.
// Then return it.

// Matrix ortho

auto angle = static_cast<float>(eye_fov / 180.0f * MY_PI);

float top = zNear * std::tan(angle / 2);
float bot = -top;
float right = top * aspect_ratio;
float left = -right;
float near = -abs(zNear);
float far = -abs(zFar);

Eigen::Matrix4f ortho = Eigen::Matrix4f::Identity();
Eigen::Matrix4f trans(4,4);
Eigen::Matrix4f scale(4,4);

trans << 2 / (right - left), 0, 0, 0,
0, 2 / (top - bot), 0, 0,
0, 0, 2 / (near - far), 0,
0, 0, 0, 1;

scale << 1, 0, 0, -(right + left) / 2,
0, 1, 0, -(top + bot) / 2,
0, 0, 1, -(near + far) / 2,
0, 0, 0, 1;

ortho = scale * trans;

// Matrix persp2ortho

float A = near + far;
float B = -near * far;

Eigen::Matrix4f persp2ortho = Eigen::Matrix4f::Identity();

persp2ortho << near, 0, 0, 0,
0, near, 0, 0,
0, 0, A, B,
0, 0, 1, 0;

+ Eigen::Matrix4f mirror;
+ mirror << 1, 0, 0, 0,
+ 0, 1, 0, 0,
+ 0, 0, -1, 0,
+ 0, 0, 0, 1;

// Matrix presp
+ projection = mirror * ortho * persp2ortho;

return projection;
}

Texture Coordinate Clamping

Some of the texture coordinates for the provided model fall outside the [0, 1] range, so we need to add a clamping step in Texture.hpp:

//
// Created by LEI XU on 4/27/19.
//

#ifndef RASTERIZER_TEXTURE_H
#define RASTERIZER_TEXTURE_H
#include "global.hpp"
#include <eigen3/Eigen/Eigen>
#include <opencv2/opencv.hpp>
class Texture{
private:
cv::Mat image_data;

public:
Texture(const std::string& name)
{
image_data = cv::imread(name);
cv::cvtColor(image_data, image_data, cv::COLOR_RGB2BGR);
width = image_data.cols;
height = image_data.rows;
}

int width, height;

Eigen::Vector3f getColor(float u, float v)
{
+ if(u < 0) u = 0;
+ if(v < 0) v = 0;
+ if(u > 1) u = 1;
+ if(v > 1) v = 1;

auto u_img = u * width;
auto v_img = (1 - v) * height;
auto color = image_data.at<cv::Vec3b>(v_img, u_img);
return Eigen::Vector3f(color[0], color[1], color[2]);
}

};
#endif //RASTERIZER_TEXTURE_H

Transposed Tangent-Space Matrix

Writing the TBN matrix straight from the comments, I produced its transpose:

Eigen::Matrix3f TBN;
TBN << t.x(), t.y(), t.z(),
b.x(), b.y(), b.z(),
n.x(), n.y(), n.z();

Later, while browsing reference solutions, I realized this was wrong: the whole model came out matte. It should be:

Eigen::Matrix3f TBN;
TBN << t.x(), b.x(), n.x(),
t.y(), b.y(), n.y(),
t.z(), b.z(), n.z();

Missing Accumulation

Browsing the forum, I found another common mistake: assigning the final result instead of accumulating it:

First, in the Blinn-Phong model, result_color must accumulate the contribution of every light source:

    for (auto& light : lights)
{
......
- result_color = ambient + diffuse + specular;
+ result_color += ambient + diffuse + specular;
}

Likewise, in displacement mapping the vertex point must accumulate the texture's effect rather than being assigned it:

    // Vector ln = (-dU, -dV, 1)
// Position p = p + kn * n * h(u,v)
// Normal n = normalize(TBN * ln)
Eigen::Vector3f ln = {-dU, -dV, 1.0f};
- point = kn * n * payload.texture->getColor(u, v).norm();
+ point += kn * n * payload.texture->getColor(u, v).norm();
normal = (TBN * ln).normalized();

This mistake makes the rendered model darker than it should be.

References