Adding support for cube maps

Cube maps:

Cube mapping is a method of environment mapping that uses the six faces of a cube as the map shape. The environment is projected onto the sides of a cube and stored as six square textures, or unfolded into six regions of a single texture. The cube map can then be sampled to create reflections that appear to come from the surrounding scene.

To get reflections from a cube map, we first reflect the ray from the camera to the fragment about the surface normal, using the HLSL intrinsic reflect(). Once we have the reflected ray, we sample the cube map in that direction to get the desired reflection, as sketched below and shown in the screenshots.
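
A minimal sketch of this, assuming D3D11-style bindings; the names and register assignments are illustrative, not the engine's actual ones:

```hlsl
TextureCube  g_environmentMap : register(t0); // hypothetical binding
SamplerState g_sampler        : register(s0);

float3 SampleEnvironment(float3 worldPosition, float3 normal, float3 cameraPosition)
{
    // Ray from the camera to the shaded point
    float3 viewDir = normalize(worldPosition - cameraPosition);
    // reflect() mirrors the incident ray about the surface normal
    float3 reflected = reflect(viewDir, normal);
    // Sample the cube map in the reflected direction
    return g_environmentMap.Sample(g_sampler, reflected).rgb;
}
```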

Update: I made a mistake when calculating the Fresnel term for the reflections. Instead of using the vector reflected from the camera ray, I was using the normal vector, which caused the white-out you can see below. I've added a new video after the screenshot which shows the correct reflections.

Creating metals:

Metals have a very high reflectance and reflect light in such a way that the reflected light is tinted with the color of the metal. So we have to multiply the metal's color into both the sampled environment-map value and the specular light. Metals have essentially no diffuse response because of their high reflectance, so we can ignore the diffuse term, as in the sketch below. You can see the gold shader in the screenshot. The white point at the center is the specular highlight from the directional light.
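
A sketch of that combination, assuming the environment color and specular light were computed as described above (the function and parameter names are mine, not the engine's):

```hlsl
// metalColor tints both the environment reflection and the specular
// highlight; there is no diffuse term for metals.
float3 ShadeMetal(float3 metalColor, float3 environmentColor, float3 specularLight)
{
    return metalColor * (environmentColor + specularLight);
}
```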

Adding support for UI in the engine

As we know, almost all games contain some sort of UI. In my engine, I have added support for images to be shown as UI sprites. The sprite is basically a quad drawn with four vertices, and we change the size of the quad from the game side. Since the quad geometry itself never changes, I create a quad the size of the viewport at the start of the game and adjust the size of each individual UI element later. From the game, we send the size, which is between 0 and 1 for both width and height, and the position of the element, which is between -1 and 1 on the x and y axes, with the origin at the center. The mapping is sketched below.
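
In other words, for a base quad vertex (xq, yq) in the -1 to 1 range, a sprite of size (w, h) at position (px, py) ends up at (this is just one way to realize the mapping, not necessarily the engine's exact convention):

x' = w * xq + px
y' = h * yq + py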

The UI quad is drawn as a triangle strip. The data for a triangle strip consists of only the vertex information. Since we do not send index data, which would cost sizeof(index) * number of indices bytes, triangle strips are more efficient than regular indexed meshes. When drawing the quads, I draw all the previous meshes first, switch the primitive topology to triangle strips, and then draw the quads, which you can see in the screenshot below. A common vertex order for the strip is sketched after this paragraph.
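
One common corner order for a viewport-sized quad drawn as a two-triangle strip, shown here as an HLSL constant array purely for illustration (in the engine this data lives in a vertex buffer):

```hlsl
// Strip order 0,1,2,3 yields the two triangles (0,1,2) and (2,1,3)
static const float2 kQuadCorners[4] =
{
    float2(-1.0f,  1.0f), // 0: top-left
    float2( 1.0f,  1.0f), // 1: top-right
    float2(-1.0f, -1.0f), // 2: bottom-left
    float2( 1.0f, -1.0f), // 3: bottom-right
};
```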

Since the UI has zero depth, the transformation matrix is already in projected space. Currently, the values that I require the game to send are used directly as the transform values of this projected-space matrix. I pass this matrix to my vertex shader, and in the fragment shader I sample the material's texture and output the color.
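
A minimal sketch of the pair of shaders; the names and register assignments are assumptions for illustration, not the engine's actual code:

```hlsl
cbuffer SpriteConstants : register(b0)
{
    // Already in projected space: built from the game-supplied
    // size (0 to 1) and position (-1 to 1)
    float4x4 g_transform;
};

Texture2D    g_spriteTexture : register(t0);
SamplerState g_sampler       : register(s0);

struct VSInput  { float2 position : POSITION;    float2 uv : TEXCOORD0; };
struct VSOutput { float4 position : SV_POSITION; float2 uv : TEXCOORD0; };

VSOutput VSMain(VSInput input)
{
    VSOutput output;
    // No view/projection multiply needed; the transform is already projected
    output.position = mul(float4(input.position, 0.0f, 1.0f), g_transform);
    output.uv = input.uv;
    return output;
}

float4 PSMain(VSOutput input) : SV_TARGET
{
    // Just sample the sprite's texture and output the color
    return g_spriteTexture.Sample(g_sampler, input.uv);
}
```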

Each sprite has its own material and transform, but shares the vertex buffer and vertex layout. The vertex layout contains only two elements, position and UV data, so we are also being efficient by not sending color, normals, etc. Currently I am using four floats per vertex, but we could make the struct even smaller by using int8_t instead of float, since we will only ever use the values -1, 0, and 1 for all information in the struct.
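
A sketch of what that smaller layout could look like from the shader's point of view; the SNORM formats named in the comments are my assumption for how the 8-bit data would be bound, not something the engine currently does:

```hlsl
struct VSInputPacked
{
    // Could be bound as an R8G8_SNORM vertex buffer: 8-bit signed
    // normalized values exactly represent -1, 0, and 1
    float2 position : POSITION;
    // Likewise R8G8_SNORM (or UNORM), since UVs only use 0 and 1
    float2 uv       : TEXCOORD0;
};
```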

Changing the color space from sRGB to linear

sRGB:

sRGB (standard Red Green Blue) is an RGB color space that HP and Microsoft created cooperatively in 1996 to use on monitors, printers, and the Internet. It is often the default color space for images.

Even though our lighting math needs to be linear, the sRGB values that monitors expect are non-linear. GPUs already have hardware support to convert to sRGB while sending output to the monitor. Hence we send in data in linear space and perform all calculations in linear space.

From the previous assignments, we need to convert all color values before passing them to the shaders. Most colors that we use come from other applications, which author them in sRGB, so we convert them to linear in our C++ graphics pipeline using the standard sRGB decode below.
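
For reference, the standard per-channel sRGB-to-linear decode, with c in [0, 1] (a plain gamma of 2.2 is a common approximation of the same curve):

linear = c / 12.92                     if c <= 0.04045
linear = ((c + 0.055) / 1.055)^2.4     otherwise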

Below are the before and after images of the same scene, with and without the sRGB-to-linear conversion:

PBR!!!!!! and gloss maps

PBR, or physically based rendering, is the process of rendering using a more physically accurate model of the flow of light. This results in images that look more photorealistic compared to just having diffuse and specular lighting. To implement PBR, we update our lighting equations to use a BRDF (bidirectional reflectance distribution function). The BRDF for diffuse light is almost the same as the one we were using previously, so we need not update that. But the BRDF for specular light is updated to use the Fresnel equation. The full specular BRDF is given as

f(l,v) = [D(h) * F(l,h) * G(l,v,h)] / [4 * |n⋅v| * |n⋅l|]

The Blinn-Phong distribution D(h) and the Fresnel term F(l,h) consider only the active microfacet normals instead of a single normal for the whole surface, so the final output looks more natural. G(l,v,h) is the geometry term, which accounts for microfacets shadowing and masking one another. A sketch of the specular term is below.
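
A sketch of that specular BRDF, assuming a normalized Blinn-Phong distribution for D, Schlick's approximation for F, and the implicit geometry term G = |n.l| * |n.v| (which cancels most of the denominator). The smoothness-to-exponent mapping here is a hypothetical choice, not necessarily the engine's:

```hlsl
static const float kPi = 3.14159265f;

float3 SpecularBRDF(float3 n, float3 l, float3 v, float3 f0, float smoothness)
{
    float3 h = normalize(l + v); // half vector between light and view

    // Hypothetical mapping from [0,1] smoothness to a Blinn-Phong exponent
    float specPower = exp2(smoothness * 11.0f) + 2.0f;

    // D(h): normalized Blinn-Phong distribution of active microfacet normals
    float d = (specPower + 2.0f) / (2.0f * kPi)
            * pow(saturate(dot(n, h)), specPower);

    // F(l,h): Schlick's approximation of the Fresnel reflectance
    float3 f = f0 + (1.0f - f0) * pow(1.0f - saturate(dot(l, h)), 5.0f);

    // Implicit G cancels |n.v| * |n.l| in the denominator, leaving only the 4
    return d * f / 4.0f;
}
```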

Below you can see the change in specular details when the material smoothness is changed.

Gloss Map / Smoothness Map / !(Roughness Map):

Instead of changing the smoothness of the whole surface, we can change it on parts of the surface using a gloss map. The gloss map is a texture, but it is encoded as greyscale using BC4_UNORM. We sample the gloss map the same way as we sample any other texture and use the resulting value as the smoothness for that fragment, as in the sketch below. You can see a gloss map in action in the screenshots.
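
A minimal sketch of the sampling, with illustrative binding names; BC4 is a single-channel format, so only the red channel carries data:

```hlsl
Texture2D    g_glossMap : register(t1); // hypothetical register
SamplerState g_sampler  : register(s0);

float SampleSmoothness(float2 uv)
{
    // Greyscale gloss value in [0,1], used as this fragment's smoothness
    return g_glossMap.Sample(g_sampler, uv).r;
}
```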

Update: I have updated the gloss map, since the previous one was hard to visualize.