Final Project – Creating Multiple Views for Different Cameras

For the final project for my graphics class, I wanted to finish something I had attempted for the final project in the previous semester: split-screen gaming. I could not finish it then, but with better knowledge now, I set out to complete it. My goal was to recreate the viewport selector screen in Maya, where you can select which view you want to use to view the object. It looks like the image below.

The four different parts are different viewports, and you can select which viewport you want from this screen.
To create the above view, I first needed to create five different viewports: one for each square and one for the final selected camera view. We could also have resized a single viewport, but this was much easier to implement. Since our window is 512 × 512, each of the quadrant viewports is 256 × 256.

viewPort[0].TopLeftX = 0.0f;
viewPort[0].TopLeftY = 0.0f;
viewPort[0].Width = static_cast<float>(i_resolutionWidth);
viewPort[0].Height = static_cast<float>(i_resolutionHeight);
viewPort[0].MinDepth = 0.0f;
viewPort[0].MaxDepth = 1.0f;

viewPort[1].TopLeftX = 0.0f;
viewPort[1].TopLeftY = 0.0f;
viewPort[1].Width = i_resolutionWidth / 2.0f;
viewPort[1].Height = i_resolutionHeight / 2.0f;
viewPort[1].MinDepth = 0.0f;
viewPort[1].MaxDepth = 1.0f;

viewPort[2].TopLeftX = 256.0f;
viewPort[2].TopLeftY = 0.0f;
viewPort[2].Width = i_resolutionWidth / 2.0f;
viewPort[2].Height = i_resolutionHeight / 2.0f;
viewPort[2].MinDepth = 0.0f;
viewPort[2].MaxDepth = 1.0f;

Above is a sample of the viewports I created; viewport 0 is the default full-window viewport. After creating the viewports, we need to create transforms for each mesh in projected space for each of the cameras. So I calculate and save the per-draw-call data, which contains the transforms and the positions of the cameras, and also calculate the local-to-projected-space matrix for each camera. Since we will not be rendering all four camera views every frame, I only calculate these matrices when the camera views are being shown, which reduces the amount of computation required.
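The local-to-projected concatenation per camera is just a chain of matrix multiplies. Below is a minimal sketch of that idea; the flat row-major `Mat4` type and the function names are illustrative assumptions, not my engine's actual math library.

```cpp
#include <array>

// Row-major 4x4 matrix as a flat array (illustrative stand-in for the engine's math type).
using Mat4 = std::array<float, 16>;

Mat4 Multiply(const Mat4& a, const Mat4& b)
{
    Mat4 c{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                c[row * 4 + col] += a[row * 4 + k] * b[k * 4 + col];
    return c;
}

// Per camera: concatenate local -> world -> camera -> projected space.
Mat4 LocalToProjected(const Mat4& localToWorld, const Mat4& worldToCamera,
                      const Mat4& cameraToProjected)
{
    return Multiply(cameraToProjected, Multiply(worldToCamera, localToWorld));
}
```

This is what gets recomputed per camera; skipping it when the extra views are hidden is what saves the computation mentioned above.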

After we create these transforms, it is time to draw. When drawing to multiple viewports, you need to draw all the meshes multiple times, since the matrix transformations for each camera view are different. Below is sample code for switching viewports and drawing the meshes.

I have created a small function in the graphics helper class we set up in the first semester to change viewports based on the value I send.

void GraphicsHelper::SetViewPort(const unsigned int i_ViewPortNumber)
{
	constexpr unsigned int viewPortCount = 1;
	direct3dImmediateContext->RSSetViewports(viewPortCount, &viewPort[i_ViewPortNumber]);
}

if (showMultipleCameras)
{
	for (auto k = 1; k < 5; k++)
	{
		s_constantBuffer_perDrawCall.Update(&s_renderSecondCam[k - 1].constantData_perDrawCallNewCam[index]);
		s_helper->SetViewPort(k);
		if (currentMeshIndex != meshIndex)
		{
			currentMeshIndex = meshIndex;
			// ... bind the new mesh before drawing ...
		}
		// ... draw the mesh for this viewport ...
	}
}

You can see the multiple viewports in action below

The first viewport is the perspective mode, where the player can move the camera freely; the second is top-down, where the camera moves only along the Y axis; the third is front, where the camera moves only along the Z axis; and the fourth is side, where the camera moves along the X axis. The controls are given below.

Working with lighting.

Since specular light with PBR is calculated using the angle between the camera and the fragment, I had an issue where, when showing all four viewports, I was calculating the specular and environmental reflections based only on the perspective camera, which resulted in wrong lighting values and reflections. To rectify that, I am currently passing the per-frame data per draw call, which is inefficient but was the fastest way to fix the issue without changing all my render commands. I am working on making this more efficient by updating both my render commands and shaders.

What I learned from this class:

I have learned the meanings of various terms in graphics programming and won't look like a noob when other people mention texels or PBR. I also better understand the graphics pipeline itself and how to optimize its various stages to get the most out of it. I have created some cool shaders and shown how to create various illusions using textures. I also learned about the various transformations needed to draw a 3D object onto a 2D surface (the screen), which was helpful when doing this project.

Download: Link


Space – Shows all viewports; when pressed again, switches back to the previously active view. The following keys work only in the view selector.

1 – Switches to perspective mode.
2 – Switches to Top-down mode.
3 – Switches to Front mode.
4 – Switches to Side mode.

Once switched to the given mode:
In perspective mode:

W, S – Move the camera in Z.
A, D – Move Camera in X.
Q, E – Move Camera in Y.
Z, X – Rotate camera around Y.
I, K – Move the first gecko in Z.
J, L – Move gecko in X.
U, O – Move gecko in Y.

In Top-down mode:
W, S – Move the camera in Y.
I, K – Move the first gecko in Z.
J, L – Move gecko in X.

In Front mode:
W, S – Move the camera in Z.
J, L – Move gecko in X.
U, O – Move gecko in Y.

In Side mode:
W, S – Move the camera in X.
I, K – Move the first gecko in Z.
U, O – Move gecko in Y.

Adding support for cube maps

Cube maps:

Cube mapping is a method of environment mapping that uses the six faces of a cube as the map shape. The environment is projected onto the sides of a cube and stored as six square textures, or unfolded into six regions of a single texture. The cube map can then be sampled to create reflections that look like they belong in the scene.

To get reflections from the cube map, we first find the reflection of the ray from the camera to the fragment, using the HLSL reflect() intrinsic. Once we have the reflected ray, we sample the cube map in that direction to get the desired reflection. This can be seen in the screenshots below.
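HLSL's reflect(i, n) computes r = i - 2 * dot(n, i) * n for an incident vector i and a normalized normal n. A C++ sketch of the same math (the `Vec3` type is an assumption for illustration):

```cpp
struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Same formula HLSL's reflect(i, n) uses: r = i - 2 * dot(n, i) * n.
// n must be normalized; i points from the camera toward the surface.
Vec3 Reflect(const Vec3& i, const Vec3& n)
{
    const float d = 2.0f * Dot(n, i);
    return { i.x - d * n.x, i.y - d * n.y, i.z - d * n.z };
}
```

The resulting vector is the direction used to sample the cube map.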

Update: I made a mistake when calculating the Fresnel term for reflections. Instead of using the vector reflected from the camera, I was using the normal vector, which caused the white-out you can see below. I've added a new video after the screenshot that shows the correct reflections.

Creating metals:

Metals have a very high reflectance and reflect light in such a way that the reflected light takes on the color of the metal. So we multiply the metal's color into both the sampled environment-map value and the specular light. Metals have essentially no diffuse light because of their high reflectance, so we can ignore that term. You can see the gold shader below. The white point at the center is the specular highlight from the directional light.

Adding Support for UI in engine.

As we know, almost all games contain some sort of UI. In my engine, I have added support for images to be shown as UI sprites. A sprite is basically a quad drawn with four vertices, and we change the size of the quad from the game side. Since the size of the quad is constant, I create a quad the size of the viewport at the start of the game and adjust the size of each individual UI element later. From the game, we send the size, which is between 0 and 1 for both width and height, and the position of the element, which is between -1 and 1 on the X and Y axes, with the origin at the center.

The UI quad is drawn as a triangle strip. The data for a triangle strip consists of only the vertex information. Since we do not send the index data, which would cost sizeof(index) * number of indices, triangle strips are more efficient than indexed meshes. When drawing the quads, I draw all the other meshes first, change the primitive topology to triangle strip, and then draw the quads, as you can see in the screenshot below.

Since the UI has zero depth, the transformation matrix is already in projected space. The values I require the game to send are used directly as the transform values in the projection-space matrix. I store this matrix for my vertex shader, and in the fragment shader I sample the material's texture and output the color.

Each sprite has its own material and transform but shares the vertex buffer and vertex layout. The vertex layout contains only two elements: position and UV data. So we are also being efficient by not sending color, normals, etc. Currently I am using four floats, but we could make the struct even smaller by using int8_t instead of float, since we only use values of -1, 0, and 1 for all the information in the struct.
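A sketch of how the four strip vertices can be derived from the game-side size and position values; the `UIVertex` layout and function name are illustrative assumptions:

```cpp
#include <array>

// Position + UV only, matching the two-element vertex layout described above.
struct UIVertex { float x, y, u, v; };

// Builds the 4 vertices of a UI quad as a triangle strip in projected space.
// width/height are in [0, 1] (fraction of the screen); centerX/centerY in [-1, 1].
std::array<UIVertex, 4> MakeUIQuad(float width, float height, float centerX, float centerY)
{
    // The full screen spans [-1, 1] (extent 2), so a size of `width` screens
    // corresponds to a half-extent of exactly `width` in projected space.
    const float hw = width;
    const float hh = height;
    return {{
        { centerX - hw, centerY + hh, 0.0f, 0.0f },  // top-left
        { centerX + hw, centerY + hh, 1.0f, 0.0f },  // top-right
        { centerX - hw, centerY - hh, 0.0f, 1.0f },  // bottom-left
        { centerX + hw, centerY - hh, 1.0f, 1.0f },  // bottom-right
    }};
}
```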

Changing the color scheme from sRGB to linear


sRGB (standard Red Green Blue) is an RGB color space that HP and Microsoft created cooperatively in 1996 to use on monitors, printers, and the Internet. It is often the default color space for images.

Even though lighting calculations should be done in linear space, monitor output is non-linear (sRGB). GPUs already have hardware support to convert the output to sRGB while sending it to the monitor. Hence we send in data in linear space and perform all calculations in linear space.

Because of this, we need to convert the color values from the previous assignments when passing them to shaders. Most colors that we use come from other applications, which author them in sRGB, so we convert them to linear in our C++ graphics pipeline.
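The conversion in both directions follows the standard sRGB transfer function; a C++ sketch:

```cpp
#include <cmath>

// Standard sRGB -> linear transfer function (per channel, input in [0, 1]).
float SrgbToLinear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f
                           : std::pow((s + 0.055f) / 1.055f, 2.4f);
}

// Inverse: linear -> sRGB, the conversion the hardware applies on output.
float LinearToSrgb(float l)
{
    return (l <= 0.0031308f) ? l * 12.92f
                             : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
}
```

Note that mid-range values shift noticeably: sRGB 0.5 is only about 0.21 in linear space, which is why lighting math done directly on sRGB values comes out wrong.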

Below are the before and after images of the same scene with and without sRGB:

PBR!!!!!! and gloss maps

PBR, or physically based rendering, is the process of rendering using a more accurate model of the flow of light. This results in images that look more photorealistic than just having diffuse and specular lighting. To implement PBR, we update our lighting equations to use a BRDF (bidirectional reflectance distribution function). The BRDF for diffuse light is almost the same as the one we were using previously, so we need not update it. But the BRDF for specular light is updated to use the Fresnel equation. The total specular BRDF is given as

f(l,v) = [D(h) * F(l,h) * G(l,v,h)] / [4 * |n⋅v| * |n⋅l|]

The distribution term D(h) (Blinn-Phong in our case) and the Fresnel term F(l,h) consider only the active microfacet normals instead of the normal of the whole surface, so the final output looks more natural.
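As a sketch, the D and F terms can look like this in C++, using the normalized Blinn-Phong distribution and Schlick's approximation of the Fresnel term; the function names and parameters are illustrative:

```cpp
#include <cmath>

// Normalized Blinn-Phong normal distribution function D(h).
// nDotH = saturated dot(N, H); alpha is the smoothness exponent.
float DistributionBlinnPhong(float nDotH, float alpha)
{
    const float pi = 3.14159265f;
    return (alpha + 2.0f) / (2.0f * pi) * std::pow(nDotH, alpha);
}

// Schlick's approximation of the Fresnel term F(l, h).
// f0 = reflectance at normal incidence; lDotH = saturated dot(L, H).
float FresnelSchlick(float f0, float lDotH)
{
    return f0 + (1.0f - f0) * std::pow(1.0f - lDotH, 5.0f);
}
```

Schlick's approximation captures the key Fresnel behavior: at normal incidence the reflectance is just f0, and at grazing angles it approaches 1 for any material.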

Below you can see the change in specular details when the material smoothness is changed.

Gloss Map / Smoothness Map / !(Roughness Map):

Instead of changing the smoothness of the whole surface, we can change it per fragment using a gloss map. The gloss map is a texture, but it is encoded in greyscale using BC4_UNORM. We sample the gloss map the same way we sample any texture and use the resulting value as the smoothness for that fragment. Below you can see a gloss map in action.

Update: I've updated the gloss map, since the previous one was hard to visualize.

Adding Support for normal maps

As discussed in a previous post when adding diffuse lighting, we defined normals as the vectors perpendicular to the triangle drawn. We have also discussed how we can have multiple types of textures apart from color textures. Normal maps are a type of texture used to store normal values instead of the usual color values in the RGB channels. These normal maps give a surface higher detail by faking bumps or ridges on the surface.

While building normal maps as textures, we need to make sure we are not building them as sRGB, since the data in the texture is not color. Then we can add the normal map to our material data. Below is the way I am storing it in the human-readable material file.

Material =
EffectLocation = "Effects/Effect1.effect",
ConstantType = "float4",
ConstantName = "Color",
ConstantValue = {1,1,1,1},
TextureLocation = "Textures/earth.jpg",
NormalMapLocation = "Textures/earth_normal.jpg",

First, we need to change our mesh importer from Maya to import tangents and bitangents, which we pass on to the shader. In the shader, we first map the XYZ coordinates from [0,1] to [-1,1] using (2 * value) - 1. After mapping, we build a TBN (tangent, bitangent, normal) matrix, which we use to calculate the final normal:

tangent.x, bitangent.x, normal.x,
tangent.y, bitangent.y, normal.y,
tangent.z, bitangent.z, normal.z

Finally, we multiply the incoming normal by this matrix and use the resulting normal wherever we would normally use our normals.
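The unpack-and-transform steps above can be sketched in C++ as follows; the `Vec3` type and function names are illustrative, not the shader code itself:

```cpp
struct Vec3 { float x, y, z; };

// Maps a sampled normal-map value from [0, 1] into [-1, 1]: (2 * value) - 1.
Vec3 UnpackNormal(const Vec3& sampled)
{
    return { 2.0f * sampled.x - 1.0f,
             2.0f * sampled.y - 1.0f,
             2.0f * sampled.z - 1.0f };
}

// Transforms the unpacked tangent-space normal by the TBN matrix whose
// columns are the (normalized) tangent, bitangent, and surface normal.
Vec3 TransformByTBN(const Vec3& n, const Vec3& t, const Vec3& b, const Vec3& sn)
{
    return { t.x * n.x + b.x * n.y + sn.x * n.z,
             t.y * n.x + b.y * n.y + sn.y * n.z,
             t.z * n.x + b.z * n.y + sn.z * n.z };
}
```

A useful sanity check: the "flat" normal-map texel (0.5, 0.5, 1.0) unpacks to (0, 0, 1) and, after the TBN transform, comes out as the unperturbed surface normal.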

Below we can see the normal map working. The first is a test normal map: you can see the letters "RAISEN" appear raised and "SUNKEN" appear low. This normal map was given by our professor, but I had to invert its green channel to make it work in my engine; the resulting normal map looks like the following.

I've created a normal map from a low-resolution QR code image I had. The resulting normal map looks like the following, and you can see it in action below.

Update: Changed the QR code to another texture because of the low resolution. Below you can see the updated video.

And another example is the earth texture; you can see mountain ranges as we rotate the globe. I couldn't find the source website, so I'm linking the texture here for download. The source was free to download and distribute.

Adding Support for Specular lighting and point lights

Specular reflection: Specular reflection means that the object reflects light perfectly, such that the viewer can see the light source reflected in the object. But in that perfect scenario, the viewer can only see the specular reflection from one angle, where the angle of the incident light from the source equals the angle to the viewer.

To mitigate this, we use an approximation for specular reflection known as the Blinn-Phong shading model. The specular term is given as (N · H)^α, where N is the normalized normal and H is the normalized half vector between the light direction and the view direction. The exponent α represents how smooth an object is: the higher the value of α, the sharper the reflection. Specular light is additive to the other lights present in the scene.
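A C++ sketch of that half-vector computation; the `Vec3` helpers are illustrative assumptions:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 Add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 Normalize(const Vec3& v)
{
    const float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Blinn-Phong specular: (N . H)^alpha, with H the normalized half vector
// between the direction to the light and the direction to the viewer.
float BlinnPhongSpecular(const Vec3& normal, const Vec3& toLight,
                         const Vec3& toView, float alpha)
{
    const Vec3 h = Normalize(Add(Normalize(toLight), Normalize(toView)));
    const float nDotH = std::fmax(Dot(Normalize(normal), h), 0.0f);
    return std::pow(nDotH, alpha);
}
```

When the view direction lines up with the mirror reflection of the light, H coincides with N and the term peaks at 1; it falls off quickly as the directions diverge, and faster for larger α.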

Specular highlight: The specular highlight is the bright spot of light that appears where the view direction comes closest to the reflection of the incident light. The smoother the material, the sharper the highlight. Below you can see the specular light and highlight in action; the highlight moves as the camera moves. This happens because of the equation above, which depends on the half vector between the view direction and the light direction.

Below we can see different amounts of shininess. The first is with an exponent of 50 and the second with an exponent of 10.


Point light: Point lights approximate real-world light bulbs in that they emit light in all directions from a single point. They are the most commonly used sources for approximating real-world lights. My engine currently supports only one point light, though a real scene can contain many. Below we can see the point light in action in my scene.

Adding transparency support for materials.

Alpha Transparency:

To enable alpha blending, we add the flag to our effect file as shown below

Effect =
VertexShaderLocation = "Shaders/Vertex/standard.shader",
FragmentShaderLocation = "Shaders/Fragment/standard.shader",
VertexInputLayoutShaderLocation = "Shaders/Vertex/vertexInputLayout.shader",
AlphaTransparency = "true",
DepthBuffering = "true",

After adding alpha transparency, we need to make sure that we render all meshes whose effects have transparency turned off before rendering meshes with it turned on. To enable that, I've changed my render commands so that meshes with non-transparent effects, which I call independent draw calls, are drawn first, and meshes with transparent effects, called dependent draw calls, are drawn after them.

Also, since a transparent effect has to show what's behind it, we draw dependent draw calls from back to front instead of front to back as we do with other draw calls. Below is a screenshot of transparent meshes being rendered and a video of the order in which they are rendered.
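The back-to-front ordering is just a sort on camera distance; a minimal sketch, where the `DrawCall` record and its fields are illustrative assumptions rather than my engine's actual render-command type:

```cpp
#include <algorithm>
#include <vector>

// Illustrative draw-call record; distance is from the camera to the mesh.
struct DrawCall
{
    int meshId;
    float distanceFromCamera;
};

// Transparent ("dependent") draw calls are sorted back to front, so each
// mesh blends against everything already drawn behind it.
void SortTransparentBackToFront(std::vector<DrawCall>& calls)
{
    std::sort(calls.begin(), calls.end(),
              [](const DrawCall& a, const DrawCall& b)
              { return a.distanceFromCamera > b.distanceFromCamera; });
}
```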

Adding Support for Materials in Engine


A material in my engine specifies which shaders an object has to use, along with some material constants such as the color. Currently my materials support effects and colors, but as you can see below, there is also a field for specifying textures; support for textures will come at a later date.

[Update] I’ve removed the requirement to give the path to the binary file. So now the material file takes in just the path to the human-readable file. The filename changes from .effectbinary to .effect.

Material =
EffectLocation = "Effects/Effect1.Effect",
ConstantType = "float4",
ConstantName = "Color",
ConstantValue = {1,1,1,1},
TextureLocation = "Textures/Texture1.texture",

Similar to the mesh builder and effect builder we had before, I created a material builder, which takes in this file and outputs a binary file as shown below.

As you can see in the human-readable file above, I have a constant "Color", which can be specified in the file. If there is no value for the constant, I default it to white.

I then specify, in the game, which material a game object has to use. I have a material class which reads the binary file into a material. To submit data from a material to the shaders, we use a per-material constant buffer. After this, we can submit just meshes and materials instead of meshes and effects.

The final result looks like this.

The reason we see black trees is that the constant color in that material is set to red, which is {1,0,0,1}. The output color per fragment is calculated as

output color = vertexColor * material color;

Here vertexColor is green, as this tree uses the same mesh as the other trees. So we get {0,1,0,1} * {1,0,0,1}, which gives us {0,0,0,1}: black.
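That per-fragment multiply is just a componentwise product; a quick sketch confirming the arithmetic (the `Color` type and function name are illustrative):

```cpp
struct Color { float r, g, b, a; };

// Componentwise multiply of the interpolated vertex color and material color.
Color Modulate(const Color& vertexColor, const Color& materialColor)
{
    return { vertexColor.r * materialColor.r,
             vertexColor.g * materialColor.g,
             vertexColor.b * materialColor.b,
             vertexColor.a * materialColor.a };
}
```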

After adding materials, we now sort the meshes first by effect and then by material. Since all my meshes use the same effect, they are effectively sorted by material, as shown below.


Working With Textures

Filtering of textures:

In texture mapping, we use filtering to smooth textures when they are displayed larger or smaller than they actually are. The most commonly used method is bilinear filtering. Bilinear filtering uses bilinear interpolation, which is like linear interpolation but in 2D: it interpolates between the four texels nearest the current pixel. In our engine, we default to bilinear filtering, but we can also change it to no filtering (nearest neighbor) if we need to. Below you can see the difference between bilinear filtering and no filtering.

With filtering:


As you can observe in the image without filtering, the edges on the numbers are more pronounced when compared to the image with filtering.


Mipmaps are images that are the same as the texture image but at lower resolutions. The height and width of each mipmap level are half those of the previous level, with the lowest level being a 1×1 image. Mipmaps increase rendering speed and help reduce aliasing. Aliasing occurs when a sampled image is reconstructed at the wrong resolution. Mipmaps reduce this by providing a level near each display resolution, which reduces aliasing when the texture is viewed at that resolution.
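The number of levels in a full mip chain follows directly from the halving rule above; a small sketch:

```cpp
#include <algorithm>

// Number of mip levels in a full chain: halve the larger dimension until 1x1,
// counting each level along the way (including the base level).
unsigned int MipLevelCount(unsigned int width, unsigned int height)
{
    unsigned int levels = 1;
    unsigned int size = std::max(width, height);
    while (size > 1)
    {
        size /= 2;
        ++levels;
    }
    return levels;
}
```

For example, a 512 × 512 texture has a 10-level chain: 512, 256, 128, 64, 32, 16, 8, 4, 2, 1.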

With Mipmaps:


Having no mipmaps keeps the texture at the same resolution as the original image, which results in the output not being filtered and smoothed. Below you can see all the mipmap levels for the car texture.

Adding more detail to geometry with alpha channel:

We can make the edges of images sharper by discarding fragments whose alpha value is less than a cutoff. We perform this check in the fragment shader using HLSL's clip() intrinsic (or the discard statement).
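A sketch of the alpha cutoff in HLSL; the texture, sampler, and cutoff names here are illustrative rather than my engine's actual identifiers:

```hlsl
// In the fragment shader: sample the texture, then discard fragments whose
// alpha falls below the cutoff. clip(x) discards the fragment when x < 0.
const float4 sampledColor = g_texture.Sample(g_sampler, i_uv);
const float alphaCutoff = 0.5;
clip(sampledColor.a - alphaCutoff);
```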

Simple Animations:

Since UVs are just locations used to index into the texture, we can change how we sample to achieve some simple animations. One simple animation is flowing water: the water is only a static texture, but we sample it as a function of time, which makes it look like it is flowing.

float2(cos(i_textureData_local.x + g_elapsedSecondCount_simulationTime), i_textureData_local.y);

The above line shows the code that achieves this effect. We could do this in either the vertex or the fragment shader, but I am doing it in the vertex shader as it is less expensive than doing it per fragment. Also, for this to work, the image needs to tile.

Another effect is having the image rotate around an axis.
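A sketch of that rotation effect, spinning the UVs around the texture center over time; it reuses the input names from the snippet above, and the rotation-matrix form is a standard 2D rotation:

```hlsl
// Rotate the UV coordinates around the texture center (0.5, 0.5) over time.
const float angle = g_elapsedSecondCount_simulationTime;
const float2 centered = i_textureData_local - float2(0.5, 0.5);
const float2 rotated = float2(
    centered.x * cos(angle) - centered.y * sin(angle),
    centered.x * sin(angle) + centered.y * cos(angle));
const float2 o_textureData = rotated + float2(0.5, 0.5);
```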