Final Project – Creating Multiple Views for Different Cameras

For the final project in my graphics class, I wanted to finish something I had attempted in the previous semester: split-screen gaming. I could not complete it then, but with better knowledge now, I set out to finish it. My goal for the final project was to recreate the viewport selector screen in Maya, where you can select which view you want to use to view the object. It looks like the image below.

The four quadrants are different viewports, and you can select which viewport you want from this screen.
To create the above view, I first needed to create five different viewports: one for each square and one for the final selected camera view. I could also have resized a single viewport, but this was much easier to implement. Since the window is 512 x 512, each of the four smaller viewports is 256 x 256.

// Viewport 0: the default full-window viewport
viewPort[0].TopLeftX = 0.0f;
viewPort[0].TopLeftY = 0.0f;
viewPort[0].Width = static_cast<float>(i_resolutionWidth);
viewPort[0].Height = static_cast<float>(i_resolutionHeight);
viewPort[0].MinDepth = 0.0f;
viewPort[0].MaxDepth = 1.0f;

// Viewport 1: top-left quadrant
viewPort[1].TopLeftX = 0.0f;
viewPort[1].TopLeftY = 0.0f;
viewPort[1].Width = static_cast<float>(i_resolutionWidth) / 2.0f;
viewPort[1].Height = static_cast<float>(i_resolutionHeight) / 2.0f;
viewPort[1].MinDepth = 0.0f;
viewPort[1].MaxDepth = 1.0f;

// Viewport 2: top-right quadrant
viewPort[2].TopLeftX = static_cast<float>(i_resolutionWidth) / 2.0f;
viewPort[2].TopLeftY = 0.0f;
viewPort[2].Width = static_cast<float>(i_resolutionWidth) / 2.0f;
viewPort[2].Height = static_cast<float>(i_resolutionHeight) / 2.0f;
viewPort[2].MinDepth = 0.0f;
viewPort[2].MaxDepth = 1.0f;

Above is a sample of the viewports I created; viewport 0 is the default full-window viewport. After creating the viewports, we need transforms for each mesh in projected space for each of the cameras. So I calculate and save the per-draw-call data, which contains the transforms and the positions of the cameras, and also compute the local-to-projected-space matrix for each camera. Since we will not be rendering all four camera views every frame, I only calculate these matrices while those camera views are actually being shown, which reduces the amount of computation required.

With these transforms in place, it is time to draw. When drawing to multiple viewports, you need to draw all the meshes multiple times, since the matrix transformations for each camera view are different. Below is sample code for switching viewports and drawing the meshes.

I added a small function to the graphics helper class we set up in the first semester that changes viewports based on the value I pass in.

constexpr unsigned int viewPortCount = 1;
direct3dImmediateContext->RSSetViewports(viewPortCount, &viewPort[i_ViewPortNumber]);

if (showMultipleCameras)
{
	for (auto k = 1; k < 5; k++)
	{
		s_constantBuffer_perDrawCall.Update(&s_renderSecondCam[k - 1].constantData_perDrawCallNewCam[index]);
		s_helper->SetViewPort(k);
		if (currentMeshIndex != meshIndex)
		{
			currentMeshIndex = meshIndex;
			// ... bind the new mesh before issuing the draw call ...
		}
	}
}

You can see the multiple viewports in action below.

The first viewport is the perspective mode, where the player can move the camera freely; the second is top-down, where the camera moves only along the Y axis; the third is front, where the camera moves only along the Z axis; and the fourth is side, where the camera moves along the X axis. The controls are given below.

Working with lighting.

Since specular light with PBR is calculated using the angle between the camera and the fragment, I had an issue where, when showing all four viewports, I was calculating the specular and environmental reflections based only on the perspective camera, which resulted in wrong lighting values and reflections. To rectify that, I am currently passing the per-frame data per draw call, which is really inefficient, but it was the fastest way to fix the issue without changing all my render commands. I am working on making this more efficient by updating both my render commands and shaders.

What I learned from this class:

I have learned the meaning of various terms in graphics programming and won’t look like a noob when other people mention texels or PBR. I also better understand the graphics pipeline itself and how to optimize its various stages to get the most out of it. I created some cool shaders and showed how to create various illusions using textures. I also learned about the transformations needed to draw a 3D object onto the 2D surface that is the screen, which was helpful when doing this project.

Download: Link


Space: Shows all viewports; pressing it again switches back to the previously active view. The following keys work only in the view selector.

1 – Switches to Perspective mode.
2 – Switches to Top-down mode.
3 – Switches to Front mode.
4 – Switches to Side mode.

Once switched to the given mode:
In perspective mode:

W, S – Move the camera in Z.
A, D – Move the camera in X.
Q, E – Move the camera in Y.
Z, X – Rotate the camera around Y.
I, K – Move the first gecko in Z.
J, L – Move the gecko in X.
U, O – Move the gecko in Y.

In Top-down mode:
W, S – Move the camera in Y.
I, K – Move the first gecko in Z.
J, L – Move the gecko in X.

In Front mode:
W, S – Move the camera in Z.
J, L – Move the gecko in X.
U, O – Move the gecko in Y.

In Side mode:
W, S – Move the camera in X.
I, K – Move the first gecko in Z.
U, O – Move the gecko in Y.

Adding support for cube maps

Cube maps:

Cube mapping is a method of environment mapping that uses the six faces of a cube as the map shape. The environment is projected onto the sides of a cube and stored as six square textures, or unfolded into six regions of a single texture. The cube map can then be sampled to create reflections that look like they belong in the scene.

To get reflections from the cube map, we first find the reflection of the ray from the camera to the fragment, using the HLSL function reflect(). Once we have the reflected ray, we sample the cube map in that direction to get the desired reflection. This can be seen in the screenshots below.
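The math behind reflect() can be sketched in C++ (a minimal stand-in for the HLSL intrinsic; the Vec3 type here is just for illustration, not an engine type):

```cpp
#include <cassert>

// Minimal 3-component vector; stands in for HLSL's float3.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Same math as HLSL's reflect(i, n): i is the incident direction,
// n must be a unit-length surface normal.
static Vec3 reflect(const Vec3& i, const Vec3& n)
{
    const float d = 2.0f * dot(i, n);
    return { i.x - d * n.x, i.y - d * n.y, i.z - d * n.z };
}
```

The resulting direction is then used as the lookup vector when sampling the cube map.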

Update: I made a mistake when calculating the reflected Fresnel equation. Instead of using the vector reflected from the camera, I was using the normal vector, which caused the white-out you can see below. I’ve added a new video after the screenshot which shows the correct reflections.

Creating metals:

Metals have a very high reflectance and reflect light in such a way that the reflected light takes on the color of the metal. So we multiply the metal’s color into both the sampled environment-map data and the specular light. Metals have no diffuse contribution because of their high reflectance, so we can ignore that value. You can see the gold shader below. The white point at the center is the specular highlight from the directional light.
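The combination described above can be sketched like this (illustrative names, not the engine’s actual shader variables):

```cpp
#include <cassert>

struct Color { float r, g, b; };

// Sketch of the metal shading described above: the metal's color tints
// both the environment-map sample and the specular term, and the
// diffuse term is dropped entirely.
static Color ShadeMetal(const Color& metalColor,
                        const Color& envSample,
                        const Color& specular)
{
    return { metalColor.r * (envSample.r + specular.r),
             metalColor.g * (envSample.g + specular.g),
             metalColor.b * (envSample.b + specular.b) };
}
```

With a gold-ish metal color, the reflection and the highlight both come out gold-tinted, which is the effect shown in the screenshot.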

Adding Support for UI in the Engine

As we know, almost all games contain some sort of UI. In my engine, I have added support for images to be shown as UI sprites. A sprite is basically a quad drawn with four vertices, and we change the size of the quad from the game side. Since the size of the quad is constant, I create a quad the size of the viewport at the start of the game and adjust the size of each individual UI element later. From the game, we send the size, which is between 0 and 1 for both width and height, and the position of the element, which is between -1 and 1 on the x and y axes, with the origin at the center.

The UI quad is drawn as a triangle strip. The data for a triangle strip consists of only the vertex information. Since we do not send the index data, which would be sizeof(index) * number of vertices, triangle strips are more efficient than regular indexed meshes. When drawing the quads, I draw all the other meshes first, change the topology to triangle strips, and then draw the quads, which you can see in the screenshot below.

Since the UI has zero depth, the transformation matrix is already in projected space. Currently, the values I require the game to send are used as the transform values in the projection matrix. I pass this matrix to my vertex shader, and in the fragment shader I sample the material’s texture and output the color.

Each sprite has its own material and transform but shares the vertex buffer and vertex layout. The vertex layout contains only two elements: position and UV data. So we are also being efficient by not sending color, normals, etc. Currently I am using four floats, but we could make the struct even smaller by using int8_t instead of float, since we only ever use the values -1, 0, and 1 in the struct.
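The size difference is easy to sketch (the packed layout assumes the values really are limited to -1, 0, and 1; Direct3D would also need a matching signed-integer input-element format, which is not shown here):

```cpp
#include <cassert>
#include <cstdint>

// Current layout: two floats for position, two for UV = 16 bytes.
struct UIVertexFloat
{
    float x, y;
    float u, v;
};

// Proposed layout: int8_t is enough to hold -1, 0, and 1,
// shrinking each vertex from 16 bytes to 4.
struct UIVertexSmall
{
    std::int8_t x, y;
    std::int8_t u, v;
};

static_assert(sizeof(UIVertexSmall) < sizeof(UIVertexFloat),
              "the packed layout should be smaller");
```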

Changing the color scheme from sRGB to linear


sRGB (standard Red Green Blue) is an RGB color space that HP and Microsoft created cooperatively in 1996 to use on monitors, printers, and the Internet. It is often the default color space for images.

Monitors expect non-linear sRGB output, but lighting math only works correctly in linear space. GPUs already have hardware support to convert linear values to sRGB when sending output to the monitor, so we send in data in linear space and perform all our calculations in linear space.

Building on the previous assignments, we need to convert all color values before passing them to shaders. Most colors we use come from other applications, which author them in sRGB, so we convert them to linear on the C++ side of our graphics pipeline.
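The conversion itself is the standard sRGB transfer function; here is a sketch of the C++-side helper (the exact code in my engine may differ):

```cpp
#include <cassert>
#include <cmath>

// Exact sRGB -> linear transfer function for one channel in [0, 1].
// Small values use the linear segment; the rest use the 2.4 power curve.
static float SrgbToLinear(float c)
{
    return (c <= 0.04045f)
        ? c / 12.92f
        : std::pow((c + 0.055f) / 1.055f, 2.4f);
}
```

For example, mid-grey sRGB 0.5 converts to roughly 0.214 in linear space, which is why colors look washed out if the conversion is skipped.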

Below are before and after images of the same scene, with and without the sRGB-to-linear conversion:

PBR!!!!!! and gloss maps

PBR, or physically based rendering, is the process of rendering using a more accurate model of the flow of light. This results in images that look more photorealistic than those with only simple diffuse and specular lighting. To implement PBR, we update our lighting equations to use a BRDF (bidirectional reflectance distribution function). The BRDF for diffuse light is almost the same as the one we were using previously, so we need not update it. But the BRDF for specular light is updated to use the Fresnel equation. The total specular BRDF is given as

f(l,v) = [D(h) * F(l,h) * G(l,v,h)] / [4 * |n⋅v| * |n⋅l|]

The normal distribution term D(h) (Blinn-Phong here) and the Fresnel term F(l,h) consider only the active microfacet normals instead of the normal of the whole surface, so the final output looks more natural.
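The Fresnel term F(l,h) is commonly evaluated with Schlick’s approximation; here is a single-channel sketch (this is the usual approximation, not necessarily the exact form used in the class):

```cpp
#include <cassert>
#include <cmath>

// Schlick's approximation of the Fresnel term: reflectance rises from
// F0 at normal incidence toward 1 at grazing angles.
// cosTheta is the (clamped) dot product of l and h.
static float FresnelSchlick(float F0, float cosTheta)
{
    return F0 + (1.0f - F0) * std::pow(1.0f - cosTheta, 5.0f);
}
```

At cosTheta = 1 (looking straight down the normal) it returns F0, and at grazing angles it approaches 1, which is what produces the bright rim reflections on smooth surfaces.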

Below you can see the change in specular details when the material smoothness is changed.

Gloss Map / Smoothness Map / !(Roughness Map):

Instead of changing the smoothness across the whole surface, we can vary it per fragment using a gloss map. The gloss map is a texture, but it is encoded in greyscale using BC4_UNORM. We sample the gloss map the same way we sample a color texture and use the resulting value as the smoothness for that fragment. Below you can see a gloss map in action.

Update: I’ve updated the gloss map, since the previous one was hard to visualize.

Adding Support for normal maps

As discussed in a previous post when adding diffuse lighting, we defined normals as the vectors perpendicular to the triangle being drawn. We have also discussed how we can have multiple types of textures apart from color textures. Normal maps are a type of texture that store normal values, instead of the usual color values, in the RGB channels. They are used to give a surface higher detail by faking bumps or raises on the surface.

When building normal maps as textures, we need to make sure we are not building them as sRGB, since the data in the texture is not color. Then we can add the normal map to our material data. Below is how I store it in the human-readable material file.

Material =
EffectLocation = "Effects/Effect1.effect",
ConstantType = "float4",
ConstantName = "Color",
ConstantValue = {1,1,1,1},
TextureLocation = "Textures/earth.jpg",
NormalMapLocation = "Textures/earth_normal.jpg",

First, we need to change our mesh importer so it imports tangents and bitangents from Maya, which we pass on to the shader. In the shader, we first map the XYZ coordinates from [0,1] to [-1,1] using (2 * value) - 1. After the mapping, we build a TBN (tangent, bitangent, normal) matrix, which we use to calculate the final normal:

tangent.x, bitangent.x, normal.x,
tangent.y, bitangent.y, normal.y,
tangent.z, bitangent.z, normal.z

Finally, we multiply the sampled normal by this matrix and use the result everywhere we previously used the vertex normal.
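The unpack-and-transform step can be sketched in C++ (the shader does the same math with float3s; the Vec3 type here is illustrative):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Remap a normal-map sample from the [0,1] storage range to [-1,1].
static Vec3 UnpackNormal(const Vec3& s)
{
    return { 2.0f * s.x - 1.0f, 2.0f * s.y - 1.0f, 2.0f * s.z - 1.0f };
}

// Transform the unpacked tangent-space normal using the TBN columns
// (tangent, bitangent, normal) supplied by the mesh.
static Vec3 ApplyTBN(const Vec3& t, const Vec3& b, const Vec3& n,
                     const Vec3& sample)
{
    const Vec3 m = UnpackNormal(sample);
    return { t.x * m.x + b.x * m.y + n.x * m.z,
             t.y * m.x + b.y * m.y + n.y * m.z,
             t.z * m.x + b.z * m.y + n.z * m.z };
}
```

A flat normal-map texel (0.5, 0.5, 1) unpacks to (0, 0, 1), so with an identity TBN it leaves the surface normal unchanged, as expected.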

Below we can see the normal map working. The first is a test normal map: the letters “RAISEN” appear raised and “SUNKEN” appears low. This normal map was given by our professor, but I had to invert its green channel to make it work in my engine; the resulting normal map looks like the following.

I’ve also created a normal map from a low-resolution QR code image I had. The resulting normal map looks like the following, and you can see it in action below.

Update: I changed the QR code to another texture because of its low resolution. Below you can see the updated video.

Another example is the earth texture: you can see mountain ranges as the globe rotates. I couldn’t find the source website, so I am linking the texture here for download. The source was free to download and distribute.

Adding Support for Specular lighting and point lights

Specular reflection: with a perfectly reflective object, the viewer can see the light source reflected in the object’s surface. In this ideal case, the specular reflection is visible from only one angle, where the angle of the incident light from the source equals the angle to the viewer.

Since real surfaces are not perfect mirrors, we use an approximation for specular reflection known as the Blinn-Phong shading model. The specular value is given as (N · H)^α, where N is the normalized normal and H is the normalized half vector between the light direction and the view direction. The exponent α represents how smooth an object is: the higher the value of α, the sharper the reflection. Specular light is additive to the other lights present in the scene.
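Here is a sketch of the Blinn-Phong term in C++ (the shader equivalent uses float3 and the HLSL intrinsics; the Vec3 helpers here are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 normalize(const Vec3& v)
{
    const float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Blinn-Phong specular term: H is the half vector between the light
// direction L and the view direction V, and alpha is the shininess
// exponent (higher alpha = sharper highlight).
static float BlinnPhongSpecular(const Vec3& N, const Vec3& L,
                                const Vec3& V, float alpha)
{
    const Vec3 H = normalize({ L.x + V.x, L.y + V.y, L.z + V.z });
    const float nDotH = dot(N, H);
    return (nDotH > 0.0f) ? std::pow(nDotH, alpha) : 0.0f;
}
```

When the half vector lines up with the normal the term is 1 (full highlight), and it falls off quickly as the camera moves away, which is exactly the moving highlight shown in the videos.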

Specular highlight: the specular highlight is the bright spot of light that appears where the half vector is closest to the surface normal. The smoother the material, the sharper the highlight. Below you can see the specular light and highlight in action; the highlight moves when the camera moves. This happens because of the equation above, which depends on the half vector between the view direction and the light direction.

Below we can see different variants of shininess: the first is with an exponent of 50 and the second with an exponent of 10.


Point light: point lights approximate real-world light bulbs in that they emit light in all directions from a single point. They are the most commonly used sources for approximating real-world lighting. My engine currently supports only one point light, though a scene could in principle contain several. Below we can see the point light in action in my scene.
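A common way to model a point light’s falloff with distance is an attenuation curve with constant, linear, and quadratic terms; this sketch is a generic version, and the coefficient values are illustrative rather than my engine’s:

```cpp
#include <cassert>

// A common point-light falloff: intensity decreases roughly with the
// square of the distance. The default coefficients here are commonly
// used example values, not tuned engine constants.
static float PointLightAttenuation(float distance,
                                   float kConstant = 1.0f,
                                   float kLinear = 0.09f,
                                   float kQuadratic = 0.032f)
{
    return 1.0f / (kConstant + kLinear * distance
                   + kQuadratic * distance * distance);
}
```

The light’s color is multiplied by this factor per fragment, so objects close to the bulb are lit brightly and the contribution fades smoothly with distance.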

Decision and Behavior Trees

Decision Making:

Decision making is at the core of all AI in games: it lets the AI choose and execute actions based on various inputs. Some of the most common decision-making algorithms include decision trees, behavior trees, fuzzy logic, and state machines. We will cover the first two in the article below.

The most important data structures and classes that we will use in the game are ActionManager and Action. Action encompasses any action that can be performed by the agent. The action manager keeps track of all the actions that are pending and active and adds or removes them from each queue based on their priority and whether they can interrupt other actions or not. The action manager also checks if an action has been waiting for too long and removes it from the pending list. I also have an AI class which serves as the base class for all the AI present in the game.

The AI class calls the update function of the decision making behavior first and gets the final action to be performed. This action is then added to the pending queue of the action manager of that AI. The action manager then looks at all the tasks in the pending queue and as discussed above, based on the priority either assigns them to be the active action or keeps them in the pending queue.

There is also a Game manager which manages the player and all the AI present in the game which calls the update function of the AI.

Decision Trees:

Decision trees are among the simpler decision-making behaviors, along with state machines. A decision tree contains only two types of nodes: decision nodes and action nodes. A decision node contains the logic to evaluate a condition, while an action node serves as a leaf and triggers a particular action in the game. Both node types derive from decisiontreenode in my implementation.

Each decision node has a true child and a false child, and based on what the condition evaluates to, it calls the respective child. I am extensively using function pointers to set both the function a decision node uses to check the game condition and the action to be performed in an action node, so I do not have to create different node types for different decisions and actions and can use them just by creating objects of the main classes.

The decision tree I am using in my game is a really simple version of a guard behavior: the AI checks whether the player is within a certain radius, and if so, it moves toward the player. Otherwise, the AI performs a random wander action.
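The guard tree can be sketched as follows. I use std::function here for brevity where the engine uses raw function pointers, and the class and action names are illustrative:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>

// Sketch of the two node types described above. The real engine
// classes derive from a common decisiontreenode base.
struct DecisionTreeNode
{
    virtual ~DecisionTreeNode() = default;
    virtual std::string Run() = 0; // returns the action to perform
};

struct ActionNode : DecisionTreeNode
{
    std::string action;
    explicit ActionNode(std::string a) : action(std::move(a)) {}
    std::string Run() override { return action; }
};

struct DecisionNode : DecisionTreeNode
{
    std::function<bool()> condition;       // game-state check
    DecisionTreeNode* trueChild = nullptr;
    DecisionTreeNode* falseChild = nullptr;
    std::string Run() override
    {
        return condition() ? trueChild->Run() : falseChild->Run();
    }
};

// Guard behavior: seek the player when they are close, wander otherwise.
static std::string RunGuard(float playerDistance, float seekRadius)
{
    ActionNode seek("Seek"), wander("Wander");
    DecisionNode root;
    root.condition = [&] { return playerDistance < seekRadius; };
    root.trueChild = &seek;
    root.falseChild = &wander;
    return root.Run();
}
```

Because the condition and action are injected rather than hard-coded, the same two node classes can express any tree shape.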

Behavior Tree:

Behavior trees are one of the more complex decision-making algorithms. Halo 2 was one of the first games to ship with behavior trees, and since then they have become really popular in the games industry. Behavior trees combine many previously existing concepts such as hierarchical state machines, scheduling, planning, and action execution. Where a decision tree has nodes, a behavior tree has tasks. There are many types of tasks, and each can take different kinds of inputs and perform many kinds of actions. Some of the common tasks are the selector, sequencer, inverter, and condition. There are also action tasks, which are similar to action nodes and usually sit at the leaves of the tree. Let us discuss some of these tasks below.

  1. Selector: A selector picks the first of its children to succeed. It returns as soon as one child returns success; otherwise it keeps running through all of its children.
  2. Sequencer: A sequencer is the opposite of a selector in that it keeps running as long as all of its children return success. It exits immediately as soon as one child returns failure and does not continue execution.
  3. Condition: A condition task checks a condition and selects a child based on the outcome. The condition can be a boolean check or something more involved, such as comparing against multiple floats or enum values.
  4. Inverter: An inverter inverts the status returned by its child. Hence it can have only one child, unlike the nodes above, which can have multiple children.
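The composite tasks above can be sketched as follows (a simplified version: the real tasks also report running and stopped statuses, which are omitted here):

```cpp
#include <cassert>
#include <vector>

enum class Status { Success, Failure };

struct Task
{
    virtual ~Task() = default;
    virtual Status Run() = 0;
};

// Selector: returns Success as soon as any child succeeds,
// Failure only if every child fails.
struct Selector : Task
{
    std::vector<Task*> children;
    Status Run() override
    {
        for (Task* child : children)
            if (child->Run() == Status::Success)
                return Status::Success;
        return Status::Failure;
    }
};

// Sequencer: keeps running while children succeed, and stops with
// Failure on the first failing child.
struct Sequencer : Task
{
    std::vector<Task*> children;
    Status Run() override
    {
        for (Task* child : children)
            if (child->Run() == Status::Failure)
                return Status::Failure;
        return Status::Success;
    }
};

// Inverter: flips its single child's status.
struct Inverter : Task
{
    Task* child = nullptr;
    Status Run() override
    {
        return child->Run() == Status::Success ? Status::Failure
                                               : Status::Success;
    }
};

// A leaf task with a fixed result, handy for testing trees.
struct Fixed : Task
{
    Status status;
    explicit Fixed(Status s) : status(s) {}
    Status Run() override { return status; }
};
```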

Implementation of Behavior Tree:

All tasks derive from an abstract base class, Task. Each task then implements the interfaces and the required logic for that task. The behavior tree class itself derives from the Decisionmakingbehavior class, like the decision tree, but includes additional information such as a blackboard used to share data between the various tasks in the tree.

The AI logic behind the behavior tree is similar to that of the decision tree, in that the AI moves randomly but seeks and eats the player when the player gets too close. There are, however, many differences between the two implementations.

Instead of the simple move I use in the decision tree, I use A* to pathfind to the next randomly selected point on the screen. The video below is not long enough to show pathfinding to the next point, but once the AI reaches a point, it selects a new point on the graph. I also use pathfinding to move the AI towards the player when the player moves too close to the AI.

The behavior tree is also more complex compared to the decision tree with mine containing a sequencer node, multiple condition nodes and an inverter node. Below you can see the representation of my behavior tree.

I had to use an inverter node because the way I was retrieving the status of the actions was inverted relative to how the selector works. Return statuses are one of the most important components of behavior trees, as almost all nodes depend on them to perform their tasks. I have four types of statuses: success, failure, running, and stopped.

Decision Tree learning

Learning is the process in which the AI looks at existing data and learns how to behave on its own. So instead of writing logic for the AI, we can write code that learns the logic from existing data, and the AI can create its own. Learning AI has the potential to adapt to each player, learning their tricks and techniques to provide a consistent challenge, and it is a hot topic in the gaming world now. There are many techniques used for learning: some run locally on one machine, while others use the power of servers to perform much deeper learning and provide a much better AI experience. Machine learning in games is one of the most researched topics in game AI today.

In this post, we will keep the learning process simple and create a decision tree out of an existing behavior tree. Decision trees can be learned efficiently, since they are a series of decisions that produce an action based on a set of observations. The trees can be wide or deep and, depending on that, can perform different tasks. Many algorithms exist for decision tree learning, but most are derived from ID3, which we will look at now.

The ID3 algorithm uses a set of examples collected from observations and actions; the observations are called attributes. The main function takes an array of examples, an array of attributes, and a root node to start the tree. The algorithm divides the examples into groups and recursively performs the same action, creating a node for each group, to produce the most efficient tree. The recursion ends when all the examples in a group agree on the same action. The splitting process looks at each attribute, calculates the information gain of splitting on that attribute, and selects the division with the highest information gain as the one most likely to produce an optimized tree.

To pick the best split, ID3 uses the entropy of the actions in each set. Entropy measures the degree to which the actions across all the examples agree with each other: if all examples perform the same action, the entropy is zero; if the possible actions are distributed evenly among the examples, the entropy is at its maximum (one, for two actions). We define information gain as the reduction in total entropy, and we select the division that reduces the entropy the most.
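The entropy computation can be sketched as follows, with each example reduced to just its action name (the real code works on Example objects):

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <string>
#include <vector>

// Shannon entropy over the actions in a set of examples.
// 0 means every example performs the same action; with two actions
// split evenly the entropy is 1.
static float GetEntropy(const std::vector<std::string>& actions)
{
    std::map<std::string, int> counts;
    for (const auto& a : actions)
        ++counts[a];
    float entropy = 0.0f;
    for (const auto& pair : counts)
    {
        const float p =
            static_cast<float>(pair.second) / actions.size();
        entropy -= p * std::log2(p);
    }
    return entropy;
}
```

Information gain for a candidate split is then the parent set's entropy minus the size-weighted entropy of the child sets.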

In my implementation, once the total entropy reaches zero, I create an action node, since that means all examples have reached the same action. Below is the code for the MakeDecisionTree function.

auto initialEntropy = GetEntropy(i_Examples);
if (initialEntropy <= 0)
{
	// All examples agree on one action, so this branch is finished.
	ActionNode* action = m_ActionFunction(i_Examples[0]->GetAction());
	i_RootNode->SetNodes(action, nullptr);
	return;
}
float bestInfoGain = 0;
std::string bestAttribute = "";
std::vector<std::vector<Example*>> bestSets;
for (auto attribute : i_Attributes)
{
	auto sets = SplitByAttribute(i_Examples, attribute);
	auto overallEntropy = GetEntropyOfSets(sets, i_Examples.size());
	auto infoGain = initialEntropy - overallEntropy;
	if (infoGain > bestInfoGain)
	{
		bestInfoGain = infoGain;
		bestAttribute = attribute;
		bestSets = sets;
	}
}
i_RootNode = m_DecisionFunction(bestAttribute);
// Remove the attribute we just split on before recursing.
auto it = i_Attributes.begin();
while (it != i_Attributes.end())
{
	if ((*it) == bestAttribute)
		it = i_Attributes.erase(it);
	else
		++it;
}
for (auto set : bestSets)
{
	DecisionNode* node = new DecisionNode();
	i_RootNode->SetNodes(node, nullptr);
	MakeDecisionTree(set, i_Attributes, node);
}

In the above code, each example contains an action to perform and an attribute associated with it. I created maps of action nodes and decision nodes that can be looked up by the action name stored in the example.

This results in a learned decision tree that behaves like my hand-written decision tree.


Downloads: Assignment3

Controls : Click to move the player. AI moves automatically.

Adding transparency support for materials.

Alpha Transparency:

To enable alpha blending, we add the flag to our effect file as shown below

Effect =
VertexShaderLocation = "Shaders/Vertex/standard.shader",
FragmentShaderLocation = "Shaders/Fragment/standard.shader",
VertexInputLayoutShaderLocation = "Shaders/Vertex/vertexInputLayout.shader",
AlphaTransparency = "true",
DepthBuffering = "true",

After adding alpha transparency, we need to make sure that meshes with non-transparent effects are rendered before meshes with transparency turned on, so the transparent surfaces have something to blend against. To enable that, I’ve changed my render commands so that meshes with non-transparent effects, which I call independent draw calls, are drawn first, and meshes with transparent effects, called dependent draw calls, are drawn after them.

Also, since a transparent effect has to show what’s behind it, we draw the dependent draw calls from back to front instead of front to back as we do with the other draw calls. Below is a screenshot of transparent meshes being rendered and a video of the order in which they are drawn.
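The back-to-front ordering boils down to a sort on camera distance; here is a sketch with illustrative field names rather than the engine’s actual render-command layout:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Minimal stand-in for a transparent ("dependent") draw call.
struct DrawCall
{
    int meshId;
    float distanceFromCamera;
};

// Sort so the farthest meshes draw first and the nearest draw last,
// letting each transparent surface blend over what is behind it.
static void SortBackToFront(std::vector<DrawCall>& calls)
{
    std::sort(calls.begin(), calls.end(),
              [](const DrawCall& a, const DrawCall& b)
              { return a.distanceFromCamera > b.distanceFromCamera; });
}
```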

Adding Support for Materials in Engine


A material in my engine specifies which shaders an object has to use, along with some material constants such as the color. Currently my materials support effects and colors, but as you can see below, there is also a field for specifying textures; support for textures will come at a later date.

[Update] I’ve removed the requirement to give the path to the binary file, so now the material file takes just the path to the human-readable file. The file extension changes from .effectbinary to .effect.

Material =
EffectLocation = "Effects/Effect1.Effect",
ConstantType = "float4",
ConstantName = "Color",
ConstantValue = {1,1,1,1},
TextureLocation = "Textures/Texture1.texture",

Similar to mesh builder and effect builder that we had before, I created a material builder, which takes in this file and outputs a binary file as shown below.

As you can see in the human-readable file above, I have a constant “Color” that can be specified in the file. If there is no value for the constant, I default it to white.

I then specify which material a game object has to use from the game. I have a material class which reads the binary file into a material. To submit data from a material to the shaders, we use a per-material constant buffer. After this, we can submit just a mesh and a material instead of a mesh and an effect.

The final result looks like this.

The reason we see black trees is that the constant color in that material is set to red, which is {1,0,0,1}. The output color per fragment is calculated as

output color = vertexColor * material color;

Here vertexColor is green, as it uses the same mesh as the other trees. So we get {0,1,0,1} * {1,0,0,1}, which gives us {0,0,0,1}: black.

After adding materials, we now sort the meshes first by effect and then by material. Since all my meshes use the same effect, they are effectively sorted by material, as shown below.