Game Engineering – Building Game Engine – Part 9

Creating Human readable and Binary Files for Effects

As discussed in the previous posts, having a human readable file for editing data and a binary file for loading that data is more useful than hardcoding the data. In this assignment, we have to create a human readable format for effects, convert it into a binary format at build time, and then load the required data from the binary file at runtime.

Human Readable effect file:

The file contains the locations of the vertex and fragment shaders and the type of render state, which can be “AlphaTransparency”, “DepthBuffering” or “DrawBothTriangleSides”. The file looks like this:

return
{
	Effect =
	{
		VertexShaderLocation = "Shaders/Vertex/standard.shader",
		FragmentShaderLocation = "Shaders/Fragment/standard.shader",
		RenderState = "DepthBuffering",
	},
}

 

Binary File:

The binary file saves the above information, but also stores the length of the vertex shader path. I store this value to find where the fragment shader path starts in the binary file; without it, I would have to iterate through every byte looking for the null termination character and then treat the next byte as the start of the fragment shader location. I also add a null terminator after each of the vertex and fragment shader paths, so that we can load the values directly as strings.

The input shaders are stored in our $(GameInstallDir)/data folder, but the working directory for the game is the $(GameInstallDir) folder itself. I chose to prepend “data/” to the file names at build time, so that we do not have to add it at runtime. The upside of this method is that loading the paths at runtime is faster, while the downside is a slightly larger file; I think the improvement in load speed offsets the increase in file size.
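As an illustration, here is a minimal sketch of how such a build step could write this layout. The function name and stream handling are assumptions for the example, not the engine's actual interface.

#include <cstdint>
#include <fstream>
#include <string>

// Hypothetical build-time writer for the binary effect format described above
void WriteBinaryEffect(std::ofstream& o_file,
	const uint8_t i_renderStateBits,
	const std::string& i_vertexShaderPath,	// e.g. "Shaders/Vertex/standard.shader"
	const std::string& i_fragmentShaderPath)
{
	// Prepend "data/" at build time so the runtime can use the paths directly
	const std::string vertexPath = "data/" + i_vertexShaderPath;
	const std::string fragmentPath = "data/" + i_fragmentShaderPath;
	// 1 Byte: render state
	o_file.write(reinterpret_cast<const char*>(&i_renderStateBits), 1);
	// 1 Byte: length of the vertex shader path (including the "data/" prefix)
	const uint8_t vertexPathLength = static_cast<uint8_t>(vertexPath.length());
	o_file.write(reinterpret_cast<const char*>(&vertexPathLength), 1);
	// Each path is written with its null terminator so it can be used as a C string
	o_file.write(vertexPath.c_str(), vertexPath.length() + 1);
	o_file.write(fragmentPath.c_str(), fragmentPath.length() + 1);
}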

 

The red rectangle is the render state. The blue one is the length of the vertex shader path, and the violet rectangles show the null terminators after each path.

Size:

RenderState: uint8_t = 1 Byte

Length of Path: uint8_t = 1 Byte

Vertex Shader Path: 35 characters = 35 Bytes

null character: 1 Byte

Fragment Shader Path: 37 characters = 37 Bytes

null character: 1 Byte

Total = 76 Bytes

Extracting data:

The above screenshot shows the code to extract data from the file. First we extract the render state, the length of the vertex shader path, and the vertex shader path itself. To find where the fragment shader path starts, we add the length of the vertex path plus 1 (to account for the null termination character) to the offset.
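Since the screenshot itself is not reproduced here, below is a minimal sketch of the same extraction logic, assuming the whole file has already been loaded into a memory buffer (the names are illustrative):

#include <cstddef>
#include <cstdint>

// Hypothetical runtime reader for the binary effect format
void ExtractEffectData(const char* const i_fileData,
	uint8_t& o_renderStateBits,
	const char*& o_vertexShaderPath,
	const char*& o_fragmentShaderPath)
{
	size_t offset = 0;
	// 1 Byte: render state
	o_renderStateBits = *reinterpret_cast<const uint8_t*>(i_fileData + offset);
	offset += 1;
	// 1 Byte: length of the vertex shader path
	const uint8_t vertexPathLength = *reinterpret_cast<const uint8_t*>(i_fileData + offset);
	offset += 1;
	// The paths were stored null terminated, so they can be used as strings in place
	o_vertexShaderPath = i_fileData + offset;
	// Skip the vertex path plus 1 for its null terminator to reach the fragment path
	offset += vertexPathLength + 1;
	o_fragmentShaderPath = i_fileData + offset;
}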

 

Since most of the changes are in the backend, there are no visible changes to the output.

 

Output:

Downloads:

MyGame_x64_1101 MyGame_x86_1101

Game Engineering – Building Game Engine – Part 8

Converting Maya exported files to Binary files for loading into the game.

This assignment focuses on creating a binary mesh format and converting the existing mesh files, which are exported from Maya in our human readable format, to this binary format at build time, so that we can load the binary files at runtime.

Advantages of Binary Files over Human Readable Files:

  1. Binary files are smaller than human readable files: The size a binary file takes on disk is many times smaller than that of the comparable human readable file, and the difference can be huge when the source file is very big. Below is a table outlining the difference in sizes between binary and human readable files.

| Name | Human Readable File Size | Binary File Size | Difference |
|---|---|---|---|
| Plane | 459 Bytes | 80 Bytes | 379 Bytes |
| Donut | 154 KB | 29.6 KB | 124.4 KB |
| Gas Can | 1.37 MB | 262 KB | 1108 KB |
| Lambo | 2.12 MB | 403 KB | 1717 KB |
| Helix (index count 62436) | 4.15 MB | 772 KB | 3378 KB |

 

  2. Faster load and read times from disk: Since binary files are small and already contain the information in its binary form, the time taken to load the file and to read it after loading is very small compared to human readable files, where we use Lua to convert the file's data into the binary format the game uses.

| Name | Load + Read with Lua (s) | Load + Read Binary (s) | Difference (s) |
|---|---|---|---|
| Plane | 0.002427 | 0.000095 | 0.002332 |
| Chair | 0.006533 | 0.000081 | 0.006452 |
| Helix | 0.117267 | 0.000528 | 0.116739 |

 

As seen from the data above, there is a considerable difference between load times using Lua and binary files even for simple meshes like a plane. The time taken to load a human readable mesh also grows with mesh complexity, while the times for binary files remain considerably lower.

Building different Mesh files for different Platforms:

As discussed in previous posts, Direct3D and OpenGL use opposite winding orders to render a given triangle. Since it is easier to have a common winding order across the engine, mine uses the OpenGL order as the default, and until now I have been switching the winding order when initializing the mesh. Even though that approach still works for binary files, to keep the speed gains I moved the winding order switch to the point where the binary file is built. This gives me two different binary files for the two platforms, and I no longer have to worry about switching the winding order while loading the mesh.
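Here is a minimal sketch of that build-time flip, assuming triangle lists of uint16_t indices stored in the OpenGL (counter-clockwise) order; the function name is illustrative:

#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Reverse the winding of every triangle for the Direct3D build of the mesh
void ConvertWindingOrderForDirect3D(std::vector<uint16_t>& io_indices)
{
	// Swapping the 2nd and 3rd index of each triangle flips its winding order
	for (size_t i = 0; (i + 2) < io_indices.size(); i += 3)
	{
		std::swap(io_indices[i + 1], io_indices[i + 2]);
	}
}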

Order of data in Binary files:

The above picture shows the plane mesh in the binary file format. The red highlighted bytes are the index count, the purple is the vertex count, yellow is the index array and blue is the vertex array. The light blue underlined part is a vertex's position data within the vertex array, and the brown underlined part is that vertex's color data.

Sizes of different parts:

Index Count: uint16_t = 2 Bytes

Vertex Count: uint16_t = 2 Bytes

Index Array: uint16_t each = 2 Bytes * number of indices

Vertex Array: 3 floats for position + 4 uint8_t for color = 16 Bytes * number of vertices

Extraction of data:

The data is extracted in the same order as the file format.

To extract the index and vertex arrays, we just have to create a pointer to the location where the corresponding array starts, since the data is already binary and is loaded in memory.
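A minimal sketch of that in-place extraction, assuming the file has been loaded into memory and the vertex layout described above (the struct and variable names are illustrative):

#include <cstddef>
#include <cstdint>

struct sVertex { float x, y, z; uint8_t r, g, b, a; };	// 16 Bytes, as listed above

// Hypothetical reader: the arrays are just pointers into the loaded buffer, no copying
void ExtractMeshData(char* const i_fileData,
	uint16_t& o_indexCount, uint16_t& o_vertexCount,
	uint16_t*& o_indices, sVertex*& o_vertices)
{
	size_t offset = 0;
	o_indexCount = *reinterpret_cast<uint16_t*>(i_fileData + offset);
	offset += sizeof(uint16_t);
	o_vertexCount = *reinterpret_cast<uint16_t*>(i_fileData + offset);
	offset += sizeof(uint16_t);
	o_indices = reinterpret_cast<uint16_t*>(i_fileData + offset);
	offset += o_indexCount * sizeof(uint16_t);
	o_vertices = reinterpret_cast<sVertex*>(i_fileData + offset);
}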

Optional Challenges:

  1. Using a different file extension for the binary files: To differentiate the Human readable files from the binary files, I changed the extension of my binary files to “.meshbinary”.
  2. Loading a recognizable object: I am using the gas can and the chair that I used in my previous assignment. Meshes are courtesy of Chris.

 

Final game output:

Downloads:

MyGame_x86_1024 MyGame_x64_1024

Game Engineering – Building Game Engine – Part 7

Importing Meshes from Maya

Maya is one of the most common applications used to create 3D models for games. Maya also provides documentation for creating plugins that extend the capabilities of the software. In this project, we are creating a Maya plugin to export mesh files in the format that can be read by our game engine, as described in the previous blog post here.

After downloading and setting up the Maya SDK, we have to enable the plugin in Maya to be able to export to our format. Once we export, we have to make sure that our engine builds the mesh files and then specify which particular mesh we want to use. The file to which we export only contains the vertex and index information, plus comments showing which vertex is at what index. Apart from that we are not importing anything else from Maya, since the file should stay human readable and editable; if we included parameters such as normals, which we are not using in our game, it would create confusion when someone opens the file.

Output:

Importing Color: First we need to change our Maya plugin to export color into our files. Then we need to change the Lua loading code so that color is imported into our vertex buffers, and after that we need to tell both OpenGL and Direct3D that we are adding color to our vertices. After this is done, the output should look like the following.

 

Debugging Maya: Maya also provides debugging symbols (.pdb files) along with the SDK, which allows us to debug the Maya plugin we built. To do this, run Maya first, then attach Visual Studio to the process via ‘Debug->Attach To Process’.

The above screenshot shows Visual Studio hitting a breakpoint when the plugin we built is first loaded in Maya.

The Maya mesh importer is a standalone project which does not depend on any other project. Hence there is no need to add references to other projects, or to add the Maya mesh importer as a reference to them.

Opening meshes with a greater number of vertices:

Currently we store the vertex count in a uint16_t, which can hold a maximum of 65535 vertices. If the count exceeds this, it would overflow. To mitigate this, we limit our vertex count to that value, and if a mesh exceeds it we do not render the mesh.
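A small sketch of the guard, assuming the count comes from the exporter (the function name is illustrative):

#include <cstddef>
#include <cstdint>
#include <limits>

// Returns false for meshes whose vertex count would overflow a uint16_t
bool CanBuildMesh(const size_t i_vertexCount)
{
	constexpr size_t maxVertexCount = std::numeric_limits<uint16_t>::max();	// 65535
	// Refuse to build/render the mesh rather than silently overflowing the count
	return i_vertexCount <= maxVertexCount;
}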

Controls

  1. WASD to move the Donut
  2. IJKL to move the Plane
  3. F3 to swap the Donut to a Chair
  4. Left and Right arrows to move the camera along the X axis.
  5. Up and Down arrows to move the camera along the Z axis.
  6. Z and X to rotate the camera around itself
  7. Ctrl and Alt to move the camera along the Y axis

Thanks to Chris Cheung for providing me with some Maya meshes like the chair that I could then import into my game. You can find more of his work at his ArtStation.

Downloads:

MyGame_x86_1017 MyGame_x64_1017

 

Game Engineering – Building Game Engine – Part 6

Creating a human readable mesh file

Until this point in the class, we were submitting meshes from code as vertex and index buffer arrays in the game. Even though this method is better than submitting the data from the engine, it has its drawbacks; chief among them is that we have to modify the code every time a mesh changes. Loading meshes from a separate file removes that need, as we can load the vertex and index buffers from the file and render the mesh. There are two different approaches to storing the data in files: as binary files, or in a human readable format.

Storing the files in a binary format can be very beneficial, as loading binary files is considerably faster than loading plain text files. The downside is that they are not human readable and there is no easy way to edit them. The second approach is to store them in a human readable format, whose main advantage is that the files are easily editable; the downside is that they can be slow to load.

Example Mesh File:

return
{
	VertexBuffer =
	{
		{
			Position = {0,0.5,0},
			Color = {0,0,0,0},
		},
		{
			Position = {1,0.5,0},
			Color = {0,0,0,0},
		},
		{
			Position = {0.5,1,0},
			Color = {0,0,0,0},
		},
		{
			Position = {0,-0.5,0},
			Color = {0,0,0,0},
		},
		{
			Position = {1,-0.5,0},
			Color = {0,0,0,0},
		},
	},
	IndexBuffer = {1,0,3,4,1,3},
}

The above code shows a mesh file. It has two parts: the vertex buffer and the index buffer. Each table in the vertex buffer represents one vertex, and each vertex contains a position and the color at that position. The position array contains the x, y, z coordinates in that order; the color holds r, g, b and alpha in that order. Since these files will be used by people with basic knowledge of how positions and colors are arranged, the individual components do not have labels, while the file remains readable. The color attribute is not currently used, but has been added to future proof the mesh format.

Handles:

The file is processed with Lua and the data is fed into the existing mesh class to create a mesh. Instead of creating a new mesh every time the game asks for one, we implemented a handle system. In this system, the mesh class contains a manager which tracks all the meshes that have been created, and the game gets a handle to a mesh instead of the mesh itself. Whenever the game wants to use a mesh, it asks for it using the handle, and the manager either creates the mesh or returns it if it has already been created.
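Below is a rough sketch of the idea, not the engine's actual manager: the game holds a small handle value, and the manager maps it to the real mesh, loading the file only the first time it is requested.

#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

struct cMeshHandle { uint32_t index = 0; };	// what the game stores instead of a mesh

class cMeshManager
{
public:
	// Return a handle to an existing mesh, or load the file and create one
	cMeshHandle GetMesh(const std::string& i_path)
	{
		const auto it = m_pathToIndex.find(i_path);
		if (it != m_pathToIndex.end())
			return cMeshHandle{ it->second };
		m_meshes.push_back(LoadMeshFromFile(i_path));
		const auto index = static_cast<uint32_t>(m_meshes.size() - 1);
		m_pathToIndex[i_path] = index;
		return cMeshHandle{ index };
	}
private:
	struct cMesh { /* vertex and index buffers, etc. */ };
	static cMesh LoadMeshFromFile(const std::string&) { return cMesh{}; }	// placeholder loader
	std::vector<cMesh> m_meshes;
	std::unordered_map<std::string, uint32_t> m_pathToIndex;
};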

Output:

Even though visually the previous output and the current one look almost the same (hello again, tree), there have been significant changes under the hood.

Controls:

  1. WASD to move the rectangle
  2. IJKL to move the triangle
  3. F2 to swap the effects
  4. F3 to swap the triangle to a house
  5. Left and Right arrows to move the camera along the X axis.
  6. Up and Down arrows to move the camera along the Z axis.
  7. Z and X to rotate the camera around itself
  8. Ctrl and Alt to move the camera along the Y axis

Downloads:

MyGame_x64_1003 MyGame_x86_1003

Game Engineering – Building Game Engine – Part 5

Adding a representation for camera
A camera is an object used to specify the view that is shown on screen. A game can have multiple cameras; the game developer must be able to specify which camera is used for a particular frame, and the engine must be able to render the view from that camera's perspective. To achieve this, we first have to transform world space into camera space and then map camera space to screen (perspective) space. These transforms are matrices stored in the per-frame constant buffer, and the vertex shader uses the information in these constant buffers to render on screen.
We also added movement to camera to move around the world space in the X and Z directions.
Adding a representation for a game object.
A game object can be thought of as the basic element representing an object in the game. A game object can be anything in the game, but it has certain properties. My implementation of the game object has a rigid body to keep track of its position and rotation and to apply forces. It also stores which mesh it is made of and which effect is used to draw that mesh.

The class also contains member functions to set the position, rotation and also to update the mesh and the effect that the game object uses. Instead of storing the mesh and effect separately, I am using a struct which contains a mesh and effect pair.

Submitting Meshes to render
Renderers are low level engine components and we do not want to let the low level engine components know about the high level ones such as the game objects. Since meshes are now part of the game object, when we need to render a mesh we can either send the whole game object to the renderer or we can just send the details that are necessary for the renderer which are the effect and the mesh.
But since we are moving from rendering in a 2D space to a 3D space, we need some way to transform the local coordinate space into world coordinates. We can do this by creating a local-to-world transformation matrix from the orientation and position of the game object. So, my final interface to submit mesh-effect pairs looks like this:
void eae6320::Graphics::SetEffectsAndMeshesToRender(sEffectsAndMeshesToRender i_EffectsAndMeshes[eae6320::Graphics::m_maxNumberofMeshesAndEffects], eae6320::Math::cMatrix_transformation i_LocaltoWorldTransforms[m_maxNumberofMeshesAndEffects], unsigned int i_NumberOfEffectsAndMeshesToRender)

To move objects around the world, we need a constant buffer that can be updated every draw call instead of every frame, along with the data backing it, which stores the local-to-world transformation of every game object in the scene. After integrating this per-draw-call constant data, the size of the struct which stores the data required to render a frame increased to 1768 Bytes in OpenGL and 1936 Bytes in Direct3D.
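As an illustration, the data backing such a per-draw-call buffer might look like the following sketch (assuming a 4x4 float matrix; the names are not the engine's actual types):

// Hypothetical per-draw-call constant buffer data, mirroring the per-frame buffer
struct sPerDrawCall
{
	// Local-to-world transform of the object being drawn in this draw call;
	// one of these is stored for every game object submitted for the frame
	float g_transform_localToWorld[16];
};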

Movement of camera and Game objects and the importance of Prediction:

After adding movement to both the camera and various game objects in the scene, I noticed that the movement appeared jerky. This is because the simulation and the rendering run at different rates. To smooth the movement, we have to predict where the camera or game object will be based on its velocity and the elapsed time, and render the predicted position instead of the last simulated position. It is also for this reason that we update the velocity of the body instead of directly manipulating its position.
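A minimal sketch of that extrapolation, assuming the renderer knows how much time has passed since the last simulation update (the names are illustrative):

struct sVector { float x, y, z; };

// Render at the predicted position instead of the last simulated one
sVector PredictPosition(const sVector i_position, const sVector i_velocity,
	const float i_secondsSinceLastSimulationUpdate)
{
	const float dt = i_secondsSinceLastSimulationUpdate;
	return sVector{
		i_position.x + (i_velocity.x * dt),
		i_position.y + (i_velocity.y * dt),
		i_position.z + (i_velocity.z * dt) };
}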

Key Bindings:

1. WASD to move the rectangle
2. IJKL to move the triangle
3. F2 to swap the effects
4. F3 to swap the triangle to a house
5. Left and Right arrows to move the camera along the X axis.
6. Up and Down arrows to move the camera along the Z axis.


Final Output:

MyGame_x64       MyGame_x86

 

Game Engineering – Building Game Engine – Part 4

Moving Mesh and Effect Code from Engine to Game:

The main goal of this exercise is to move mesh and effect initialization into the game instead of the engine. By doing this we are abstracting the engine from the data: the engine should be able to process any type of data without having to know what that data means. The gameplay programmer can simply specify which mesh and effect they want the engine to render, and the engine must be able to render it.

Sending the Background Color to render

eae6320::Graphics::SetBackBufferValue(eae6320::Graphics::sColor{ abs(sin(i_elapsedSecondCount_systemTime)),abs(cos(i_elapsedSecondCount_systemTime)),abs(cos(i_elapsedSecondCount_systemTime)), 1});

Output of changing the background colors.

    

The first problem we encounter is how to pass data between the application (game) thread and the render thread. We use a struct (sDataRequiredToRenderAFrame) to store the data required to render a frame: we first create the data in the game and send it to the engine to store in this struct for rendering. Since the game and the renderer run on two different threads, if we used only one struct the render thread would wait for the game thread to fill in the data, and the game thread would wait for the renderer to finish rendering the current frame.

To maximize efficiency, we make the game thread populate the data required to render the next frame while the render thread is rendering the current frame. To achieve this we use two structs, one holding the data being rendered in the current frame and one holding the data to render in the next frame; after the current frame ends, we swap the two.
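A simplified sketch of the two-buffer handoff follows; the real engine guards the swap with thread synchronization, which is omitted here:

#include <utility>

struct sDataRequiredToRenderAFrame { /* constant buffer data, meshes, effects, ... */ };

sDataRequiredToRenderAFrame s_frameData[2];
auto* s_dataBeingSubmittedByApplicationThread = &s_frameData[0];
auto* s_dataBeingRenderedByRenderThread = &s_frameData[1];

// Called once per frame, after rendering finishes and submission completes
void SwapFrameData()
{
	std::swap(s_dataBeingSubmittedByApplicationThread, s_dataBeingRenderedByRenderThread);
}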

Since we now create our effects and meshes in the game instead of the engine, we should restrict access to the mesh and effect classes and disallow calling their constructors and destructors directly. Instead, we implemented a factory function which creates a new mesh or effect and returns a pointer to it. Since we are dealing with pointers, there is a possibility that the game or the renderer might free a pointer while the other is still using it, which would cause undefined behavior. To mitigate this, we use reference counting to track whether the pointer is in use: when the game or the renderer no longer needs the pointer, it decrements the reference count, and once the count reaches zero the pointer is freed. The framework for reference counting was already present in the engine; we just had to implement it.
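A simplified, non-thread-safe sketch of the factory-plus-reference-counting pattern (the engine's version builds on its existing framework):

#include <cstdint>

class cMesh
{
public:
	// Factory function: the only way to create a mesh from outside the class
	static bool CreateMesh(cMesh*& o_mesh)
	{
		o_mesh = new cMesh();	// starts with a reference count of 1
		return o_mesh != nullptr;
	}
	void IncrementReferenceCount() { ++m_referenceCount; }
	void DecrementReferenceCount()
	{
		// Whoever releases the last reference frees the mesh
		if (--m_referenceCount == 0)
			delete this;
	}
private:
	cMesh() = default;	// constructor and destructor are restricted
	~cMesh() = default;
	uint16_t m_referenceCount = 1;
};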

To pass the mesh and effect data between game and engine, I am using an array of structs, each holding an effect and a mesh in that order; the renderer then binds the effect and draws the mesh in the same order.


m_EffectsAndMeshes[0].m_RenderEffect = s_Effect;
m_EffectsAndMeshes[0].m_RenderMesh = s_Mesh;
m_EffectsAndMeshes[1].m_RenderEffect = s_Effect2;
m_EffectsAndMeshes[1].m_RenderMesh = s_Mesh2;
eae6320::Graphics::SetEffectsAndMeshesToRender(m_EffectsAndMeshes, m_NumberOfMeshesToRender);

 

Hiding the meshes and swapping the effects:

Since we moved the code that initializes and submits the effects and meshes into the game, we can also specify which ones to render and which effect goes on which mesh. In my game, we can hide a mesh by pressing the F1 key and swap the effects between the two meshes by holding the F2 key.

 

Removing mesh

 

Swap effect:

The reason we submit all the data required to render a frame while the renderer is rendering the previous frame is that the renderer then already knows what to render next, which eliminates the renderer waiting for data to be submitted by the application.

Size of Mesh, Effect and sDataRequiredToRenderAFrame:

After making the mesh and effect reference counted, the sizes of the mesh and effect turned out to be 20 Bytes each in OpenGL, and 40 and 48 Bytes respectively in Direct3D. The size of the struct was 168 Bytes in OpenGL and 176 Bytes in Direct3D. After rearranging the member variables in the structs, the sizes of the mesh and effect came down as shown below.

Breakdown of sDataRequiredToRenderAFrame:

| Member | Size (Bytes) |
|---|---|
| Constant data required per frame | 144 |
| Color struct which holds four floats | 16 |
| Struct which holds an effect-mesh pair to render | 8 / 16 * (10 pairs) |
| unsigned int containing the number of mesh-effect pairs being rendered | 4 |

 

Before Optimization:

| Size (Bytes) | Mesh | Effect | sDataRequiredToRenderAFrame |
|---|---|---|---|
| OpenGL | 20 | 20 | 244 |
| DirectX | 40 | 48 | 328 |

After Optimization:

| Size (Bytes) | Mesh | Effect | sDataRequiredToRenderAFrame |
|---|---|---|---|
| OpenGL | 20 | 16 | 244 |
| DirectX | 32 | 48 | 328 |

Size differences from last week:

For the previous assignment, the way I divided the platform specific code into a new class was not ideal, and it created a few problems while working on this week's assignment. So I reworked the code, and in the process removed a few member variables that I was using in both the mesh and the effect classes, which led to the drastic decrease in the amount of memory taken by each class.

Total Memory for the graphics project:

The memory allocated to the graphics project is budgeted, since memory is limited, especially on consoles and mobile. Hence the total number of meshes and effects that can be drawn at the same time is capped at 10. The total memory taken is 488 Bytes in OpenGL and 656 Bytes when using Direct3D. When the game tries to render more meshes than that, the renderer only renders the first 10 effect-mesh pairs and, in Debug mode, raises an error.

 

Controls:

  1. Shift: To slow the simulation
  2. F1: To make a mesh invisible
  3. F2: Swap the effects between meshes

MyGame_Assignment04_x86 MyGame_Assignment04_x64

Game Engineering – Building Game Engine – Part 3

The main part of this assignment is to remove all the platform specific code present in the various Graphics.xx.cpp files and create one platform independent Graphics.cpp file. For this I created another class, “GraphicsHelper”, to hold all the platform specific code. This class contains functions that mirror those of the main Graphics interface: Graphics.cpp contains all the platform independent code and calls into GraphicsHelper for the platform dependent parts. Each platform specific implementation inside GraphicsHelper is differentiated using preprocessor blocks.

The GraphicsHelper class also has an interface to change the color of the back buffer. At the start of every frame the back buffer is cleared, usually by setting the color to black. I created a color struct that takes red, green, blue and alpha values between 0 and 1, and pass it to GraphicsHelper through the “SetBackBuffer” interface, which then sets the back buffer's clear color.

 

sColor m_BackBuffer {
	abs(sin(s_dataBeingSubmittedByApplicationThread->constantData_perFrame.g_elapsedSecondCount_simulationTime)),
	abs(cos(s_dataBeingSubmittedByApplicationThread->constantData_perFrame.g_elapsedSecondCount_simulationTime)),
	abs(cos(s_dataBeingSubmittedByApplicationThread->constantData_perFrame.g_elapsedSecondCount_simulationTime)),
	1
};

Interface for changing the color of back buffer

 s_helper->SetBackBuffer(m_BackBuffer); 

We also added an index buffer to tell the graphics API the order in which the vertices of each triangle are drawn. Using an index buffer reduces the number of points stored for a shape, since the vertices shared between the triangles of a mesh only need to be stored once. But this introduces additional complexity when supporting different renderers such as OpenGL and DirectX: since they expect vertices in opposite orders, an index buffer for one is incompatible with the other. The way I solved this is to take the OpenGL order as the default and swap every 2nd and 3rd index of each triangle.


I also moved the code to initialize the meshes and effects out of their respective classes and into Graphics.cpp. During initialization from Graphics.cpp, the effect needs the locations of both the vertex and fragment shaders as strings:

 
	s_Effect->Initialize(m_vertShader1Location, m_fragShader1Location);
	s_Effect2->Initialize(m_vertShader2Location, m_fragShader2Location);

The mesh requires a pointer to the array containing the vertex buffer, a pointer to the array containing the index buffer, and the number of vertices to render using the index buffer; the count must be passed explicitly because we are passing a pointer to the array and not the array itself, so the size cannot be determined from it.

 	
s_Mesh->Initialize(vertexData, indexData, 3);
s_Mesh2->Initialize(vertexData2, indexData2, 4);

After refactoring the code to include the changes to Graphics.cpp, my mesh class uses 28 Bytes and 48 Bytes in OpenGL and Direct3D respectively. Effects, on the other hand, take up 72 Bytes and 120 Bytes respectively. I could not find any way to reduce the sizes.

Optional Challenge:

As an optional challenge we had to animate the background color. So instead of passing a solid color value such as {1,0,0,1} for red, I passed the absolute value of the sine of the simulated time, which keeps each channel between 0 and 1.

Final Output:

MyGame_x64 MyGame_x86

Game Engineering – Building game engine – Part 2

Creating common interface for rendering meshes and binding effects:

The current engine has two different Graphics.cpp files, one each for Direct3D and OpenGL, which do exactly the same thing. The main functionality of these files is to initialize the vertex and shading data, bind the shading data and draw the vertices, render the frame, and perform cleanup after rendering is complete. Our assignment is to create a platform independent common interface to initialize, bind and clean up effects, and to initialize, draw and clean up meshes.

Since all of the code is already present, the first thing to do is to identify the parts of the original files performing the different functions. Once identified, it is simple to move them into new files and reference the new files from the old ones. I did this by moving one part of the file at a time and commenting out the relevant parts in the old file, so that if errors appeared it would be easy to figure out where things were going wrong. Once everything was confirmed working, I removed the redundant code. I created two header files, one for the mesh and one for the effect, containing the platform independent function declarations, and a couple of .cpp files containing the actual implementations for the specific APIs.

Separating the functions for the effect is a bit more complex, since the code contains platform independent and platform dependent parts mixed together. Hence, I created another .cpp file to hold the platform independent initialization and cleanup.

Below is the code that binds the effect and draws the mesh which is common in both the Graphics files.

// Bind the shading data
{
	s_Effect.Bind();
}

// Draw the geometry
{
	s_Mesh.Draw();
}

As engine programmers we often have to dig deep and write code that directly interfaces with the hardware. But since there are many differences between hardware platforms and the vendor APIs used to access them, it is easier for us and other programmers when such interfaces are platform independent.

When adding an additional triangle to draw, we have to keep in mind that DirectX and OpenGL differ in the order in which they expect a triangle's vertices, with DirectX using the left-handed convention and OpenGL the opposite.

Finally, after moving the mesh and effect representations into their own platform independent interfaces, the graphics class for each platform now only contains the code that renders the frame, which can itself be moved into its own class to make the graphics class truly platform independent.

 

Visual Studio Graphics Debugger and RenderDoc:

The Visual Studio Graphics Debugger and RenderDoc are important tools when writing graphics software. Using them we can see the API function calls to DirectX (VS Graphics Debugger/RenderDoc) or OpenGL (RenderDoc) for a particular frame in the game. This is useful when there are graphics artefacts in the game and we want to find where in the render pipeline they are introduced. We can look at each API call and see what is being sent to the graphics card for that frame, which makes debugging graphics related issues much easier.

Following are the screenshots from the VS Graphics Debugger showing the render process for a frame in the game

The game:

The same in RenderDoc (For OpenGL)

The game:

Optional Challenge: As a challenge, I created a “house” and a “tree” using 7 triangles. The hardest part was figuring out the points in screen coordinate space and adding them in the correct order for each renderer.

DirectX:

OpenGL:

Fixing the “Debug x86” bug: My solution had a strange bug where every configuration built perfectly except Debug x86. I first thought it was a references issue caused by the Graphics project not being updated, as discussed at this link, but even after fixing the references for the Application project, the issue persisted. After more investigation, I found that I had added a library reference to “Graphics.lib” in my game's project settings for just the Debug x86 configuration. The issue was resolved as soon as I removed that reference, and the game now builds in all configurations.

You can download the game from the links below

User Inputs

  1. Shift: Plays the game in slow motion when held
  2. Esc: Exits the game.

MyGame_Assignment2_x64 MyGame_Assignment2_x86

Game Engineering – Building Game Engine – Part 1

The point of the first assignment was to integrate the ‘Graphics’ project, given as a separate project, into the main engine and add the necessary dependencies between the ‘Graphics’ project and the other projects in the engine solution, so that there are no errors when building the engine from scratch. The given graphics project consisted of files specific to both DirectX and OpenGL, along with some common files containing wrapper classes for both implementations. This is required for the engine to be cross platform compliant while using the latest technologies specific to particular operating systems. Even though most of the functionality is the same in both frameworks, the way it is achieved differs, with different API names and variable declarations.

After integrating the project, we created a sample game modelled on an example game given as part of the engine. The example game displays a white triangle, and we were required to change the name of the window and the logo as part of the assignment. Changing the window name was as straightforward as changing a string. The logo, on the other hand, had to be an ‘.ico’ (icon) file, which I first had to add to the resource file as an icon linked to the image, and then define as a value in the resource header file, after which I could use it in the game.

The second requirement was to change the color of the triangle displayed by the game to any color other than white. I wrote a fragment shader that changes the color of the triangle being drawn on the screen. I used a time variable exposed as part of the shader to achieve the desired results, which can be seen below.

The shader constantly changes color based on the absolute values of the sine and cosine of time. Since the shader expects the red, green and blue values to be between 0 and 1, this gives the desired output. The output should be the same when run on DirectX and OpenGL, and since the functions I used for this effect are the same in both, I did not have to change anything.

Some projects in the engine also depend on the graphics library, and vice versa. All the projects in the engine depend on the application project, which serves as the base class for the other projects to inherit from. So adding the graphics project as a reference to the application project ensures it is included in every other project that depends on the graphics library. There was an issue where the application project referenced a previous version of the graphics library and did not add graphics to the project dependencies even when added as a reference. This led to most of us hitting an error where the game would not build because it could not find the required libraries, and it took some time to figure out. I also had an issue where my game built in every configuration except Debug x86, which was solved by the same fix.

There were also a few optional challenges for us to complete.

  1. The first one was to find a way to remove the re-declaration of the constant buffers in each shader and declare them in one place. To accomplish this, I added the declarations for these constant buffers to the “shaders.inc” include file, which is included in all the shaders before each shader is built.
  2. The second was to either pause or slow down the simulation rate of the game when the user presses a key. The engine already has functionality to read keyboard input and to change the simulation rate; we just had to tie the two together. In my game, the simulation runs at half speed while the “Shift” key is pressed. This is done by setting the simulation rate to 0.5 when the key is pressed and back to 1.0 when it is released, as sketched after this list.
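A hypothetical sketch of tying the two together (the function names are illustrative, not the engine's actual interface):

bool IsShiftHeld();				// assumed to come from the engine's input system
void SetSimulationRate(float);	// assumed engine call that scales simulation time

void UpdateSimulationBasedOnInput()
{
	// Half speed while Shift is held, normal speed otherwise
	SetSimulationRate(IsShiftHeld() ? 0.5f : 1.0f);
}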

Finally, I personally want to learn more about graphics and other low-level engine programming as part of this class. Even though the first assignment does not directly involve low level graphics APIs, I like that we will be rendering objects on the screen using abstracted function calls, and I hope the class will place more emphasis on graphics programming.

User Inputs

  1. Shift: Plays the game in slow motion when held
  2. Esc: Exits the game.

 

OpenGL(x86) Direct3D(x64)