Blog

Game Engineering – Building Game Engine – Part 4

Moving Mesh and Effect Code from Engine to Game:

The main goal of this exercise is to move the mesh and effect initialization from the engine to the game. By doing this we abstract the engine from the data: the engine should be able to process any type of data without having to know what that data means. The gameplay programmer can simply specify which mesh and effect they want the engine to render, and the engine must be able to render it.

Sending the Background Color to render

eae6320::Graphics::SetBackBufferValue(eae6320::Graphics::sColor{
    abs(sin(i_elapsedSecondCount_systemTime)),
    abs(cos(i_elapsedSecondCount_systemTime)),
    abs(cos(i_elapsedSecondCount_systemTime)),
    1 });

Output of changing the background colors.

    

The first problem we encounter is how to pass data from the game (application) thread to the render thread. We use a struct (sDataRequiredToRenderAFrame) to store the data required to render a frame. The game first creates the data and sends it to the engine, which stores it in this struct for rendering. Since the game and the renderer run on two different threads, if we used only one struct the render thread would wait for the game thread to fill in the data, and the game thread would wait for the renderer to finish rendering the current frame.

To maximize efficiency, the game thread populates the data required to render the next frame while the render thread is rendering the current frame. To achieve this we use two structs: one stores the data being rendered in the current frame, and the other stores the data to render in the next frame. After the current frame ends, we swap the two.
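The two-struct swap can be sketched as follows. This is a minimal illustration, not the engine's actual code: the struct members and variable names here are hypothetical stand-ins, and in the real engine the swap happens under thread synchronization.

```cpp
#include <cassert>
#include <utility>

// Hypothetical stand-in for the engine's sDataRequiredToRenderAFrame
struct sDataRequiredToRenderAFrame
{
    float backgroundColor[4];
    unsigned int frameNumber;
};

// One struct is filled by the game (application) thread while the other
// is consumed by the render thread
sDataRequiredToRenderAFrame s_dataRequiredToRenderAFrame[2];
sDataRequiredToRenderAFrame* s_dataBeingSubmittedByApplicationThread = &s_dataRequiredToRenderAFrame[0];
sDataRequiredToRenderAFrame* s_dataBeingRenderedByRenderThread = &s_dataRequiredToRenderAFrame[1];

// Called once both threads are done with their respective frames;
// only the pointers are exchanged, never the data itself
void SwapFrameData()
{
    std::swap(s_dataBeingSubmittedByApplicationThread, s_dataBeingRenderedByRenderThread);
}
```

Swapping pointers instead of copying the structs keeps the per-frame cost constant no matter how large the frame data grows.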

Since we now create our effects and meshes in the game instead of the engine, we should restrict access to the mesh and effect classes and disallow calling their constructors and destructors directly. Instead, we implemented a factory function that creates a new mesh or effect and returns a pointer to it. Since we are dealing with pointers, there is a possibility that the game or the renderer frees a pointer while the other is still using it, which would cause undefined behavior. To mitigate this, we use reference counting to track whether the pointer is still in use: when the game or the renderer no longer needs the pointer, it decrements the reference count, and once the count reaches zero the pointer is freed. The framework for reference counting was already present in the engine; we only had to implement it.
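A minimal sketch of the factory-plus-reference-counting pattern, assuming illustrative names (the engine's actual framework differs in details):

```cpp
#include <cassert>
#include <cstdint>

class cMesh
{
public:
    // Factory function: the only way to create a mesh,
    // since the constructor is private
    static cMesh* CreateMesh() { return new cMesh(); }

    void IncrementReferenceCount() { ++m_referenceCount; }

    // Returns the new count; frees the object when it reaches zero
    uint16_t DecrementReferenceCount()
    {
        assert(m_referenceCount > 0);
        const uint16_t newCount = --m_referenceCount;
        if (newCount == 0)
        {
            delete this;
        }
        return newCount;
    }

private:
    // Private constructor/destructor force callers through the factory
    cMesh() = default;
    ~cMesh() = default;

    uint16_t m_referenceCount = 1; // the creator holds the first reference
};
```

Each owner (the game, each queued frame on the render side) increments the count when it stores the pointer and decrements it when done, so the object outlives every outstanding user.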

To pass the mesh and effect data between the game and the engine, I am using an array of structs, each holding an effect and a mesh in that order. The renderer first binds the effect and then draws the mesh, in the same order.


m_EffectsAndMeshes[0].m_RenderEffect = s_Effect;
m_EffectsAndMeshes[0].m_RenderMesh   = s_Mesh;
m_EffectsAndMeshes[1].m_RenderEffect = s_Effect2;
m_EffectsAndMeshes[1].m_RenderMesh   = s_Mesh2;
eae6320::Graphics::SetEffectsAndMeshesToRender(m_EffectsAndMeshes, m_NumberOfMeshesToRender);
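On the render side, the consuming loop is conceptually simple: walk the submitted pairs and, for each one, bind the effect before drawing its mesh. This sketch uses stub classes that only log their calls, so the ordering is visible; the real Bind() and Draw() issue the platform-specific graphics API calls.

```cpp
#include <cassert>
#include <string>
#include <vector>

std::vector<std::string> g_callLog; // records call order for illustration only

struct cEffect { void Bind() { g_callLog.push_back("bind"); } };
struct cMesh   { void Draw() { g_callLog.push_back("draw"); } };

struct sEffectAndMesh
{
    cEffect* m_RenderEffect;
    cMesh*   m_RenderMesh;
};

// Hypothetical sketch of the render thread consuming the submitted pairs
void RenderSubmittedPairs(const sEffectAndMesh* i_pairs, unsigned int i_count)
{
    for (unsigned int i = 0; i < i_count; ++i)
    {
        i_pairs[i].m_RenderEffect->Bind(); // set shaders/render state first...
        i_pairs[i].m_RenderMesh->Draw();   // ...then issue the draw call
    }
}
```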

 

Hiding the meshes and swapping the effects:

Since we moved the code that initializes and submits the effects and meshes into the game, we can also specify which ones to render and which effect goes on which mesh. In my game, you can hide a mesh by pressing the F1 key and swap the effects between the two meshes by holding the F2 key.
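Since the game owns the submission, both features fall out of what it chooses to submit each frame. The sketch below is a hypothetical illustration (the function and parameter names are mine, not the engine's): hiding a mesh just means not submitting it, and swapping effects means exchanging which effect pointer is paired with which mesh before submission.

```cpp
#include <cassert>
#include <utility>

struct cEffect {};
struct cMesh {};
struct sEffectAndMesh { cEffect* effect; cMesh* mesh; };

// Builds this frame's submission from input state; returns how many
// pairs to submit (hypothetical names, for illustration)
unsigned int BuildSubmission(sEffectAndMesh (&o_pairs)[2],
                             cEffect* i_effectA, cMesh* i_meshA,
                             cEffect* i_effectB, cMesh* i_meshB,
                             bool i_isF1Pressed, bool i_isF2Held)
{
    if (i_isF2Held)
    {
        std::swap(i_effectA, i_effectB); // swap which effect shades which mesh
    }
    o_pairs[0] = { i_effectA, i_meshA };
    o_pairs[1] = { i_effectB, i_meshB };
    // Hiding a mesh is simply not submitting it this frame
    return i_isF1Pressed ? 1u : 2u;
}
```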

 

Removing a mesh

 

Swap effect:

The reason we submit all the data required to render a frame while the renderer is still rendering the previous frame is that the renderer then already knows what to render next; this eliminates the renderer waiting for data to be submitted by the application.

Size of Mesh, Effect and sDataRequiredToRenderAFrame:

After making the mesh and effect reference counted, the size of both the mesh and the effect turned out to be 20 bytes in OpenGL, and 40 and 48 bytes respectively in Direct3D. The size of the struct was 168 bytes in OpenGL and 176 bytes in Direct3D. After rearranging the member variables in the struct, the sizes of the mesh and the effect came down as shown below.

Breakdown of sDataRequiredToRenderAFrame

Member                                                                Size (bytes)
Constant data required per frame                                      144
Color struct holding four floats                                      16
Struct holding the effect and mesh to render (10 pairs)               8 / 16 per pair (OpenGL / Direct3D)
unsigned int holding the number of mesh-effect pairs being rendered   4

 

Before Optimization:

            Mesh    Effect    sDataRequiredToRenderAFrame
OpenGL      20      20        244
DirectX     40      48        328

After Optimization:

            Mesh    Effect    sDataRequiredToRenderAFrame
OpenGL      20      16        244
DirectX     32      48        328
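The savings from rearranging members come from alignment padding. The sketch below illustrates the general effect on a typical 64-bit platform; these are not the engine's real members, just a demonstration that ordering members from largest to smallest alignment tends to shrink a struct.

```cpp
#include <cassert>
#include <cstdint>

// Small member first forces padding before the pointer,
// and the trailing member leaves tail padding as well
struct sPadded
{
    uint8_t  a; // 1 byte + 7 bytes padding (to align the pointer)
    void*    p; // 8 bytes
    uint16_t b; // 2 bytes + 6 bytes tail padding
};               // typically 24 bytes on a 64-bit platform

// Same members, ordered by decreasing alignment requirement
struct sPacked
{
    void*    p; // 8 bytes
    uint16_t b; // 2 bytes
    uint8_t  a; // 1 byte + 5 bytes tail padding
};               // typically 16 bytes on a 64-bit platform
```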

Size differences from last week:

For the previous assignment, the way I divided the platform-specific code into a new class was not ideal and created a few problems while working on this week's assignment. So I went back and restructured the code, and in the process removed a few member variables that I was using in both the mesh and the effect classes, which led to the drastic decrease in the memory taken by each class.

Total Memory for the graphics project:

The memory allocated to the graphics project is budgeted, since memory is limited, especially on consoles and mobile. Hence the total number of meshes and effects that can be drawn at the same time is capped at 10. The total memory taken is 488 bytes in OpenGL and 656 bytes in Direct3D. When the game tries to render more than that number of meshes, the renderer only renders the first 10 pairs of effects and meshes, and in Debug mode it raises an error.
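The cap can be enforced at submission time. This is a hypothetical sketch (names and placement are mine); in the engine a Debug-mode error would also be reported when the budget is exceeded.

```cpp
#include <cassert>

// Memory budget: at most this many effect/mesh pairs per frame
constexpr unsigned int s_maxEffectsAndMeshesToRender = 10;

// Clamp the requested count to the budget; the renderer then only
// processes the first s_maxEffectsAndMeshesToRender pairs
unsigned int ClampSubmittedPairCount(const unsigned int i_requestedCount)
{
    if (i_requestedCount > s_maxEffectsAndMeshesToRender)
    {
        // (In the engine, a Debug-mode error/assert would fire here)
        return s_maxEffectsAndMeshesToRender;
    }
    return i_requestedCount;
}
```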

 

Controls:

  1. Shift: To slow the simulation
  2. F1: To make a mesh invisible
  3. F2: Swap the effects between meshes

MyGame_Assignment04_x86 MyGame_Assignment04_x64

Game Engineering – Building Game Engine – Part 3

The main part of this assignment is to remove all the platform-specific code present in the various Graphics.xx.cpp files and create one platform-independent Graphics.cpp file. For this I created another class, “GraphicsHelper”, to hold all the platform-specific code. This class contains functions that mirror those of the main Graphics interface. Graphics.cpp contains all the platform-independent code and calls into GraphicsHelper for the platform-dependent parts. Each platform-specific implementation inside GraphicsHelper is separated using preprocessor blocks.

The GraphicsHelper class also exposes an interface to change the color of the back buffer. At the start of every frame the back buffer is first cleared, usually by setting the color to black. I created a color struct that takes red, green, blue, and alpha values between 0 and 1. I pass this to GraphicsHelper through the “SetBackBuffer” interface, which then sets the value for the back buffer.

 

sColor m_BackBuffer {
    abs(sin(s_dataBeingSubmittedByApplicationThread->constantData_perFrame.g_elapsedSecondCount_simulationTime)),
    abs(cos(s_dataBeingSubmittedByApplicationThread->constantData_perFrame.g_elapsedSecondCount_simulationTime)),
    abs(cos(s_dataBeingSubmittedByApplicationThread->constantData_perFrame.g_elapsedSecondCount_simulationTime)),
    1
};

Interface for changing the color of back buffer

 s_helper->SetBackBuffer(m_BackBuffer); 

We also added an index buffer to tell the graphics API the order in which the vertices of a triangle are to be drawn. By using an index buffer, we can reduce the number of points stored for any shape, because we can remove the vertices shared between the triangles that make up the mesh. But this introduces additional complexity when supporting different renderers such as OpenGL and Direct3D: since they expect vertices in opposite winding orders, an index buffer built for one is incompatible with the other. I solved this by taking OpenGL's order as the default and swapping every 2nd and 3rd index in the array.
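The winding-order conversion can be sketched as below, assuming 16-bit indices and an illustrative function name: each triangle is three consecutive indices, and we swap the second and third of every triple.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Convert an index buffer authored in OpenGL's winding order into the
// opposite winding by swapping the 2nd and 3rd index of each triangle
void ConvertIndexWindingOrder(uint16_t* io_indices, const size_t i_indexCount)
{
    // Every triangle occupies three consecutive indices
    for (size_t i = 0; i + 2 < i_indexCount; i += 3)
    {
        const uint16_t temp = io_indices[i + 1];
        io_indices[i + 1] = io_indices[i + 2];
        io_indices[i + 2] = temp;
    }
}
```

Doing this once at initialization (or in the asset pipeline) means the rest of the renderer never needs to know which convention the data was authored in.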


I also moved the code that initializes the meshes and effects out of their respective classes and into Graphics.cpp. During initialization from Graphics.cpp, the effect needs the locations of both the vertex and the fragment shaders as strings.

 
	s_Effect->Initialize(m_vertShader1Location, m_fragShader1Location);
	s_Effect2->Initialize(m_vertShader2Location, m_fragShader2Location);

The mesh requires a pointer to the array containing the vertex buffer, a pointer to the array containing the index buffer, and the number of vertices to be rendered using the index buffer, since the array length cannot be determined from a pointer alone.

 	
s_Mesh->Initialize(vertexData, indexData, 3);
s_Mesh2->Initialize(vertexData2, indexData2, 4);

After refactoring the code to include the changes to Graphics.cpp, my mesh class uses 28 bytes and 48 bytes in OpenGL and Direct3D respectively. Effects, on the other hand, take up 72 bytes and 120 bytes respectively. I could not find any way to reduce the size.

Optional Challenge:

As an optional challenge we had to animate the background color. So instead of passing a solid color value such as {1, 0, 0, 1} for red, I passed the absolute value of the sine of the simulated time, which keeps each channel between 0 and 1.
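The key detail is that sin() produces values in [-1, 1] while a color channel must stay in [0, 1]; taking the absolute value folds the negative half back into range. A small sketch (the function name is illustrative):

```cpp
#include <cassert>
#include <cmath>

// abs() maps sin's [-1, 1] output into the [0, 1] range
// that a color channel expects
float AnimatedChannel(const float i_elapsedSeconds)
{
    return std::fabs(std::sin(i_elapsedSeconds));
}
```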

Final Output:

MyGame_x64 MyGame_x86

Game Engineering – Building game engine – Part 2

Creating common interface for rendering meshes and binding effects:

The current engine has two different Graphics.cpp files, one each for Direct3D and OpenGL, which do exactly the same thing. The main functionality of these files is to initialize the vertex and shading data, bind the shading data and draw the vertices, render the frame, and perform cleanup after rendering is complete. Our assignment is to create a platform-independent common interface to initialize, bind, and clean up effects, and to initialize, draw, and clean up meshes.

Since all of the code is already present, the first thing we need to do is identify the parts of the original files that perform the different functions. Once identified, it is simple to move them to new files and create references to the new files in the old ones. The way I did this was to move one part of the file at a time while commenting out the relevant parts in the old file, so that if something broke it would be easy to figure out where things were going wrong. Once I confirmed that everything was working, I removed the redundant code. I created two header files, one for the mesh and one for the effect, containing the platform-independent function declarations, and a couple of .cpp files containing the actual implementations for the specific APIs.

Separating the functions for the effect is a bit more complex, since the code has a lot of platform-independent and platform-dependent parts mixed together. Hence I created another .cpp file to hold the platform-independent initialization and cleanup.

Below is the code that binds the effect and draws the mesh which is common in both the Graphics files.

// Bind the shading data
{
    s_Effect.Bind();
}
// Draw the geometry
{
    s_Mesh.Draw();
}

As engine programmers, we often have to dig deep and write code that directly interfaces with the hardware. But since there are many differences between hardware platforms and the vendor APIs used to access them, it is easier for us and other programmers if we write platform-independent interfaces such as these.

When adding an additional triangle to draw, we have to keep in mind that DirectX and OpenGL differ in the order in which they expect vertices, with DirectX using the left-handed winding convention and OpenGL the opposite.

Finally, after moving the mesh and effect representations into their own platform-independent interfaces, the graphics class for each platform now only contains the code that renders the frame, which could also be moved into its own class to make the graphics class truly platform independent.

 

Visual Studio Graphics Debugger and RenderDoc:

The Visual Studio Graphics Debugger and RenderDoc are important tools when writing graphics software. Using them we can inspect the API calls made to DirectX (VS Graphics Debugger/RenderDoc) or OpenGL (RenderDoc) for a particular frame of the game. This is useful when there are graphics artefacts in the game and we want to find where in the render pipeline they are introduced. We can look at each API call and see what is being sent to the graphics card for that frame, which makes debugging graphics issues much easier.

Following are the screenshots from the VS Graphics Debugger showing the render process for a frame in the game

The game:

The same in RenderDoc (For OpenGL)

The game:

Optional Challenge: As a challenge, I created a “house” and a “tree” using 7 triangles. The hardest part was figuring out the points in screen coordinate space and adding them in the correct order for each renderer.

DirectX:

OpenGL:

Fixing the “Debug x86” bug: My solution had a strange bug where every configuration built perfectly except Debug x86. I first thought the issue was a references problem caused by the Graphics project not being updated, as discussed at this link. But even after fixing the references for the Application project, the issue persisted. After more investigation, I found that I had added a library reference to “Graphics.lib” in my game's project settings just for the Debug x86 configuration. The issue was resolved as soon as I removed this reference, and the game now builds in all configurations.

You can download the game from the links below

User Inputs

  1. Shift: Plays the game in slow motion when held
  2. Esc: Exits the game.

MyGame_Assignment2_x64 MyGame_Assignment2_x86

Game Engineering – Building Game Engine – Part 1

The point of the first assignment was to integrate the ‘Graphics’ project, given separately, into the main engine and add the necessary dependencies between the ‘Graphics’ project and the other projects in the engine solution, so that there are no errors when building the engine from scratch. The given graphics project consists of files specific to both DirectX and OpenGL, along with some common files containing wrapper classes for both implementations. This is required for the engine to be cross-platform while still using the latest technologies specific to particular operating systems. Even though most of the functionality is the same in both frameworks, the way it is achieved differs, with different API names and variable declarations.

After integrating the project, we created a sample game modelled on an example game given as part of the engine. The example game displays a white triangle, and we were required to change the name and logo of the window as part of the assignment. Changing the name of the window was as simple as changing a string. The logo, on the other hand, had to be an ‘.ico’ (icon) file, which I first had to add as an icon in the resource file, link to the image, and then define as a value in the resource header file, after which I could use it in the game.

The second requirement was to change the color of the triangle displayed by the game to any color other than white. I wrote a fragment shader that changes the color of the triangle being drawn on the screen. I used a time variable exposed to the shader to achieve the results seen below.

The shader constantly changes color based on the absolute values of the sine and cosine of time. Since the shader expects the red, green, and blue values to be between 0 and 1, this gives the desired output. The output is the same on DirectX and OpenGL, and since the functions I used are the same in both, I did not have to change anything.

Some projects in the engine depend on the graphics library and vice versa. All the projects in the engine depend on the application project, which serves as the base class for the other projects to inherit from. So adding the graphics project as a reference to the application project made sure it is included in every other project that depends on the graphics library. There was an issue where the application project was referencing a previous version of the graphics library and did not add graphics to the project dependencies even when it was added as a reference. This led to most of us having an error where the game would not build because it was unable to find the required libraries, and it took some time to figure out. I also had an issue where my game built for every configuration except Debug x86, which was also solved by this.

There were also a few optional challenges for us to complete.

  1. The first was to find a way to avoid re-declaring the constant buffers in each shader and declare them in one place. To accomplish this, I added the declarations for these constant buffers to the “shaders.inc” include file, which is included in all the shaders before each shader is built.
  2. The second was to either pause or slow down the rate of simulation of the game when the user presses a key. The engine already has functionality to take keyboard input and to change the simulation rate; we just had to tie the two together. In my game, the simulation runs at half speed while the “Shift” key is held. This is done by setting the simulation rate to 0.5, and when the user releases the key it is set back to 1.0.
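The input-to-rate mapping is a one-liner; this sketch uses a hypothetical function name for the decision the game makes each frame before passing the rate to the engine:

```cpp
#include <cassert>

// Half speed while Shift is held, normal speed otherwise
// (hypothetical name; the engine's actual API differs)
float GetSimulationRate(const bool i_isShiftHeld)
{
    return i_isShiftHeld ? 0.5f : 1.0f;
}
```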

Finally, I personally want to learn more about graphics and other low-level engine programming in this class. Even though the first assignment did not involve working directly with low-level graphics APIs, I like that we will be rendering objects on the screen through abstracted function calls, and I hope the class places more emphasis on graphics programming.

User Inputs

  1. Shift: Plays the game in slow motion when held
  2. Esc: Exits the game.

 

OpenGL(x86) Direct3D(x64)