Game Engineering – Building Game Engine – Part 5

Adding a representation for camera
A camera is an object that specifies the view to be shown on screen. A game can have multiple cameras, and the game developer must be able to specify which camera is used to render a particular frame; the engine must then render the view from that camera's perspective. To achieve this, we first transform world space into camera space and then map camera space to screen (projected) space. These transforms are matrices stored in the per-frame constant buffer, and the vertex shader uses them to position geometry on screen.
We also added movement to the camera so it can move around world space along the X and Z axes.
Adding a representation for a game object
A game object can be thought of as the basic element used to represent an object in the game. A game object can be anything in the game, but it has certain common properties. My implementation of the game object has a rigid body to keep track of position and rotation and to apply forces. It also knows which mesh it is made of and which effect should be used to draw that mesh.

The class also contains member functions to set the position and rotation, and to update the mesh and the effect that the game object uses. Instead of storing the mesh and effect separately, I use a struct that contains a mesh/effect pair.

Submitting Meshes to render
Renderers are low-level engine components, and we do not want low-level engine components to know about high-level ones such as game objects. Since meshes are now part of the game object, when we need to render a mesh we can either send the whole game object to the renderer or send only the details the renderer actually needs: the effect and the mesh.
But since we are moving from rendering in 2D to rendering in 3D, we need a way to transform the local coordinate space into world coordinates. We can do this by building a local-to-world transformation matrix from the orientation and position of the game object. So my final interface to submit mesh/effect pairs looks like:
void eae6320::Graphics::SetEffectsAndMeshesToRender(sEffectsAndMeshesToRender i_EffectsAndMeshes[eae6320::Graphics::m_maxNumberofMeshesAndEffects], eae6320::Math::cMatrix_transformation i_LocaltoWorldTransforms[m_maxNumberofMeshesAndEffects], unsigned int i_NumberOfEffectsAndMeshesToRender)

To move the objects around the world, we need a constant buffer that can be updated every draw call instead of every frame. We also need to store the data corresponding to this constant buffer, which holds the local-to-world transformation of every game object in the scene. After integrating this per-draw-call constant data, the size of the struct that stores the data required to render a frame grew to 1768 bytes in OpenGL and 1936 bytes in Direct3D.

Movement of camera and Game objects and the importance of Prediction:

After adding movement to both the camera and various game objects in the scene, I noticed that the movement appeared jerky. This is because the simulation runs at a much faster rate than the game renders. To smooth the movement, we predict where the camera or game object will be based on its velocity and the elapsed time, and render that predicted position instead of the last simulated one. This is also why we update the velocity of the body instead of directly manipulating its position.

Key Bindings:

1. WASD to move the rectangle
2. IJKL to move the triangle
3. F2 to swap the effects
4. F3 to swap the triangle to a house
5. Left and Right arrows to move the camera along the X axis.
6. Up and Down arrows to move the camera along the Z axis.

Final Output:

MyGame_x64       MyGame_x86

