As we know, almost all games contain some sort of UI. In my engine, I have added support for images to be shown as UI sprites. Each sprite is basically a quad drawn with four vertices, and its size is set from the game side. Since the quad's geometry never changes, I create a single quad the size of the viewport at the start of the game and scale it per UI element later. From the game, we send the size, which is between 0 and 1 for both width and height, and we also specify the position of the element, which is between -1 and 1 on the x and y axes, with the origin at the center.
The UI quad is drawn as a triangle strip. The data for a triangle strip consists only of the vertex information. Since we do not send any index data, which would cost sizeof(index) * number of indices, triangle strips are more efficient than regular indexed meshes for this case. When drawing the quads, I draw all the regular meshes first, switch the topology to triangle strips, and then draw the UI, which you can see in the screenshot below.
Since the UI has zero depth, the transformation matrix can live directly in projected space. Currently, the values I require the game to send are used as the transform values of this projected matrix. I pass this projected matrix to my vertex shader, and in the fragment shader I sample the material's texture and output the color.
Each sprite has its own material and transform but shares the vertex buffer and vertex layout. The vertex layout contains only two elements: position and UV data. So we are also being efficient by not sending color, normals, etc. Currently I am using four floats per vertex, but the struct could be made even smaller by using int8_t instead of float, since the quad only ever uses the values -1, 0, and 1 for everything in the struct.