DirectX 11 and WinUI 3.0

Shaders and Buffers

Let's continue with our look at how to render stuff with DirectX 11 inside a WinUI 3.0 window. Before we can get to drawing, we need to address a few more things. We need shaders that determine what our shape will look like, and we need a vertex buffer and an index buffer to determine the coordinates and structure of our shape.

Shaders Gonna Shade

Even when drawing a blank SwapChainPanel on the screen, a graphics application still needs shaders to render anything at all. Shaders are small programs that run on the GPU (Graphics Processing Unit) and are responsible for processing and manipulating vertices and pixels during the rendering pipeline.

The vertex shader, in a typical rendering pipeline, is responsible for transforming individual vertices from their original object space into clip space, effectively projecting them onto the two-dimensional screen. It operates on each vertex independently, performing position transformations, lighting calculations, texture coordinate generation, or any other required vertex-level operations.

The pixel shader (also known as a fragment shader) takes information such as texture data, lighting information, and depth information, and uses it to calculate the final color of each pixel. This can involve operations like texture sampling, lighting calculations, bump mapping, applying shadows, and many other effects.

Shaders allow for complex and customizable rendering effects by performing mathematical calculations and operations on vertices and pixels. The graphics pipeline relies on shaders to perform the necessary computations and transformations to display the output correctly.

To use a shader, we need a shader file. So, create a new file called Shader.hlsl.

hlsl
struct VertexInputType
{
	float3 position : POSITION;
};

struct PixelInputType
{
	float4 position : SV_POSITION;
};

PixelInputType VS(VertexInputType input)
{
	PixelInputType output;
	output.position = float4(input.position, 1.0);
	return output;
}
  1. struct VertexInputType defines a structure representing the input attributes for each vertex in the shader. In this case, it has a single attribute position of type float3 (a 3-component float vector) with the semantic name POSITION indicating that this attribute represents the position of the vertex in 3D space.
  2. struct PixelInputType defines a structure representing the output attributes for each vertex that will be interpolated and passed to the pixel shader. It has a single attribute position of type float4 (a 4-component float vector) with the semantic name SV_POSITION indicating that this attribute represents the position of the vertex in screen space after transformation.
  3. PixelInputType VS(VertexInputType input) is the vertex shader function, named VS. It takes an input of type VertexInputType and returns an output of type PixelInputType. It is responsible for transforming the input vertex attributes into the output attributes that will be passed to the pixel shader.
  4. PixelInputType output creates a variable of type PixelInputType to store the output attributes of the vertex shader. PixelInputType structure is a data structure representing the information needed by the pixel shader. It typically contains attributes such as the position of the pixel on the screen and any other data required for rendering, such as texture coordinates or color information. The pixel shader uses this data to determine the final color of each pixel on the screen. Think of it as a container that holds the necessary information for coloring the pixels of an image.
  5. output.position = float4(input.position, 1.0) assigns the transformed position of the vertex to the output.position attribute. It creates a float4 vector using the input.position (which represents the original vertex position) and sets the w component to 1.0 (indicating it's a point in 3D space, rather than a direction or other type of vector).
  6. return output returns the final output attributes for the vertex.

So, in this vertex shader, we have a single input attribute representing the vertex position. The vertex shader passes through the position as the output position without any transformations. The resulting output is a PixelInputType structure containing the position.

This basic vertex shader is useful when you don't need any complex transformations or additional attributes, such as texture coordinates or normals. It provides a minimal setup for rendering objects with their original positions.
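That w component from step 5 is worth a brief detour. Setting w to 1.0 marks the value as a point rather than a direction: when a translation is applied, points move but directions don't. A minimal arithmetic sketch (plain C#, separate from the rendering code) shows the difference:

```csharp
using System;

// Apply a translation to a 4-component vector.
// The w component decides whether the translation has any effect.
static float[] Translate(float tx, float ty, float tz, float[] v) =>
	new float[] { v[0] + tx * v[3], v[1] + ty * v[3], v[2] + tz * v[3], v[3] };

float[] point     = { 0f, 0.5f, 0f, 1f }; // w = 1: a position
float[] direction = { 0f, 0.5f, 0f, 0f }; // w = 0: a direction

float[] movedPoint     = Translate(2f, 0f, 0f, point);
float[] movedDirection = Translate(2f, 0f, 0f, direction);

Console.WriteLine($"point.x = {movedPoint[0]}");         // 2
Console.WriteLine($"direction.x = {movedDirection[0]}"); // 0
```

This is exactly why the projection math in later chapters works on float4 values rather than float3.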

In addition, we'll need a pixel shader. Since the shaders we are using are so simple, we can just add the pixel shader at the end of the Shader.hlsl file. Remember to switch the Build Action of the file to "Content" and Copy to Output Directory to "Copy if newer".

hlsl
...
float4 PS() : SV_TARGET
{
	return float4(1.0f, 0.0f, 0.0f, 1.0f);
}

The PS() function is the entry point for our pixel shader. It returns a 4-component float vector representing an RGBA color. And since we're hard-coding the value here, the shader will always return red.

Compiling Shaders for 3D Graphics

Next, we'll be creating a bunch of resources and then setting some render states with those resources. To keep things organized, let's create a couple of methods for these actions and call them when the SwapChainCanvas_Loaded event gets triggered.

C#
private void SwapChainCanvas_Loaded(object sender, RoutedEventArgs e)
{
	CreateSwapChain();
	CreateResources();
	SetRenderState();
	timer.Start();
}

public void CreateResources()
{
}

public void SetRenderState()
{
}

To be able to utilize shaders, they need to be compiled. That's where the next Vortice library comes in handy.

Vortice.D3DCompiler: This is a specific part of Vortice that deals with compiling and working with shaders to determine how objects are displayed on the screen.

Alright, so we're diving into the fascinating world of creating shaders for our 3D graphics. Let's create global variables for the ID3D11VertexShader and ID3D11PixelShader and also a string pointing to the Shader.hlsl file we just created (assuming it's in the project root).

C#
private ID3D11VertexShader vertexShader;
private ID3D11PixelShader pixelShader;
public void CreateResources()
{
	string shaderFile = Path.Combine(AppContext.BaseDirectory, "Shader.hlsl");

	var vertexEntryPoint = "VS";
	var vertexProfile = "vs_5_0";
	ReadOnlyMemory<byte> vertexShaderByteCode = Compiler.CompileFromFile(shaderFile, vertexEntryPoint, vertexProfile);

	var pixelEntryPoint = "PS";
	var pixelProfile = "ps_5_0";
	ReadOnlyMemory<byte> pixelShaderByteCode = Compiler.CompileFromFile(shaderFile, pixelEntryPoint, pixelProfile);
}

You'll notice that we run two compilation passes, one after the other. First we define vertexEntryPoint, which is the name of the function acting as the entry point for the vertex shader in the HLSL file; we set it to VS. The vertexProfile specifies the shader model version to compile against, and shader model 5.0 is the one used with Direct3D 11. We then define the same pair of properties for the pixel shader.

After that, the Shader.hlsl file needs to be translated into a format that our graphics card can understand, called byte code. So, we compile it with the Compiler.CompileFromFile method, running it twice, once per entry point, to get the byte code for both shaders. Once we have the byte code, we create vertex shader and pixel shader objects through the ID3D11VertexShader and ID3D11PixelShader interfaces.

C#
vertexShader = device.CreateVertexShader(vertexShaderByteCode.Span);
pixelShader = device.CreatePixelShader(pixelShaderByteCode.Span);

We then head to the SetRenderState method to set these shader objects as the active ones using VSSetShader and PSSetShader on the device context. This is essentially giving the green light to our shader code to work its magic on each vertex of our 3D objects. The vertex shader will take those input vertices, and perform calculations and transformations on them, and the pixel shader will make each pixel on the screen look just right.

C#
public void SetRenderState()
{
	deviceContext.VSSetShader(vertexShader, null, 0);
	deviceContext.PSSetShader(pixelShader, null, 0);
}

The second parameter, null, is an optional array of class instances associated with the shader. In this case, we're not providing any class instances, so it's set to null. The last parameter is the number of class instances in that array, which we accordingly leave as 0.

What Is the Vertex Buffer?

Oh, the vertex buffer! It's like a treasure chest filled with all the important details of our vertices. Imagine having a neat and organized storage container that holds the coordinates, colors, textures, and other attributes that define each vertex.

We use the vertex buffer in graphics programming to efficiently manage and organize our vertex data. It's like having a reliable assistant that keeps track of all the necessary information about our 3D objects. To create a vertex buffer, we need to set up a storage space for the vertex data. We fill it with details like position coordinates, color values, and texture coordinates for each vertex. This data is essential for the graphics pipeline to render our geometry.

When it's time to render, the graphics pipeline accesses the vertex buffer and retrieves the vertex data. It knows exactly where to find the attributes it needs for each vertex thanks to the structure and layout of the buffer. With the help of the vertex buffer, we can efficiently manage and manipulate our vertex data, allowing us to create and render even complex 3D objects effortlessly.

Let's define a simple shape, like a triangle, using the vertices. Here's how to define vertices for a simple triangle lying in the z = 0 plane, with x and y given in normalized device coordinates (ranging from -1 to 1).

C#
private ID3D11Buffer vertexBuffer;

public void CreateResources()
{
	...
	float[] vertices = new float[]
	{
		0f, 0.5f, 0f, // Top-center
		0.5f, -0.5f, 0f, // Bottom-right
		-0.5f, -0.5f, 0f, // Bottom-left
	};
}
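A note on those coordinate values: because our vertex shader passes positions through untouched, they are interpreted as normalized device coordinates, where x and y run from -1 to 1 regardless of the panel's pixel size. As a rough illustration, here's how they would map onto a hypothetical 800×600 panel (the panel size is just an assumption for this example):

```csharp
using System;

// Map a normalized device coordinate (-1..1) to pixel coordinates.
// Note: in NDC, y points up, while pixel y grows downward.
static (float px, float py) NdcToPixel(float x, float y, float width, float height) =>
	((x + 1f) * 0.5f * width, (1f - y) * 0.5f * height);

var top         = NdcToPixel(0f, 0.5f, 800f, 600f);    // Top-center vertex
var bottomRight = NdcToPixel(0.5f, -0.5f, 800f, 600f); // Bottom-right vertex

Console.WriteLine($"top-center -> ({top.px}, {top.py})");                   // (400, 150)
Console.WriteLine($"bottom-right -> ({bottomRight.px}, {bottomRight.py})"); // (600, 450)
```

In the real pipeline this mapping is performed by the viewport transform, so we never do it by hand; the sketch only shows where the triangle ends up on screen.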

Next, let's write another description, this time for the Vertex Buffer.

C#
BufferDescription vertexBufferDesc = new BufferDescription()
{
	Usage = ResourceUsage.Default,
	ByteWidth = sizeof(float) * vertices.Length,
	BindFlags = BindFlags.VertexBuffer,
	CPUAccessFlags = CpuAccessFlags.None
};
  • Usage: It is set to ResourceUsage.Default, indicating that the vertex buffer will be used for both reading and writing by the GPU.
  • ByteWidth: This value represents the total size, in bytes, of the vertex buffer. Since vertices is a flat float array, its Length already counts every individual component (x, y, z), so the size is simply sizeof(float) * vertices.Length. For our triangle, that's 9 floats × 4 bytes = 36 bytes.
  • BindFlags: It is set to BindFlags.VertexBuffer, indicating that the vertex buffer will be bound as a vertex buffer resource in the graphics pipeline. This means it will be used to store the vertex data that will be processed during rendering.
  • CPUAccessFlags: It is set to CpuAccessFlags.None, indicating that the CPU will have no direct access to the vertex buffer, meaning that it cannot read from or write to the buffer directly.
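It's worth doing the ByteWidth math by hand once, because the vertices array is flat and its Length already counts every individual float. A quick sanity check:

```csharp
using System;

float[] vertices =
{
	0f, 0.5f, 0f,    // Top-center
	0.5f, -0.5f, 0f, // Bottom-right
	-0.5f, -0.5f, 0f // Bottom-left
};

// 9 floats × 4 bytes each = 36 bytes of vertex data in total.
int byteWidth = sizeof(float) * vertices.Length;
Console.WriteLine(byteWidth); // 36
```

If the ByteWidth in the description doesn't match the data you pass in, CreateBuffer will fail, so this is a useful number to keep in your head while debugging.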

When we have the Description ready, we can create the vertex buffer.

C#
using DataStream dsVertex = DataStream.Create(vertices, true, true);
vertexBuffer = device.CreateBuffer(vertexBufferDesc, dsVertex);

We use the DataStream.Create method to create a data stream from the vertices array. The first parameter is the array we want to put into the data stream, and the two boolean parameters mark the stream as readable and writable, so we can still inspect or modify the data if needed. This data stream allows us to efficiently transfer the vertex data to the graphics device.

Using the device, we create the vertex buffer by passing in the vertexBufferDesc and the data stream dsVertex that we just created. This creates a buffer object that holds the vertex data.

Then we can go ahead and call IASetVertexBuffers in SetRenderState, telling the device context to use this vertex buffer as the active one.

C#
private int stride = sizeof(float) * 3;
private int offset = 0;

public void SetRenderState()
{
	...
	deviceContext.IASetVertexBuffers(
		0, 
		new[] { vertexBuffer }, 
		new[] { stride }, 
		new[] { offset });
}
  • 0: The input slot where the vertex buffer is to be bound.
  • vertexBuffers: An array of vertex buffers (ID3D11Buffer) to bind to the device.
  • strides: An array of stride integers. Stride is used quite a bit in graphics programming, and it refers to the size (in bytes) of each element in a vertex buffer. It represents the offset between consecutive elements in the buffer. In other words, it indicates the amount of memory needed to store a single vertex. In our app, a vertex consists of three floating-point values (x, y, z) and each value is represented by a 32-bit (4-byte) floating-point number. So, the stride is sizeof(float) * 3, which is the total size of the vertex. Just remember, this value will change with more complex data structures e.g. involving normals. The stride ensures that the GPU can properly process and access the individual vertex attributes during rendering.
  • offsets: An array of integers that represent the number of bytes between the beginning of the buffer and the first element to use.
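To make the stride concrete, here's a small sketch of how it grows with richer vertex layouts; the layouts beyond position-only are hypothetical extensions, not something this tutorial uses:

```csharp
using System;

// Stride = bytes per vertex. Each added attribute widens every vertex.
int positionOnly = sizeof(float) * 3;           // x, y, z
int withNormal   = sizeof(float) * (3 + 3);     // + nx, ny, nz
int withTexCoord = sizeof(float) * (3 + 3 + 2); // + u, v

Console.WriteLine($"{positionOnly}, {withNormal}, {withTexCoord}"); // 12, 24, 32
```

Whenever you extend the vertex structure, remember to update the stride passed to IASetVertexBuffers to match, or the GPU will read the buffer at the wrong boundaries.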

And the Index Buffer?

The index buffer is an optimization tool. It is like a roadmap that guides the graphics pipeline on how to connect the vertices and form the desired shapes. This allows us to optimize the rendering process by reusing vertices and minimizing the amount of data that needs to be processed.

To create an index buffer, we define an array of indices that reference the vertices in the vertex buffer. Each index represents a specific vertex in the buffer, and the order of these indices determines how the vertices are connected. By using an index buffer, we can avoid duplicating vertices and save memory. It also allows us to create more complex shapes by reusing vertices and defining the connectivity between them.

This is particularly useful when dealing with large meshes or models that contain thousands or even millions of vertices. Think of it like assembling a puzzle. The index buffer tells us which pieces (vertices) to connect and in what order, allowing us to recreate the complete picture efficiently. Without an index buffer, we would have to specify each vertex individually, resulting in redundancy and increased memory usage.
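The savings are easy to put numbers on. Take a cube: it has 8 unique corner positions but 12 triangles, i.e. 36 triangle corners. A back-of-the-envelope sketch, using the 12-byte position-only vertices from this tutorial:

```csharp
using System;

int vertexSize = sizeof(float) * 3; // 12 bytes per position-only vertex
int indexSize  = sizeof(int);       // 4 bytes per index

// Without an index buffer: every triangle corner is a full vertex.
int withoutIndexBuffer = 36 * vertexSize;              // 432 bytes

// With an index buffer: 8 unique vertices + 36 indices.
int withIndexBuffer = 8 * vertexSize + 36 * indexSize; // 96 + 144 = 240 bytes

Console.WriteLine($"{withoutIndexBuffer} vs {withIndexBuffer} bytes");
```

The gap widens further for real meshes with richer vertices, since each shared vertex avoided saves a full stride's worth of bytes while costing only one small index.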

Since we are just making a triangle, the index buffer is going to be pretty simple.

C#
private ID3D11Buffer indexBuffer;

public void CreateResources()
{
	...
	int[] indices = new int[]
	{
		0, 1, 2,
	};
}

The indices array contains the indices of the vertices that make up the triangles; each set of three consecutive indices represents one triangle. Our triangle is formed by the vertices with indices 0, 1, and 2. If we wanted a second, separate triangle, we'd add three more vertices and the indices 3, 4, and 5. The real payoff comes when triangles share vertices: a quad, for example, needs only four vertices, indexed as 0, 1, 2 and 0, 2, 3.

Next, we need the description for the index buffer.

C#
BufferDescription indexBufferDesc = new BufferDescription
{
	Usage = ResourceUsage.Default,
	ByteWidth = sizeof(uint) * indices.Length,
	BindFlags = BindFlags.IndexBuffer,
	CPUAccessFlags = CpuAccessFlags.None,
};

We set the Usage to ResourceUsage.Default, which means it will be used as a standard resource. The ByteWidth is calculated by multiplying the size of a uint (4 bytes) by the number of indices in the indices array; our indices are declared as int, but the size is the same. We specify that this buffer will be used as an index buffer by setting the BindFlags to BindFlags.IndexBuffer. Lastly, we set the CPUAccessFlags to CpuAccessFlags.None, indicating that the CPU does not need direct access to this buffer.

C#
using DataStream dsIndex = DataStream.Create(indices, true, true);
indexBuffer = device.CreateBuffer(indexBufferDesc, dsIndex);

And similarly to the vertex buffer, using the device, we create the index buffer by passing in the indexBufferDesc and the data stream dsIndex that we just created. This creates a buffer object that holds the index data.

Finally, we set the index buffer in the input assembler stage of the graphics pipeline using the IASetIndexBuffer method. The input assembler stage is like the backstage area where everything is prepared before the show begins. So, we're saying, "Hey, pipeline, use this index buffer to determine the order of the vertices!" We provide the index buffer, specify the format of the indices (Format.R32_UInt, so 32-bit unsigned integers), and set the offset to 0.

C#
public void SetRenderState()
{
	...
	deviceContext.IASetIndexBuffer(indexBuffer, Format.R32_UInt, 0);
}