June 17th, 2023
Let's continue with our look at how to render stuff with DirectX 11 inside a WinUI 3.0 window.
To make changes in the view, we need to be able to update the data we pass to the vertex shader while the program is running. Let's assign values to our fresh matrices in a new method called Update.
But first, things have gotten a bit out of hand with the CreateResources method, so I'm going to split it into three new methods to make the code easier to read. This is purely for organization, so I won't spell out what goes where; I'm mentioning it so that when I refer to one of these methods later, you'll know what I'm talking about.
private void SwapChainCanvas_Loaded(object sender, RoutedEventArgs e)
{
    CreateSwapChain();
    LoadModels();
    CreateShaders();
    CreateBuffers();
    SetRenderState();
    timer.Start();
}

private void LoadModels()
{
}

private void CreateShaders()
{
}

private void CreateBuffers()
{
}
Since we want to eventually shade our model, we need to bring normals into the process. This means that the Vertex struct we created earlier is going to get an extension.
[StructLayout(LayoutKind.Sequential)]
public struct Vertex
{
    public Vector3 Position;
    public Vector3 Normal;
}
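Keep in mind that the input layout we hand to the input assembler has to match this struct field for field. Here's a minimal sketch, assuming the semantic names POSITION and NORMAL in the vertex shader and Vortice's InputElementDescription type (adjust the names to match your own shader):
// Position occupies bytes 0-11 of each vertex, Normal starts at byte offset 12.
InputElementDescription[] inputElements = new[]
{
    new InputElementDescription("POSITION", 0, Format.R32G32B32_Float, 0, 0),
    new InputElementDescription("NORMAL", 0, Format.R32G32B32_Float, 12, 0)
};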
Then, in the LoadModels method, as we're iterating over the imported mesh to get the vertices from it, we can get the normals at the same time and save them using the Vertex structure.
mesh = model.Meshes[0];
vertices = new List<Vertex>();
for (int i = 0; i < mesh.Vertices.Count; i++)
{
    Vector3D vertex = mesh.Vertices[i];
    Vector3D normal = mesh.Normals[i];
    Vertex newVertex;
    // Swizzle the normals the same way as the positions, to convert them
    // from the imported model's coordinate system to the one we render in.
    newVertex.Position = new Vector3(vertex.X, vertex.Z, -vertex.Y);
    newVertex.Normal = new Vector3(normal.X, normal.Z, -normal.Y);
    vertices.Add(newVertex);
}
Normals are essential for calculating lighting and shading effects on 3D objects. A normal is a vector perpendicular to a surface. Normals provide information about the orientation of the surface at each vertex of a 3D object: they define which direction the surface is facing, whether towards or away from the viewer. This information is crucial for determining how light interacts with the surface.
Normals are used in lighting calculations to determine how light sources illuminate the surface of an object. By comparing the direction of the normals with the direction of incident light, DirectX can calculate the amount of light reflected or absorbed by the surface, resulting in realistic lighting effects.

Normals are also used to calculate shading effects, such as smooth shading or flat shading. Smooth shading involves interpolating normals across a surface, creating a smooth transition of lighting between vertices. Flat shading, on the other hand, uses the same normal for all vertices of a polygon, resulting in a faceted appearance.

Normals can also be visualized as lines or arrows extending from the vertices of an object. This visualization aids in understanding the orientation and smoothness of the surface, assisting with debugging and fine-tuning of object geometry.
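To make "comparing the direction of the normals with the direction of incident light" concrete: the standard Lambertian diffuse term is just a clamped dot product. A minimal C# sketch of the math (the shader will do the equivalent on the GPU; the names here are just for illustration):
using System;
using System.Numerics;

// Lambertian diffuse: a surface facing the light directly gets full intensity,
// a surface facing away from it gets none.
static float DiffuseFactor(Vector3 normal, Vector3 toLight)
{
    float d = Vector3.Dot(Vector3.Normalize(normal), Vector3.Normalize(toLight));
    return MathF.Max(0.0f, d); // Clamp so back-facing surfaces get 0, not negative light.
}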
Earlier, we were just passing a hard-coded size as ByteWidth to the vertex buffer. But with so much more going on in our vertices, it's better to let the compiler calculate the size of the thing. Just remember that taking sizeof of a user-defined struct like Vertex is only allowed in an unsafe context, so we have to wrap this in an unsafe block. (With two Vector3 fields, six floats in total, sizeof(Vertex) works out to 24 bytes.)
unsafe
{
    Vertex[] vertexArray = vertices.ToArray();
    BufferDescription vertexBufferDesc = new BufferDescription()
    {
        Usage = ResourceUsage.Default,
        ByteWidth = sizeof(Vertex) * vertexArray.Length,
        BindFlags = BindFlags.VertexBuffer,
        CPUAccessFlags = CpuAccessFlags.None
    };
    using DataStream dsVertex = DataStream.Create(vertexArray, true, true);
    vertexBuffer = device.CreateBuffer(vertexBufferDesc, dsVertex);
}
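If you want to double-check what the compiler came up with, a quick assertion will do. Just a sketch, using Debug.Assert from System.Diagnostics:
unsafe
{
    // Two Vector3 fields at 12 bytes each, laid out sequentially with no padding.
    Debug.Assert(sizeof(Vertex) == 24);
}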
Here's a common way of creating a depth buffer in Direct3D 11. The depth buffer is a critical component for 3D rendering, as it enables proper visibility determination by performing depth tests to decide which objects are in front of others from the viewer's perspective.
private ID3D11DepthStencilView depthStencilView;

public void CreateSwapChain()
{
    ...
    Texture2DDescription depthBufferDesc = new Texture2DDescription
    {
        Width = (int)SwapChainCanvas.Width,
        Height = (int)SwapChainCanvas.Height,
        MipLevels = 1,
        ArraySize = 1,
        Format = Format.D24_UNorm_S8_UInt, // 24 bits for depth, 8 bits for stencil
        SampleDescription = new SampleDescription(1, 0), // Adjust as needed
        Usage = ResourceUsage.Default,
        BindFlags = BindFlags.DepthStencil,
        CPUAccessFlags = CpuAccessFlags.None,
        MiscFlags = ResourceOptionFlags.None,
    };
    ID3D11Texture2D depthBuffer = device.CreateTexture2D(depthBufferDesc);
    ...
}
This block of code creates a 2D texture that will serve as the depth buffer in the Direct3D 11 rendering pipeline.

- Width and Height of the depth buffer are set to match the dimensions of the SwapChainCanvas. The depth buffer needs to have the same dimensions as the render target view so that every pixel in the render target has a corresponding depth value.
- MipLevels and ArraySize specify that the texture has only one mipmap level and is not an array texture. Mipmap levels are used to create lower-resolution versions of a texture for objects that are farther away, but for a depth buffer we only need one level. Similarly, an array texture is a texture containing multiple textures, but we only need one depth buffer.
- Format.D24_UNorm_S8_UInt indicates a 32-bit format with 24 bits for the depth value and 8 bits for the stencil value. The depth value is used for depth testing, while the stencil value is used for stencil testing (for more complex rendering effects).
- SampleDescription describes the multisampling parameters for the texture. Multisampling is a technique used to reduce aliasing (jagged edges) in an image. Here, the SampleDescription is set to (1, 0), indicating that no multisampling is used.
- ResourceUsage.Default specifies how the texture is expected to be used in the pipeline and by whom. It is the most common usage and means that the GPU will read from and write to the texture.
- BindFlags.DepthStencil indicates that the texture will be bound to the pipeline as a depth stencil, allowing it to be used in depth and stencil testing.
- CpuAccessFlags.None tells that the CPU does not need to access this resource, only the GPU.
- MiscFlags tells that no other miscellaneous options are needed for this resource.

The next piece of code creates a depth stencil view, which is a representation of the depth stencil buffer. Direct3D uses it for depth testing (to determine whether a pixel should be drawn based on its depth compared to other pixels) and stencil testing.
// Continuing inside CreateSwapChain, right after creating depthBuffer:
DepthStencilViewDescription depthStencilViewDesc = new DepthStencilViewDescription
{
    Format = depthBufferDesc.Format,
    ViewDimension = DepthStencilViewDimension.Texture2D,
    Flags = DepthStencilViewFlags.None,
};
depthStencilView = device.CreateateDepthStencilView(depthBuffer, depthStencilViewDesc);
- Format is set to the same format as the depth buffer. This ensures that the depth stencil view and the depth buffer are compatible.
- ViewDimension is set to DepthStencilViewDimension.Texture2D, which specifies that the depth stencil view will treat the resource as a 2D texture.
- Finally, the CreateDepthStencilView method is called with the depthBuffer and depthStencilViewDesc as arguments to create the depth stencil view.

The depth stencil view provides a way to access the depth stencil buffer's data. While the depth stencil buffer stores the actual depth and stencil data, the depth stencil view determines how that data is interpreted and accessed during the rendering process.
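As a side note, when the texture has a concrete (non-typeless) format like ours, Direct3D can derive the view description from the texture itself. If your Vortice version exposes the overload with an optional description, this one-liner should be equivalent for our simple case:
depthStencilView = device.CreateDepthStencilView(depthBuffer);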
If we want to add shading, we're going to need at least one light. So, let's define a LightPosition and throw it to the shaders with a couple of matrices in the constant buffer.
[StructLayout(LayoutKind.Sequential, Pack = 16)]
struct ConstantBufferData
{
    public Matrix4x4 WorldViewProjection;
    public Matrix4x4 World;
    public Vector4 LightPosition;
}
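One thing worth knowing: Direct3D 11 requires constant buffers to have a size that is a multiple of 16 bytes. Ours is 64 + 64 + 16 = 144 bytes, so we're fine, but a quick sanity check never hurts. A minimal sketch, using Marshal from System.Runtime.InteropServices:
using System.Diagnostics;
using System.Runtime.InteropServices;

// Two Matrix4x4 fields (64 bytes each) plus one Vector4 (16 bytes) = 144 bytes,
// which is a multiple of 16, as D3D11 requires for constant buffers.
Debug.Assert(Marshal.SizeOf<ConstantBufferData>() == 144);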
- The WorldViewProjection matrix is used to transform objects in a 3D scene. It combines the world, view, and projection matrices to determine the final position, size, and orientation of an object on the screen. We're moving this multiplication outside the shader for clarity.
- The World matrix, on the other hand, represents the transformation of an object in its own local coordinate system. It specifies the object's position, rotation, and scale relative to its own origin. We're still going to need this one on its own.
- LightPosition (x, y, z, w) specifies the position of the light source in the 3D space of your scene. The x, y, and z values define the location of the light, while the w component in D3D11 is primarily used to enable perspective division during the vertex processing stage. However, some applications use it to indicate whether the light is a point light (w = 1.0) or a directional light (w = 0.0). You could do that in your application, if you so wish.

Since we're trying to shade the surface of our 3D object, we have to move away from rendering it as a wireframe. So, in the RasterizerDescription over in the CreateShaders method, change the FillMode parameter to Solid.
RasterizerDescription rasterizerStateDescription = new RasterizerDescription(CullMode.Back, FillMode.Solid)
{
    ...
};
The Update() method needs a bit of updating. We want to pass some lighting data to the shaders via the constant buffer, so let's come up with some values.
private void Update()
{
    Vector3 lightPosition = new Vector3(0.0f, 1.0f, -5.0f);

    // Rotate the model a little every frame; CreateRotationY takes radians.
    float angle = 0.05f;
    worldMatrix = worldMatrix * Matrix4x4.CreateRotationY(angle);
    Matrix4x4 worldViewProjectionMatrix = worldMatrix * (viewMatrix * projectionMatrix);

    ConstantBufferData data = new ConstantBufferData();
    data.WorldViewProjection = worldViewProjectionMatrix;
    data.World = worldMatrix;
    data.LightPosition = new Vector4(lightPosition, 1);

    deviceContext.UpdateSubresource(data, constantBuffer);
}
- lightPosition sets the position of the light source in the scene. The -5.0 value on the Z axis places the light around the same area as our camera.
- worldViewProjectionMatrix combines the worldMatrix with the viewMatrix and projectionMatrix. It represents the overall transformation of the object from its local coordinates to the final position on the screen. As before, the order of the multiplications here does matter.
- data.LightPosition expects a Vector4, so we're padding our lightPosition with an extra 1 for the w coordinate.
- UpdateSubresource updates the constantBuffer with the new data to pass to the shader. This allows the shader to use the updated matrices and lighting information when rendering the next frame.

The Draw method also gets a couple of new lines for the depth buffer:

private void Draw()
{
    deviceContext.ClearDepthStencilView(depthStencilView, DepthStencilClearFlags.Depth, 1.0f, 0);
    deviceContext.OMSetRenderTargets(renderTargetView, depthStencilView);
    deviceContext.ClearRenderTargetView(renderTargetView, canvasColor);
    deviceContext.DrawIndexed(indices.Count, 0, 0);
    swapChain.Present(1, PresentFlags.None);
}
ClearDepthStencilView clears the depth-stencil view. In 3D rendering, the depth buffer (also known as a z-buffer) keeps track of the distance between the camera and the objects in the scene for each pixel. This is essential for determining which objects are in front of others and should thus be drawn over them. The "depth-stencil view" is a way to access and manipulate the depth buffer. The ClearDepthStencilView function resets the values in the depth buffer to their initial state before drawing a new frame. This is necessary because the results of rendering one frame are generally not relevant for the next frame, so we want to start with a clean slate.
In this case, the depth buffer is being cleared, with each pixel set to a depth of 1.0, which represents the farthest possible depth; initially, there are no objects in the scene. The 0 passed in for the stencil clear value would clear the stencil buffer to 0 if DepthStencilClearFlags.Stencil were also included in the clear flags.
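For instance, if we were making use of the stencil buffer, clearing both in a single call would look like this (the same method, just with combined flags):
deviceContext.ClearDepthStencilView(depthStencilView,
    DepthStencilClearFlags.Depth | DepthStencilClearFlags.Stencil, 1.0f, 0);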
OMSetRenderTargets sets the render targets for the output-merger stage of the graphics pipeline, which is the final stage that produces the rendered image. The render targets are the locations where the results of the rendering process are stored. In most cases, this will be a back buffer that is then presented to the screen. The OMSetRenderTargets function is used to set one or more render targets, as well as, optionally, a depth-stencil view.
In this case, the function is setting both the render target view (which we already created previously), where the rendered image will be drawn, and the depth-stencil view, which is used for depth and stencil testing to determine which pixels of the rendered objects are visible and should be drawn to the render target. I'm adding this because, in the wireframe view of our model, I could already see pieces of geometry that should have been hidden behind the rest of the model. This should fix that issue.
With this setup, we can get to altering the shaders in the next part.
Visual Studio project:
d3dwinui3pt9.zip (50 KB)
D3DWinUI3 part 9 in GitHub