DirectX Paint and WinUI 3.0

2D Texture

We've got a bunch of things on the list to define before we can get our texture out on the screen. We'll need the Texture2D object, a shader resource view, a sampler state, and a blend state.

Let's get on with creating the 2D texture for the brush.

C#
Texture2DDescription brushTextureDesc = new Texture2DDescription()
{
	Width = bitmapWidth,
	Height = bitmapHeight,
	MipLevels = 1,
	ArraySize = 1,
	Format = Format.R8G8B8A8_UNorm,
	SampleDescription = new SampleDescription(1, 0),
	Usage = ResourceUsage.Default,
	BindFlags = BindFlags.ShaderResource,
	CPUAccessFlags = CpuAccessFlags.None,
	MiscFlags = ResourceOptionFlags.None
};

ID3D11Texture2D brushTexture = device.CreateTexture2D(brushTextureDesc, subresourceData);

// CreateTexture2D copies the initial data into the texture, so the unmanaged
// buffer can be freed right away.
Marshal.FreeHGlobal(dataPointer);

Just as when we created the depth buffer, the Texture2DDescription is filled with the settings needed to properly initialize the texture. The noteworthy parts are setting the Width and Height correctly, setting Format to 32-bit unsigned normalized (range 0.0 to 1.0) RGBA, and setting BindFlags to ShaderResource, since that's how this texture will be used.
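
The subresourceData and dataPointer used above come from the bitmap loading we did earlier. In case you're jumping in here, a minimal sketch of that setup might look something like this, assuming the brush bitmap's 32-bit RGBA bytes are already sitting in a byte array called pixelData (that name is just for illustration):

C#
// A sketch, not the exact loading code: marshal the managed pixel bytes
// into unmanaged memory so D3D11 can read them at texture creation time.
IntPtr dataPointer = Marshal.AllocHGlobal(pixelData.Length);
Marshal.Copy(pixelData, 0, dataPointer, pixelData.Length);

// RowPitch is the byte size of one row of pixels: width * 4 bytes per RGBA texel.
SubresourceData subresourceData = new SubresourceData(dataPointer, bitmapWidth * 4);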

Shader Resource View

Let's then do a shader resource view (SRV) for the 2D texture.

C#
private ID3D11ShaderResourceView brushSRV;

It's sort of trivial with such a small amount of data, but for practice, we might as well get used to passing textures to shaders via SRVs. In Direct3D 11, a constant buffer is capped at 4096 16-byte constants, or 64 KB. So whenever we need to pass an amount of data that exceeds that limit, we have to use alternative methods, such as structured buffers or shader resource views.

So, here's the description.

C#
ShaderResourceViewDescription brushSRVDesc = new ShaderResourceViewDescription()
{
	Format = brushTextureDesc.Format,
	ViewDimension = ShaderResourceViewDimension.Texture2D,
	Texture2D = new Texture2DShaderResourceView()
	{
		MipLevels = 1,
		MostDetailedMip = 0
	}
};

brushSRV = device.CreateShaderResourceView(brushTexture, brushSRVDesc);

The interesting part here is the mip values. MipLevels set to 1 indicates that only the base level (the most detailed one) of the texture is used. And MostDetailedMip set to 0 selects that base level (level 0) as the starting mip. Mip levels represent progressively smaller, precomputed versions of a texture at different levels of detail.
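
We don't need more than the base level for a flat brush stamp, but if you ever want a full mip chain, the descriptions would change along these lines. This is just a hedged sketch of the general D3D11 approach, not something this project uses:

C#
// Sketch only: a texture set up for GPU-generated mipmaps.
// MipLevels = 0 asks D3D11 to allocate the full mip chain, and GenerateMips
// requires both the RenderTarget and ShaderResource bind flags.
Texture2DDescription mippedDesc = new Texture2DDescription()
{
	Width = bitmapWidth,
	Height = bitmapHeight,
	MipLevels = 0,
	ArraySize = 1,
	Format = Format.R8G8B8A8_UNorm,
	SampleDescription = new SampleDescription(1, 0),
	Usage = ResourceUsage.Default,
	BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget,
	CPUAccessFlags = CpuAccessFlags.None,
	MiscFlags = ResourceOptionFlags.GenerateMips
};

// After creating the texture and a view over it, the chain is filled with:
// deviceContext.GenerateMips(srv);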

C#
deviceContext.PSSetShaderResources(0, 1, new[] { brushSRV });

In SetRenderState, we tell the device context to bind the brush SRV as a pixel shader resource. The other parameters define the start slot and the number of resource views.

Sampler State

A sampler state is used to define how textures are sampled during rendering. It determines how texture data is accessed and interpolated to generate pixel colors on the screen.

C#
private ID3D11SamplerState samplerState;

A sampler state controls how neighboring texels (texture pixels) are combined when a texture is accessed at a location between texels. Filtering techniques, such as linear or anisotropic filtering, help smooth out the texture when it is magnified or minified, and sampler states let you specify the desired filtering mode. For example, in the pixel shader, you might have a texture and texture coordinates for each pixel. You want to get the color from the texture at the spot given by the texture coordinates, and the sampler state decides how this is done.

Then there's texture addressing, which determines how texture coordinates outside the [0, 1] range are handled: the sampler could wrap around to the other side of the texture, or simply reuse the color from the edge. There's mipmapping, a technique that uses precomputed versions of a texture at different levels of detail. And we can control anisotropic filtering, a more advanced filtering technique that improves the quality of texture sampling when surfaces are viewed from oblique angles. As for the filtering itself, the sampler can pick the nearest color from the texture, or blend together colors from around the sampled spot.

C#
SamplerDescription samplerDescription = new SamplerDescription()
{
	Filter = Filter.MinMagMipLinear,
	AddressU = TextureAddressMode.Clamp,
	AddressV = TextureAddressMode.Clamp,
	AddressW = TextureAddressMode.Clamp,
	MipLODBias = 0,
	MaxAnisotropy = 1,
	ComparisonFunc = ComparisonFunction.Always,
	BorderColor = new Color4(0, 0, 0, 0),
	MinLOD = 0,
	MaxLOD = float.MaxValue
};

samplerState = device.CreateSamplerState(samplerDescription);
  • Filter.MinMagMipLinear indicates that the sampler will use linear interpolation for minification, magnification, and sampling between mip levels.
  • TextureAddressMode.Clamp is set for the AddressU, AddressV, and AddressW properties, so texture coordinates outside the [0, 1] range are clamped to the edge of the texture.
  • MipLODBias is set to 0, so no bias is applied to the level-of-detail computation for mipmapping.
  • MaxAnisotropy is set to 1; it only takes effect with the anisotropic filter modes, so anisotropic filtering is effectively disabled here.
  • ComparisonFunc is set to ComparisonFunction.Always; comparison only applies to the comparison filter modes, so no comparison is performed when sampling this texture.
  • BorderColor is set to a Color4 value of (0, 0, 0, 0); it would only be used with TextureAddressMode.Border, which we're not using.
  • MinLOD is set to 0 and MaxLOD to float.MaxValue, making the full range of mip levels available to the sampler.
C#
deviceContext.PSSetSamplers(0, new[] { samplerState });

And again, we need to set the device context to use the sampler state.

Blend State

A blend state controls how the output color of a pixel shader is blended with the existing color in the render target. It determines how the colors from multiple pixels, or multiple render targets, are combined to produce the final pixel color. We need one to achieve transparency effects by blending the color of a pixel with the background color based on its alpha value. Blend states can also be used to perform various color manipulations, such as tinting or fading effects, by modifying the blending factors and equations. This provides flexibility in altering the appearance of rendered objects.

In some scenarios, multiple render targets are used to render different elements of a scene into separate textures. Blend states are crucial for controlling how the colors from these multiple render targets are combined to produce the final output. Additionally, blend states are often utilized in the implementation of special effects like additive blending (for creating light sources or particle effects) or subtractive blending (for simulating fog or atmospheric effects).
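
For a taste of what that looks like in practice, an additive setup differs from the one we're about to build only in its blend factors. A hedged sketch, not used in this project:

C#
// A sketch of an additive configuration (not what we use below): the source
// color is scaled by its alpha and added on top of the render target,
// which is how glows and particle effects are often accumulated.
RenderTargetBlendDescription additive = new RenderTargetBlendDescription
{
	BlendEnable = true,
	SourceBlend = Blend.SourceAlpha,
	DestinationBlend = Blend.One,
	BlendOperation = BlendOperation.Add,
	SourceBlendAlpha = Blend.One,
	DestinationBlendAlpha = Blend.One,
	BlendOperationAlpha = BlendOperation.Add,
	RenderTargetWriteMask = ColorWriteEnable.All
};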

C#
BlendDescription blendDescription = new BlendDescription()
{
	AlphaToCoverageEnable = false,
	IndependentBlendEnable = false,
};

blendDescription.RenderTarget[0] = new RenderTargetBlendDescription
{
	BlendEnable = true,
	SourceBlend = Blend.One,
	DestinationBlend = Blend.InverseSourceAlpha,
	BlendOperation = BlendOperation.Add,
	SourceBlendAlpha = Blend.One,
	DestinationBlendAlpha = Blend.Zero,
	BlendOperationAlpha = BlendOperation.Add,
	RenderTargetWriteMask = ColorWriteEnable.All
};
  • AlphaToCoverageEnable determines whether alpha-to-coverage multisampling is enabled. In this case, it is set to false, indicating that it is disabled.
  • IndependentBlendEnable determines whether each render target in a multiple-render-target scenario has its own blend settings. Here, it is set to false, meaning that all render targets will have the same blend settings.
  • blendDescription.RenderTarget[0] sets the blend settings for the first render target in the RenderTarget array of the BlendDescription object.
  • BlendEnable enables or disables blending for the render target. Here, it is set to true, indicating that blending is enabled.
  • SourceBlend specifies the blending factor for the source color. In this case, it is set to Blend.One, which means that the source color will contribute fully to the blended result.
  • DestinationBlend specifies the blending factor for the destination color. It is set to Blend.InverseSourceAlpha, indicating that the destination color will be multiplied by the inverse of the source alpha.
  • BlendOperation specifies the mathematical operation used to combine the source and destination colors. Here, it is set to BlendOperation.Add, which means that the scaled source and destination colors will be added together.
  • SourceBlendAlpha specifies the blending factor for the source alpha. It is set to Blend.One, indicating that the source alpha will contribute fully to the blended result.
  • DestinationBlendAlpha specifies the blending factor for the destination alpha. Here, it is set to Blend.Zero, meaning that the destination alpha will not contribute to the blended result.
  • BlendOperationAlpha specifies the mathematical operation used to combine the source and destination alphas. It is set to BlendOperation.Add, indicating that the source and destination alphas will be added together.
  • RenderTargetWriteMask specifies which color channels will be written to the render target. Here, it is set to ColorWriteEnable.All, indicating that all color channels (red, green, blue, and alpha) will be written.
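
Putting those settings together, this is the classic premultiplied-alpha "over" operation. Purely as an illustration (the real thing happens in fixed-function hardware in the output-merger stage), the math works out like this:

C#
// Illustrative sketch of what the output-merger computes per pixel with the
// settings above. src is the pixel shader output, dest is the color already
// in the render target; X/Y/Z/W map to R/G/B/A.
static Vector4 BlendOver(Vector4 src, Vector4 dest)
{
	// Color: src.rgb * One + dest.rgb * InverseSourceAlpha
	float r = src.X + dest.X * (1.0f - src.W);
	float g = src.Y + dest.Y * (1.0f - src.W);
	float b = src.Z + dest.Z * (1.0f - src.W);
	// Alpha: src.a * One + dest.a * Zero
	float a = src.W;
	return new Vector4(r, g, b, a);
}
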
C#
ID3D11BlendState blendState = device.CreateBlendState(blendDescription);

float[] blendFactor = new float[] { 1.0f, 1.0f, 1.0f, 1.0f };
unsafe
{
	fixed (float* ptr = blendFactor)
	{
		deviceContext.OMSetBlendState(blendState, ptr, 0xffffffff);
	}
}
  • blendFactor supplies the constant blend factor used when a blend mode of Blend.BlendFactor or Blend.InverseBlendFactor is specified. Our configuration doesn't reference it, so the four components (red, green, blue, and alpha) set to 1.0f here are effectively placeholders.
  • fixed (float* ptr = blendFactor): The fixed statement pins the blendFactor array in memory so that it can be accessed via a pointer. This is required because this OMSetBlendState overload expects a pointer to the blend factor array.
  • deviceContext.OMSetBlendState(blendState, ptr, 0xffffffff) takes three parameters: the blendState object created earlier, the pointer to the blend factor array, and the sample mask (0xffffffff), a bitmask determining which samples get updated in multisampled render targets; all ones means all samples.
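
By the way, depending on your Vortice version there may be a friendlier OMSetBlendState overload that takes a Color4 for the blend factor, skipping the unsafe block entirely. I'm stating that as an assumption, so check what your ID3D11DeviceContext exposes:

C#
// Assumption: some Vortice builds expose an overload along these lines.
deviceContext.OMSetBlendState(blendState, new Color4(1.0f, 1.0f, 1.0f, 1.0f));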

Updating The Brush

Alright, let's tackle Update. First we need to make changes to the constant buffer structure.

C#
[StructLayout(LayoutKind.Sequential, Pack = 16)]
struct ConstantBufferData
{
	public Matrix4x4 WorldViewProjection;
	public Matrix4x4 World;
	public Vector4 BrushColor;
	public Vector2 ClickPosition;
	public Vector2 padding;
}

We're still going to use our WorldViewProjection and World matrices. But in addition, we'll be adding a BrushColor vector, coordinates for a mouse click (as a test), and some padding to round the structure up to a 16-byte boundary, as constant buffers require. So, to clarify, the padding isn't going to be used for anything; it's just there as filler.
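
If you want to sanity-check the layout: the two matrices are 64 bytes each, the Vector4 is 16, and the two Vector2 fields are 8 each, for a total of 160 bytes, a clean multiple of 16. If you like, a quick assertion catches alignment mistakes early:

C#
// 2 x Matrix4x4 (64 bytes each) + Vector4 (16) + 2 x Vector2 (8 each) = 160 bytes.
// Constant buffer sizes must be multiples of 16 bytes.
System.Diagnostics.Debug.Assert(
	Marshal.SizeOf<ConstantBufferData>() % 16 == 0,
	"ConstantBufferData must be 16-byte aligned.");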

Let's set something as our current brush color so we can pass it to the shader.

C#
private Color4 brushColor;

public void InitializeDirectX()
{
	brushColor = new Color4(1.0f, 0.0f, 0.0f, 1.0f);
	...
}

Then we'll need to set up a new version of the projection matrix. For that, we'll need world dimensions, which I'll just name desiredWorldWidth and desiredWorldHeight. The 10.0f units is just a magic number that I tested and found to work with the default size. We'll be changing this soon to match the canvas, and updating the brush to use a pre-defined size.

C#
private float desiredWorldWidth;
private float desiredWorldHeight;

public void InitializeDirectX()
{
	desiredWorldWidth = 10.0f;
	desiredWorldHeight = 10.0f * (float)SwapChainCanvas.Height / (float)SwapChainCanvas.Width;
	...
}

Here's a trick you can do: multiply the desired width by the inverse aspect ratio of the canvas (Height / Width) to maintain the aspect ratio in the final projection. For example, a 1920 x 1080 canvas would give 10.0f * 1080 / 1920 = 5.625f units of world height. Of course, our canvas width and height are the same number, so we're just multiplying by 1.0 here.

Next, let's replace the old projection matrix definition with this...

C#
float nearPlane = 0.1f;
float farPlane = 100.0f;

// The aspect ratio is already baked into desiredWorldHeight, so the
// orthographic projection takes the world dimensions directly.
projectionMatrix = Matrix4x4.CreateOrthographic(desiredWorldWidth, desiredWorldHeight, nearPlane, farPlane);

You'll notice we're using the CreateOrthographic method from Matrix4x4 to get rid of the pesky perspective we were dabbling with in the 3D mesh example.

Let's also change the camera to point to the canvas.

C#
Vector3 cameraPosition = new Vector3(0.0f, 0.0f, 1.0f);

And change the cull mode, so we can see what's going on.

C#
RasterizerDescription rasterizerStateDescription = new RasterizerDescription(CullMode.None, FillMode.Solid)
{
	...
};

Let's make the example at least a little bit interactive and include a PointerPressed event call for the SwapChainCanvas.

C#
private Windows.Foundation.Point lastClickPoint;

private void SwapChainCanvas_PointerPressed(object sender, PointerRoutedEventArgs e)
{
	lastClickPoint = e.GetCurrentPoint(SwapChainCanvas).Position;
}

There, we can take the mouse coordinates for the click, and save them to a variable. And finally, we're ready to Update.

C#
private void Update()
{
	Matrix4x4 worldViewProjectionMatrix = worldMatrix * (viewMatrix * projectionMatrix);

	ConstantBufferData data = new ConstantBufferData();
	data.BrushColor = brushColor;

	data.ClickPosition = ConvertMousePointTo3D(lastClickPoint);
	data.WorldViewProjection = worldViewProjectionMatrix;
	data.World = worldMatrix;
	deviceContext.UpdateSubresource(data, constantBuffer);
}

So, we're taking the position of the last brush stamp and passing it to a helper function that converts it to work with our world size and turns it into a Vector2 instead of the Point type we've been handling so far.

C#
private Vector2 ConvertMousePointTo3D(Point mousePoint)
{
	// Normalize the pixel coordinates to [0, 1], scale them to world units,
	// and shift so that (0, 0) lands in the center of the canvas.
	float worldX = (float)(mousePoint.X / SwapChainCanvas.Width * desiredWorldWidth - desiredWorldWidth / 2f);
	// Screen Y grows downward while world Y grows upward, hence the flip.
	float worldY = (float)(desiredWorldHeight / 2f - mousePoint.Y / SwapChainCanvas.Height * desiredWorldHeight);
	return new Vector2(worldX, worldY);
}

This just makes sure that the coordinates we're passing to the shaders with the constant buffer aren't going to be way outside of the visible canvas.
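
A quick worked example to verify the math, assuming a 1024 x 1024 canvas (just an example size) and our 10 x 10 world:

C#
// Center of the canvas maps to the world origin:
// worldX = (512 / 1024) * 10 - 5 = 0, worldY = 5 - (512 / 1024) * 10 = 0
Vector2 center = ConvertMousePointTo3D(new Point(512, 512));   // (0, 0)

// Top-left corner maps to the top-left of the world; note the Y flip:
Vector2 topLeft = ConvertMousePointTo3D(new Point(0, 0));      // (-5, 5)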

Speaking of the shaders...

Shaders

The vertex shader is fairly straightforward, with just the vertex position and texture coordinates coming in.

hlsl
cbuffer ConstantBuffer : register(b0)
{
	float4x4 WorldViewProjection;
	float4x4 World;
	float4 BrushColor;
	float2 ClickPosition;
}

struct VertexInput
{
	float3 position : POSITION;
	float2 texCoord : TEXCOORD;
};

struct VertexOutput
{
	float4 position : SV_POSITION;
	float3 world : POSITION0;
	float2 texCoord : TEXCOORD;
};

VertexOutput VS(VertexInput input)
{
	float4 position = float4(input.position, 1.0f);
	position.xy += ClickPosition;
	
	VertexOutput output;
	output.position = mul(WorldViewProjection, position);
	output.world = mul(World, position).xyz;
	output.texCoord = input.texCoord;
	return output;
}

We take the vertex position and add the click coordinates to it, which shifts the geometry to where the user clicked. After that, it's the matter-of-fact business of passing the transformed positions on to the pixel shader.

hlsl
cbuffer ConstantBuffer : register(b0)
{
	float4x4 WorldViewProjection;
	float4x4 World;
	float4 BrushColor;
	float2 ClickPosition;
}

Texture2D BrushTexture : register(t0);
SamplerState BrushSampler : register(s0);

struct PixelInput
{
	float4 position : SV_POSITION;
	float3 world : POSITION0;
	float2 texCoord : TEXCOORD;
};

float4 PS(PixelInput input) : SV_TARGET
{
	float alpha = 1.0f - BrushTexture.Sample(BrushSampler, input.texCoord).r;
	return float4(BrushColor.rgb * alpha, alpha);
}

In the pixel shader, in addition to the constant buffer, we grab the 2D texture and the sampler state that we bound in the SetRenderState method. With those in hand, we calculate an alpha value: the Sample function grabs the color at the given texture coordinates, and we take its red channel, though it doesn't matter which of the three color channels we use, since it's a grayscale image. Finally, the value is inverted by subtracting it from 1.0f, because we want white to be transparent, not black.

And, if you run the project now, you can click around in the panel, and the bitmap appears in the right location, in the right color. Creating brush strokes requires a bit more work. Let's get to it next time around.