DirectX10 Tutorial 10: Shadow Mapping Part 1

I’ve had some downtime lately, and seeing as I wrote a basic shadow mapping demo, I figured I’d write a short tutorial on the theory and implementation of shadow mapping. Shadow mapping is one of those topics that tends to get explained in an overly complicated manner when in fact the concept is rather simple. It is expected that you understand the basics of lighting before attempting this tutorial; if you want to learn more about some basic lighting models, please read my lighting tutorial. The figure below shows a sample scene with a single light illuminating a cube.

How shadows are formed

With regards to lighting, a light source emits light rays and any surface hit by these light rays is illuminated. Shadows are simply the regions that are not directly illuminated by the light rays from the source due to an object blocking them. These blocking objects are referred to as occluders as they obstruct the light rays. In our case, we are going to discuss shadow mapping with regards to a single light source. The extension of shadow mapping to scenes with multiple lights is trivial and so is not discussed. The idea behind shadow mapping, while simple, can be a little complicated to explain, so I’m going to do it step by step. Please bear with me.

Shadow Map Generation

The first step in shadow mapping is to create a shadow map. The shadow map contains information about the occluders for a specific light, and is created by rendering the scene from the viewpoint of the light and storing the depths of the occluders. To render the scene from the viewpoint of the light, we first need to create a view matrix for the light. The direction of the light’s view is dependent on the direction of the emitted light (point lights are problematic for shadow mapping, so for now just assume light sources are either directional lights or spotlights). The second matrix necessary is a projection matrix, which represents the view volume and in this case represents the light volume emitted by the light. For a directional light this volume is box-shaped, so an orthographic projection is used, while for a spotlight a perspective projection is used, resulting in a frustum-shaped light volume.
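As an illustrative sketch of what these two matrices actually do, here is the same math in plain C++ rather than D3DX calls. All the helper names and the example values below are made up for illustration; a real app would use D3DXMatrixLookAtLH and D3DXMatrixOrthoLH:

```cpp
#include <cassert>
#include <cmath>

// Minimal vector helpers (illustrative only).
struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  cross(Vec3 a, Vec3 b){ return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static Vec3  norm(Vec3 v)         { float l = std::sqrt(dot(v, v)); return { v.x/l, v.y/l, v.z/l }; }

// View transform (left-handed, like D3DXMatrixLookAtLH): takes a world space
// point into the light's view space, with the light at 'eye' looking at 'at'.
static Vec3 toLightSpace(Vec3 p, Vec3 eye, Vec3 at, Vec3 up) {
    Vec3 zaxis = norm(sub(at, eye));      // light's forward axis
    Vec3 xaxis = norm(cross(up, zaxis));  // light's right axis
    Vec3 yaxis = cross(zaxis, xaxis);     // light's up axis
    Vec3 rel = sub(p, eye);
    return { dot(rel, xaxis), dot(rel, yaxis), dot(rel, zaxis) };
}

// Orthographic projection (like D3DXMatrixOrthoLH): maps a w x h x [zn,zf]
// box around the light's view axis to x,y in [-1,1] and z in [0,1].
static Vec3 orthoProject(Vec3 p, float w, float h, float zn, float zf) {
    return { 2.0f*p.x/w, 2.0f*p.y/h, (p.z - zn)/(zf - zn) };
}
```

The projected z value is exactly what ends up in the shadow map: points nearer the light's near plane map toward 0, points near the far plane toward 1.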

Once both these matrices are constructed, we can render the scene from the viewpoint of the light. The resulting image will contain the entire area illuminated by the light, i.e. all the surfaces directly illuminated by the light (the occluders). What we care about are the depth values of the occluding surfaces, i.e. the surfaces closest to the light. The depth buffer automatically stores this information for us, and so the shadow map is simply the resulting depth buffer of the scene rendered from the viewpoint of the light source.

Only the occluding faces are stored within the shadow map

Since we only need the depth values contained in the shadow map, there is no need to waste processing time on color output when rendering the scene from the viewpoint of the light. So during the shadow map generation stage, color writes are disabled (i.e. a void pixel shader and a null render target).

Take for example the sample scene shown in the figure below, where the scene is rendered from the viewpoint of the camera.

A sample scene drawn from the viewpoint of the camera

To generate the shadow map, the same scene is rendered from the viewpoint of the light and the depth buffer is stored as a texture; the resulting shadow map is shown below.

The shadow map (depth buffer) of the same scene rendered from the viewpoint of the light

Shadow Rendering

So why is the shadow map useful? Well, when we render the scene from the viewpoint of the camera, we can project all the camera clip space coordinates into light clip space coordinates. Once we’ve done that, we compare the depth value of each projected coordinate to the value stored in the shadow map. If the shadow map value is less than the projected value, then the point we projected lies behind (further away from the light than) the nearest occluder to the light source and so is in shadow.

If a point is in shadow, then the lighting calculations for that pixel are not performed and only the ambient light is returned. (NOTE: remember that ambient light is a rough simulation of all the secondary light being reflected off of lit surfaces and so is uniform across the scene, meaning that ambient light is applied to each and every object in the scene, even ones in shadow.)

Once the shadow map is generated, we render our scene as normal with one difference: the world space position of each vertex rendered is projected into two spaces, both the clip space of the camera and the clip space of the light. So now we have the clip space position of each vertex from both the viewpoint of the camera and the viewpoint of the light. These positions are stored in the output struct of the vertex shader and interpolated across the surface created by the vertices.

This means that for each fragment reaching the pixel shader, we now have the position of that fragment in light space. We now compare the depth of that fragment (the z value) with the depth stored at that fragment’s location (calculated from the light clip space position) in the shadow map. If a fragment’s depth is greater than that stored in the shadow map, then the fragment lies behind an occluder and so is in shadow. This means that we don’t need to perform the lighting calculation at that pixel and can simply return the effect of the ambient light on that pixel. There are a few more things that need to be discussed from an implementation standpoint, but overall this is the theory of shadow mapping in a nutshell.
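The comparison above can be sketched in plain C++ with a tiny hypothetical 2x2 "shadow map" (the helper name and the depth values are made up for illustration):

```cpp
#include <cassert>

// Illustrative only: the shadow map holds, per texel, the nearest depth the
// light sees there (1.0f = nothing was rendered, i.e. the far plane).
// A fragment is shadowed when something nearer the light wrote a smaller
// depth at the fragment's shadow-map texel than the fragment's own depth.
static bool inShadow(const float shadowMap[2][2], int u, int v, float fragmentDepth) {
    return shadowMap[v][u] < fragmentDepth;
}
```

This is exactly the test the pixel shader performs later, just without the coordinate transforms around it.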


The implementation of this technique is rather involved. The first step is to create a second texture to which we can render the shadow map. This is done in a similar manner to creating a depth buffer (refer to my depth testing tutorial), except that the bind flags and texture type need to be changed.

Since we need to access the shadow map during the standard rendering stage, we need to create a shader resource view and bind it as a texture. This means that the texture we create needs to allow binding as both a depth stencil and a shader resource, so the bind flags are modified to: D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE. Also, the depth stencil view requires a depth format (DXGI_FORMAT_D32_FLOAT for a 32-bit float) whereas the shader resource view requires a color format (DXGI_FORMAT_R32_FLOAT). To cover both cases we need to set the texture’s format to DXGI_FORMAT_R32_TYPELESS, allowing us to create both shader resource views and depth stencil views for it. The code to create the shadow map is listed below:

// width  = shadow map width
// height = shadow map height

//create shadow map texture desc
D3D10_TEXTURE2D_DESC texDesc;
texDesc.Width = width;
texDesc.Height = height;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R32_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D10_USAGE_DEFAULT;
texDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = 0;
texDesc.MiscFlags = 0;

// Create the depth stencil view desc
D3D10_DEPTH_STENCIL_VIEW_DESC descDSV;
descDSV.Format = DXGI_FORMAT_D32_FLOAT;  //depth view of the typeless texture
descDSV.ViewDimension = D3D10_DSV_DIMENSION_TEXTURE2D;
descDSV.Texture2D.MipSlice = 0;

//create shader resource view desc
D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc;
srvDesc.Format = DXGI_FORMAT_R32_FLOAT;  //color view of the typeless texture
srvDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = texDesc.MipLevels;
srvDesc.Texture2D.MostDetailedMip = 0;

//create texture and depth/resource views
if( FAILED( pD3DDevice->CreateTexture2D( &texDesc, NULL, &pShadowMap ) ) )  return false;
if( FAILED( pD3DDevice->CreateDepthStencilView( pShadowMap, &descDSV, &pShadowMapDepthView ) ) ) return false;
if( FAILED( pD3DDevice->CreateShaderResourceView( pShadowMap, &srvDesc, &pShadowMapSRView) ) ) return false;

NOTE: Your shadow map size will often differ from that of your render target; in that case you will need to create a new viewport with the dimensions of your shadow map and bind it when rendering to the shadow map. In my case, my shadow map and my render target are the same size, so I can use the same viewport for both.
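For reference, creating and binding such a viewport might look like the following sketch (shadowMapWidth and shadowMapHeight are assumed variables holding your shadow map dimensions):

```cpp
//hypothetical shadow-map viewport setup
D3D10_VIEWPORT shadowViewport;
shadowViewport.TopLeftX = 0;
shadowViewport.TopLeftY = 0;
shadowViewport.Width    = shadowMapWidth;
shadowViewport.Height   = shadowMapHeight;
shadowViewport.MinDepth = 0.0f;
shadowViewport.MaxDepth = 1.0f;

//bind before rendering into the shadow map, then rebind your
//standard viewport before rendering the final scene
pD3DDevice->RSSetViewports( 1, &shadowViewport );
```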

Once you have created your shadow map, you need to write the scene depth data to it. This is done by setting the render targets to null and the depth stencil to your shadow map, then rendering the scene using a pixel shader that returns void. Once the shadow map has been generated, you set the render target and depth stencil back to your standard ones and bind the shadow map as a texture. Then you render the scene as normal, after which you need to unbind the shadow map as a shader resource. The C++ code for setting the render targets and binding resources for both the shadow map generation and the final scene generation is shown below:

//Create shadow map

//set render targets
pD3DDevice->OMSetRenderTargets(0, 0, pShadowMapDepthView);
pD3DDevice->ClearDepthStencilView( pShadowMapDepthView, D3D10_CLEAR_DEPTH, 1.0f, 0 );


//Render final scene

//set render targets
pD3DDevice->OMSetRenderTargets(1, &pRenderTargetView, pDepthStencilView);
pD3DDevice->ClearRenderTargetView( pRenderTargetView, D3DXCOLOR(0.6f,0.6f,0.6f,0) );
pD3DDevice->ClearDepthStencilView( pDepthStencilView, D3D10_CLEAR_DEPTH, 1.0f, 0 );

//bind shadow map texture
pEffect->GetVariableByName("shadowMap")->AsShaderResource()->SetResource( pShadowMapSRView );


//unbind shadow map as SRV and call apply on scene rendering technique
pEffect->GetVariableByName("shadowMap")->AsShaderResource()->SetResource( 0 );
pShadowMapTechnique->GetPassByIndex(0)->Apply( 0 );

//swap buffers

Shadow Map Generation HLSL

As mentioned all that’s necessary to generate the shadow map is to render the scene from the viewpoint of the light and disable color writes. The lightViewProj matrix contains the concatenation of the light’s view and projection matrices. The following shader programs are used for the generation of the shadow map:

struct SHADOW_PS_INPUT
{
	float4 pos : SV_POSITION;
};

// Vertex Shader
SHADOW_PS_INPUT ShadowMapVS( float4 pos : POSITION )
{
	SHADOW_PS_INPUT output;
	output.pos = mul( pos, mul( world, lightViewProj ) );
	return output;
}

// Pixel Shader - void, since color writes are disabled
void ShadowMapPS( SHADOW_PS_INPUT input ) {}

HLSL for Shadow Rendering

To render the scene with shadowing, the vertex shader remains largely the same, except that the vertex world space coordinates are additionally projected into light clip space and then interpolated during fragment generation. To ensure that interpolation occurs during fragment generation, the float4 light clip space coordinates are stored using a texcoord semantic. The vertex shader to achieve this is listed below:

struct PS_INPUT
{
	float4 pos : SV_POSITION;
	float4 lpos : TEXCOORD0;	//vertex with regard to light view
	float4 wpos : TEXCOORD1;	//vertex world space position (for the lighting calculation)
	float3 normal : NORMAL;
};

// Vertex Shader
PS_INPUT VS( float4 pos : POSITION, float3 normal : NORMAL )
{
	PS_INPUT output;
	output.pos = mul( pos, mul( world, viewProj ) );
	output.wpos = mul( pos, world );
	output.normal = normal;

	//store worldspace projected to light clip space with
	//a texcoord semantic to be interpolated across the surface
	output.lpos = mul( pos, mul( world, lightViewProj ) );

	return output;
}

The pixel shader for shadow mapping gets a bit tricky, as there are a few steps that need to be performed that we didn’t discuss in detail above. The first step is to re-homogenize the clip space coordinates. I’m not going to discuss the intricacies of projection matrices or homogeneous coordinates here, but it is important to mention that the w value of a point must always be 1. During interpolation all the values (x,y,z,w) are interpolated, so the w value is no longer guaranteed to be 1. The actual explanation behind this is rather involved, so if you are interested just google it ;). Now to re-homogenize the light clip space position, we simply divide all the values (x,y,z,w) by w and boom, we’re back to homogeneous coordinates.
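As a plain C++ sketch of this divide (the struct name and the example values are made up; in the shader it is just a vector divide by w):

```cpp
#include <cassert>

struct Float4 { float x, y, z, w; };

// Perspective (homogeneous) divide: after interpolation w is no longer
// guaranteed to be 1, so divide through by w to get back to a proper point.
static Float4 rehomogenize(Float4 p) {
    return { p.x / p.w, p.y / p.w, p.z / p.w, 1.0f };
}
```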

The next step is a simple check to see whether the light clip space coordinates lie within the view of the light; if they don’t, then obviously that fragment is not lit by the light. Since clip space coordinates range over [-1, 1] for the x and y coords and [0, 1] for z, any coordinates that lie outside the cube created by those ranges are outside the light volume, and so the ambient value is returned.

Converting from clip space to texture space

We now need to sample the shadow map to get the depth value at that specific point, which means that we need to convert from clip space coordinates to texture space coordinates. We can do this because the light clip space coordinates map directly onto the shadow map’s texture space.

The figure above illustrates the conversion from clip space to texture space for the fragment’s light space coordinates. The math behind it is really simple: all you’re doing is converting from the range ( [-1,1], [-1,1] ) to ( [0,1], [0,1] ), remembering that the texture space y axis points downward. Once we have the texture coordinates, we sample the shadow map and compare the light clip space z value to the stored depth value. If the shadow map value is lower than the current fragment’s z value, then that fragment is in shadow. The final pixel shader is shown below:

float4 PS_STANDARD( PS_INPUT input ) : SV_Target
{
	//re-homogenize position after interpolation
	input.lpos.xyz /= input.lpos.w;

	//if position is not visible to the light - dont illuminate it
	//results in hard light frustum
	if( input.lpos.x < -1.0f || input.lpos.x > 1.0f ||
	    input.lpos.y < -1.0f || input.lpos.y > 1.0f ||
	    input.lpos.z < 0.0f  || input.lpos.z > 1.0f ) return ambient;

	//transform clip space coords to texture space coords (-1:1 to 0:1)
	input.lpos.x = input.lpos.x/2 + 0.5;
	input.lpos.y = input.lpos.y/-2 + 0.5;

	//sample shadow map - point sampler
	float shadowMapDepth = shadowMap.Sample(pointSampler, input.lpos.xy).r;

	//if clip space z value greater than shadow map value then pixel is in shadow
	if ( shadowMapDepth < input.lpos.z ) return ambient;

	//otherwise calculate illumination at fragment using the
	//world space position passed from the vertex shader
	float3 L = normalize( lightPos - input.wpos.xyz );
	float ndotl = dot( normalize(input.normal), L );
	return ambient + diffuse*ndotl;
}
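The clip space to texture space mapping used in the shader above can be sanity-checked on the CPU; this is an illustrative C++ sketch of the same two lines:

```cpp
#include <cassert>

// Map clip space xy in [-1,1] to texture space uv in [0,1],
// flipping y because texture v grows downward while clip y grows upward.
static void clipToTex(float cx, float cy, float& u, float& v) {
    u = cx / 2.0f + 0.5f;
    v = cy / -2.0f + 0.5f;
}
```

The corners behave as expected: clip (-1, 1) (top-left of the light's view) maps to texture (0, 0), and clip (1, -1) maps to texture (1, 1).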

The final result is shown in the figure below; the hard light volume is clearly visible, as are the aliased shadow edges resulting from the sampling of the shadow map. The technique above uses a point sampler (nearest texel) and so only samples the shadow map once. This results in pretty poor quality shadows, which can be improved through a variety of methods such as percentage closer filtering (PCF), covered in part 2.

Basic 1-tap shadow mapping

Part 2: Improving Shadow Mapping Quality

Source Code: Please refer to Part 2 of this tutorial


26 thoughts on “DirectX10 Tutorial 10: Shadow Mapping Part 1”

  1. Heh, I thought after reading up on shadow maps you might make a tutorial, but I didn’t expect it so soon!

    Your blog has been immensely helpful to me, I’ve looked at other tutorials, but I think I’m starting to understand the concepts at work rather than just copying code from samples. Thank you!

    I’m looking forward to percentage closer filtering, particularly because I’ve wanted to experiment with a variation of that called percentage closer soft shadows.

    I wish you the best of luck with finding a job in the industry, from where I’m sitting it certainly looks like you deserve it 🙂

  2. Thanks for the kind words! I’m really glad my posts have been able to help you. That’s the whole reason that I take the time to write them! I’m hoping to write the second part of the tutorial today or at the latest tomorrow (depending on how much free time I have today).

    I think percentage closer soft shadows and percentage closer filtering are the same thing. The end effect of PCF is blurring around the edges of the shadows resulting in “soft” shadows… 😉

    Haha, I hope I can get a job too 😛

  3. Actually, Percentage-closer soft shadows isn’t a filtering method, and actually uses PCF to filter shadows (you can use other filtering types, too). 🙂

    PCSS is, in short, a “blocker search” to search for blocking geometry in the depth map and use it to create varying penumbra as you’d see in real-life. There’s a number of limitations to PCSS but it’s rather easy to implement into existing shadow-map code. You basically use it to scale the blur radius for your PCF filtering. 😉

    Anyway, thanks for the really awesome tutorial. 😀 There aren’t many shadow-mapping tutorials online, so this is definitely something nice to have. Sucks that it was posted like 2 days after I struggled to get the C++ side of my own shadows working. xD

    1. Yup, just found the NVIDIA paper on PCSS; been doing a bit of reading on the topic and it seems that the number of shadow mapping techniques is insane. It’s almost as bad as all the various anti-aliasing techniques. haha.

  4. Oh also, on the note of moving from clip-space to texture-space for the shadow coordinates:

    //transform clip space coords to texture space coords (-1:1 to 0:1)
    input.lpos.x = input.lpos.x/2 + 0.5;
    input.lpos.y = input.lpos.y/-2 + 0.5;

    If it interests you, you could use the following code to do it on the CPU side instead of calculating it for each pixel in the shader. Saves you from doing it over and over for each object, too.

    D3DXMATRIX clip_to_tex = D3DXMATRIX( 0.5, 0, 0, 0,
    0, -0.5, 0, 0,
    0, 0, 1, 0,
    0.5, 0.5, 0, 1 );

    D3DXMATRIX lightViewProjTex;
    D3DXMatrixMultiply(&lightViewProjTex, &lightViewProj, &clip2tex);

    1. Oops I made a typo (sorry for so many posts!). The “clip2tex” in the last line should be “clip_to_tex”. Obvious mistake but something I guess I should point out. 🙂

    2. Thanks for the optimization, but remember this is a tutorial for people who want to understand how the technique works, so it’s often necessary to have slower but more obvious code. Only once someone completely understands a technique should they start optimizing.

      1. Heh yeah, agreed. I posted it for those looking to go the extra mile to move some of the workload off the GPU. It’s likely the first thing I’d do after getting everything working. 🙂

  5. Thank you for posting this great and basic tutorial! It was very useful. I am programming indy games using D3D10. Could you post something related to Cascaded Shadows Map?. And i am waiting for your post about migrating from D3D10 to D3D11.

  6. Hi! Congratulations, it’s a wonderful blog! I’m following your tutorial for shadow mapping, but I have some problems in order to use this effect into my project … can I have the code you used to create the result of the first part? Thx

  7. Hey, thanks for your tutorial!! I have a problem with the implementation:
    if i put texDesc.Format = DXGI_FORMAT_R32_TYPELESS; then the CreateDepthStencilView( pShadowMap, &descDSV, &pShadowMapDepthView ) crash,
    if i put texDesc.Format = DXGI_FORMAT_R32_FLOAT; then those who crash is the CreateTexture2D…

    The only way to get it to run is to set texDesc.Format = DXGI_FORMAT_R32_TYPELESS
    and srvDesc.Format = DXGI_FORMAT_R32_FLOAT;
    it could be done or it will cause an error in the render? thank’s

    1. It should be:

      texDesc.Format = DXGI_FORMAT_R32_TYPELESS;
      srvDesc.Format = DXGI_FORMAT_R32_FLOAT;
      descDSV.Format = DXGI_FORMAT_D32_FLOAT;

      anything else won’t work.

  8. First of all, I love this tutorial, very informative, and exactly what I’m looking for to learn how to implement shadows in my engine. However, I am getting a few errors.

    The first is in my shader. I’m getting “Invalid Bytecode: Instructions calculating derivatives across pixels, and using temp storage or indexed values for input coordinates, are not permitted within flow control that has a branch condition that could vary across p”. My pixel shader function is below.

    float4 ps_lighting(VS_OUTPUT IN) : SV_Target
    if (renderingSkel || renderingRBody)
    return float4(1, 1, 1, 1);
    //shadow mapping stuff /= IN.lpos.w;

    float4 ambient = ambientBright * materialAmbient;

    if( IN.lpos.x 1.0f || IN.lpos.y 1.0f || IN.lpos.z 1.0f )
    return ambient;

    IN.lpos.x = IN.lpos.x/2 + 0.5;
    IN.lpos.y = IN.lpos.y/-2 + 0.5;

    float shadowMapDepth = shadowMap.Sample(ShadowSampler, IN.lpos.xy).r;

    if (shadowMapDepth 0)
    finalColor = saturate(finalColor + specular);

    float4 color = texDiffuse.Sample(DiffuseSampler, IN.tex0);
    float4 sphCol = sphTexture.Sample(SphereSampler, IN.spTex);
    if (useTexture)
    finalColor *= color;
    if (useSphere)
    if (isSphereAdd)
    finalColor += sphCol;
    finalColor *= sphCol;
    float4 o = (finalColor * extraBright) * fadeCoef;
    if (useTexture)
    o.a = color.a * materialDiffuse.a;
    o.a = materialDiffuse.a;
    return o;

    The second is a runtime issue. When I create the depth stencil view, it fails every time to create it with “Parameter is Incorrect”. It does, however, create the shader resource view and the texture2d correctly.

    Thanks in advance!

