Light Pre Pass in XNA: Basic Implementation

Reposted from: http://mquandt.com/blog/2009/12/light-pre-pass-in-xna-basic-implementation/

NOTE: This article is now obsolete. An up-to-date sample and article can be found at http://mquandt.com/blog/2010/03/light-pre-pass-round-2/

In this part I will cover how to implement the basic form of the Light Pre Pass renderer, with support for point lights, and the basic Blinn-Phong shader, including Albedo texture support.

As this article is fairly advanced in nature, I have to make certain assumptions about my audience, so that I do not spend half my time explaining basics. Firstly, you should have an understanding of basic concepts such as Cameras, Fullscreen Quads (including how to render one) and rendering a mesh with custom effects.

This pretty much means that as long as you have done some 3D work before, you should be fine. It would be best if you also knew XNA, as I will be using it to write this implementation; however, as long as you can translate from the C# and get the basic idea, that should be enough.

As you can see from these requirements, this article is not aimed at beginners. If you are looking for tutorials on how to get started with XNA for 3D development, I would recommend visiting one of the many great community sites.

Those sites will help you get started with XNA, and once you are familiar and comfortable with the concepts behind 3D graphics, you can return here to learn an advanced renderer implementation.

My focus in this article is the implementation of the renderer; as a result, I will not cover the implementation of cameras or scene graphs.

Now that the housekeeping is out of the way, we can begin.

The Renderer in C#

Light Pre Pass (LPP), or Deferred Lighting, operates in 3 stages.

  1. Depth + Normals Rendering
  2. Light Rendering
  3. Materials Rendering

These 3 stages accumulate information into render targets, which are used by the next stage, until the Materials stage produces the final image. So the first thing we must do is set up at least the following Render Targets:

  • Depth (SurfaceFormat.Single)
  • Normals (SurfaceFormat.Bgra1010102)
  • Lights (SurfaceFormat.Color)
depth = new RenderTarget2D(gfx, width, height, 1, SurfaceFormat.Single, RenderTargetUsage.DiscardContents);
normals = new RenderTarget2D(gfx, width, height, 1, SurfaceFormat.Bgra1010102, RenderTargetUsage.DiscardContents);
light = new RenderTarget2D(gfx, width, height, 1, SurfaceFormat.Color, RenderTargetUsage.DiscardContents);
final = new RenderTarget2D(gfx, width, height, 1, SurfaceFormat.Color, RenderTargetUsage.DiscardContents);

We use Bgra1010102 to store the normals because we want maximum precision for the 3 channels we are using. The closest 32-bit format with 3 high-precision channels is 1010102, which gives 10 bits to each of the 3 channels we care about, compared with the 8 bits per channel of a standard A8R8G8B8 (or Color) surface format.

The Materials (final) pass can be rendered directly to the backbuffer or to a render target; this depends on your needs and is completely up to you. I have suggested SurfaceFormats above, but feel free to use your own; just note that the shaders I provide may not work correctly with your chosen format.
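
Before looking at each stage in detail, it helps to see how they fit together. Below is a minimal sketch of a Draw method that drives the three stages described above; the stage method names are my own placeholders, not part of the original sample.

// A minimal sketch of driving the three LPP stages in order.
// The stage methods are hypothetical placeholders; each stage is covered in detail below.
public void Draw(GraphicsDevice gfx)
{
    DrawDepthAndNormals(gfx);   // Stage 1: fill the depth + normals render targets
    DrawLights(gfx);            // Stage 2: accumulate lighting into the light buffer
    DrawMaterials(gfx);         // Stage 3: shade each object using the light buffer
}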

Depth + Normals

The first stage of the renderer requires you to render the Depth and Normal values for each pixel to the screen. You could optionally render the position directly, but many post-processing techniques use depth information, so we may as well render it now and re-use it later.

First we must set up the render targets on our device, easily done with two lines of code:

gfx.SetRenderTarget(0, depth);
gfx.SetRenderTarget(1, normals);

For those who have not worked with multiple render targets before, the number in the above code indicates the render target index, and allows you to un-set and resolve the render target later.

Next, the render targets must be cleared. Because we are using multiple render targets, a simple call to GraphicsDevice.Clear will not suffice; instead we render a fullscreen quad with a cheap shader that writes the clear colours out to both render targets.

struct VS_OUT
{
    float4 Position        : POSITION;
};
  
VS_OUT vs_main(float3 position : POSITION)
{
    VS_OUT output = (VS_OUT)0;
    output.Position = float4(position, 1);
  
    return output;
}
  
struct PS_OUT
{
    float4 Depth : COLOR0;
    float4 Normals : COLOR1;
};
  
PS_OUT ps_main()
{
    PS_OUT output = (PS_OUT)0;
  
    output.Depth = 1.0f;
  
    output.Normals = float4(0, 0, 0, 1);
  
    return output;
}
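
For completeness, here is a minimal sketch of drawing the fullscreen quad with the clear shader above. The effect name clearEffect is my own placeholder; any position-only quad covering clip space will do.

// Sketch: draw a fullscreen quad with the clear shader (XNA 3.1 APIs).
// "clearEffect" is a hypothetical Effect compiled from the shader above.
VertexPositionTexture[] quad =
{
    new VertexPositionTexture(new Vector3(-1,  1, 0), new Vector2(0, 0)),
    new VertexPositionTexture(new Vector3( 1,  1, 0), new Vector2(1, 0)),
    new VertexPositionTexture(new Vector3(-1, -1, 0), new Vector2(0, 1)),
    new VertexPositionTexture(new Vector3( 1, -1, 0), new Vector2(1, 1)),
};

gfx.VertexDeclaration = new VertexDeclaration(gfx, VertexPositionTexture.VertexElements);
gfx.RenderState.CullMode = CullMode.None;   // make the quad's winding order irrelevant

clearEffect.Begin();
foreach (EffectPass pass in clearEffect.CurrentTechnique.Passes)
{
    pass.Begin();
    // Two triangles covering the whole screen in clip space.
    gfx.DrawUserPrimitives(PrimitiveType.TriangleStrip, quad, 0, 2);
    pass.End();
}
clearEffect.End();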

Next you render the objects, using a special shader that writes the Depth and Normals to the two render targets. If you intend to implement Normal Mapping, or a similar technique, this is where you would calculate and combine the Normals. For the purposes of this article, only the basic per-vertex normals will be stored here.

One thing I had to do was ensure a couple of render states were set correctly, specifically DepthBufferEnable and DepthBufferWriteEnable. Ensure both of these are set to true before continuing.
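
In code, that is just two render-state assignments before drawing the geometry:

// Depth testing and depth writes must be on for the depth + normals pass.
gfx.RenderState.DepthBufferEnable = true;
gfx.RenderState.DepthBufferWriteEnable = true;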

The Depth and Normals shader is quite simple. First the object is transformed as it would normally be when rendering, and then the Z and W values from the transformed position are passed to the pixel shader, alongside the Normal.

float4x4 World;
float4x4 ViewProjection;

struct VS_IN
{
    float4 Position   : POSITION;
    float4 Normal     : NORMAL0;
};
  
struct VS_OUT
{
    float4 Position   : POSITION;
    float4 Depth      : TEXCOORD0;
    float4 Normal     : TEXCOORD1;
};
  
VS_OUT depthNorm_VS(VS_IN input)
{
    VS_OUT output = (VS_OUT)0;
  
    float4x4 wvp = mul(World, ViewProjection);
  
    output.Position = mul(input.Position, wvp);
  
    output.Depth.xy = output.Position.zw;
  
    // Transform the normal into world space (row-vector convention, matching the position transform above)
    output.Normal = float4(mul(input.Normal.xyz, (float3x3)World), 0);
  
    return output;
}

If you look at your render targets, you may see a mostly white image for the depth buffer; this is normal, as the differences in depth between most points on an object are minuscule and close to 1. Your normals buffer, however, should look something like this:

[Image: the normals buffer]

Inside the pixel shader, the Z value is divided by the W value to get the depth, and that is written to the first render target. Then the Normal is normalised and shifted from a range of [-1, 1] to [0, 1].

struct PS_OUT
{
    float4 Depth : COLOR0;
    float4 Normals : COLOR1;
};
  
PS_OUT depthNorm_PS(float4 depth : TEXCOORD0, float4 normal : TEXCOORD1)
{
    PS_OUT output = (PS_OUT)0;
  
    output.Depth = depth.x / depth.y;
  
    output.Normals.rgb = 0.5f * (normalize(normal) + 1.0f);
  
    // Set alpha for both Depth and Normals to 1 (for some reason it's required)
    output.Depth.a = 1.0f;
    output.Normals.a = 1.0f;
  
    return output;
}

Now that we have our Depth and Normal values stored in the render targets, we can resolve them and get their respective textures so that the lights can be rendered using this data. This is quite simple in XNA: just set the render targets on the graphics device to either another render target or null. In this case, we set RT0 to the light buffer and RT1 to null.

gfx.SetRenderTarget(0, light);
gfx.SetRenderTarget(1, null);
depthImage = depth.GetTexture();
normImage = normals.GetTexture();

Be sure to clear the light buffer to TransparentBlack, and then we can move on to rendering the lights.
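
Clearing the light buffer is a single call once it has been set as the active render target:

// The light buffer starts fully transparent black so that lights can be added into it.
gfx.Clear(Color.TransparentBlack);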

In this first tutorial, I will implement point lights only. Check back for future tutorials about implementing other types of lights, like Directional Lights, etc.

Rendering the light stage is a little bit more complicated than the Depth + Normals stage. This time around, a number of Render States must be set in the beginning, and even more for each light based on the position of the camera.

Render States

The following render states must be set when drawing the lights, to take advantage of alpha blending for blending multiple overlapping lights.

gfx.RenderState.AlphaBlendEnable = true;
gfx.RenderState.SeparateAlphaBlendEnabled = false;
gfx.RenderState.AlphaBlendOperation = BlendFunction.Add;
gfx.RenderState.SourceBlend = Blend.One;
gfx.RenderState.DestinationBlend = Blend.One;
gfx.RenderState.DepthBufferEnable = false;
gfx.RenderState.DepthBufferWriteEnable = false;

Here we are disabling the Z-culling feature of the graphics card so that overlapping lights can be drawn, as well as enabling Alpha Blending over all channels of the render target so that the process of combining overlapping lights will be handled by hardware automatically. We also ensure that no modifications to the destination or source values are made during the blending stage, and that Additive blending is used. (Remember that lighting equations add multiple lights together)

Now you run through each light and set the CullMode render state based on where the camera frustum is located. If the frustum is inside or overlaps the light bounding volume (in this case a sphere), the CullMode needs to be set to CullClockwiseFace. CullCounterClockwiseFace should be set if the frustum is completely outside the light bounding volume. Remember to also ensure that the CullMode is set to CullCounterClockwiseFace after all of the lights have been rendered.
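
As a sketch of that rule using XNA's bounding volumes: frustum and lightBounds below are my own placeholders, where lightBounds is a BoundingSphere centred on the light with its attenuation distance as the radius.

// Pick the cull mode per light, following the rule described above.
BoundingSphere lightBounds = new BoundingSphere(lightPosition, attenuation);

if (frustum.Contains(lightBounds) == ContainmentType.Disjoint)
{
    // The frustum is completely outside the light volume.
    gfx.RenderState.CullMode = CullMode.CullCounterClockwiseFace;
}
else
{
    // The frustum is inside, or overlaps, the light volume.
    gfx.RenderState.CullMode = CullMode.CullClockwiseFace;
}

// ... draw the light volume ...

// After all lights have been rendered, restore the default cull mode.
gfx.RenderState.CullMode = CullMode.CullCounterClockwiseFace;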

In the sample code, I use a Mesh to easily load and store the light volume, which for a Point Light is a sphere. A scaling matrix allows the attenuation radius to be changed, so be sure to update any matrices as needed.
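
The world matrix for the light volume can be built from that scale plus the light's position; here is a sketch using the _pos and _attenuation fields that appear in the draw method further down.

// Scale the unit-sphere light mesh to the attenuation radius and move it to the light's position.
world = Matrix.CreateScale(_attenuation) * Matrix.CreateTranslation(_pos);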

Some notes about the next code sample:

  • cmanager is my CameraManager; it is used here to set the ViewProjection and InverseViewProjection matrices, which are required to transform the Depth back into a position for lighting.
  • caller is the Renderer class, which coordinates rendering each stage, as well as setting up and resolving the appropriate buffers.
public void DrawLightDeferred(GraphicsDevice gfx, CameraManager cmanager, Renderer caller)
{
    shader.Begin();
  
    // Set Matrix params
    cmanager.ApplyCameraParameters(ref shader);
    shader.Parameters.GetParameterBySemantic("WORLD").SetValue(world);
  
    // Set Depth and Normals buffers
    shader.Parameters["Depth_Tex"].SetValue(caller.GetDepthImage());
    shader.Parameters["Normals_Tex"].SetValue(caller.GetNormalsImage());
  
    // Set lighting params
    shader.Parameters["LightPos"].SetValue(_pos);
    shader.Parameters["Attenuation"].SetValue(_attenuation);
    shader.Parameters["SpecPower"].SetValue(SpecularPower);
    shader.Parameters["LightColor"].SetValue(LightColor.ToVector4());
  
    for (int j = 0; j < lightMesh.Meshes.Count; j++)
    {
        gfx.Indices = lightMesh.Meshes[j].IndexBuffer;
  
        for (int k = 0; k < lightMesh.Meshes[j].MeshParts.Count; k++)
        {
            for (int i = 0; i < shader.CurrentTechnique.Passes.Count; i++)
            {
                EffectPass pass = shader.CurrentTechnique.Passes[i];
                pass.Begin();
  
                gfx.VertexDeclaration = lightMesh.Meshes[j].MeshParts[k].VertexDeclaration;
  
                gfx.Vertices[0].SetSource(lightMesh.Meshes[j].VertexBuffer,
                    lightMesh.Meshes[j].MeshParts[k].StreamOffset,
                    lightMesh.Meshes[j].MeshParts[k].VertexStride);
  
                // (primitiveType, baseVertex, minVertexIndex, numVertices, startIndex, primitiveCount)
                gfx.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                    lightMesh.Meshes[j].MeshParts[k].BaseVertex,
                    0,
                    lightMesh.Meshes[j].MeshParts[k].NumVertices,
                    lightMesh.Meshes[j].MeshParts[k].StartIndex,
                    lightMesh.Meshes[j].MeshParts[k].PrimitiveCount);
  
                pass.End();
            }
        }
    }
    shader.End();
}

Now I need to run through some helper methods I use in the upcoming point light shader. These methods handle transforming a position from Post Projection space, to Screen space, as well as calculating the half pixel offset required by DX9.

float2 postProjToScreen(float4 position)
{
    float2 screenPos = position.xy / position.w;
    return (0.5f * (float2(screenPos.x, -screenPos.y) + 1));
}
  
float2 halfPixel()
{
    return -(0.5f / float2(fViewportWidth, fViewportHeight));
}

These are simple enough, and more importantly, *just work*.

Now for the point light shader. Here the light volume is transformed as needed in a really simple vertex shader:

float4x4 World;
float4x4 ViewProjection;

struct VS_OUT
{
    float4 Position         : POSITION;
    float4 LightPosition    : TEXCOORD0;
};
  
VS_OUT vs_main(float4 inPos : POSITION)
{
    VS_OUT output = (VS_OUT)0;
  
    float4x4 wvp = mul(World, ViewProjection);
  
    output.Position = mul(inPos, wvp);
    output.LightPosition = output.Position;
  
    return output;
}

The following variables are also passed to the shader for lighting calculations:

float3 LightPos;
float Attenuation;
float SpecPower;
float4 LightColor;
float3 CamPos : VIEWPOSITION;
float3 EyeDepthRay;

The key code comes in the pixel shader. The first thing needed is to transform the position of the pixel from post projection space to screen space. This is handled by the helper method I mentioned earlier. Then the half pixel offset is deducted from the screen space position, so that the values read from the Depth and Normal buffers are correct.

// Transform from post-projection to texcoords
float2 screenPos = postProjToScreen(projPos);
// DX9 half pixel offset
float2 texCoord = screenPos - halfPixel();
  
float depth = tex2D(depthSampler, texCoord).r;

Next, read the depth from the Depth buffer. If the value is not less than 1, we simply write a value of 0 for this pixel, as there is no depth information at that point and nothing to light. If there is, the lighting can be calculated for that point.

// Reconstruct position from screen space + depth
float4 position;
position.x = texCoord.x * 2 - 1;
position.y = (1 - texCoord.y) * 2 - 1;
position.z = depth;
position.w = 1.0f;
position = mul(position, InvViewProjection);
position.xyz /= position.w;

For more information on how to reconstruct a position based on a depth value, read this. There are also alternative, and improved methods listed there, which can be used depending on your needs.

Next the normal is acquired from the normal buffer, and restored to the [-1, 1] range so that it can be correctly used in the lighting calculations.

// Restore Normal
float3 normal = tex2D(normSampler, texCoord).rgb;
normal = normalize(2.0f * normal - 1.0f);

Now the lighting can begin. There are two key elements that need to be calculated for our light buffer: N.L and Attenuation. N.L is the basic element in every lighting equation, and simply consists of the dot product between the Normal and the Light Direction.

Attenuation is calculated by determining the ratio of the distance to the light over the maximum attenuation distance; this is then flipped so that 0 is the furthest point from the light. Here I also pre-combine the attenuation and the N.L value. You can of course combine these later when writing out the buffer; ultimately it gives the same result.

// Attenuation Calcs
float3 lDir = LightPos - position.xyz;
float atten = saturate(1 - dot(lDir/Attenuation, lDir/Attenuation));
lDir = normalize(lDir);
  
// N.L
float nl = dot(normal, lDir) * atten;

Next we calculate the specular value. As we are using the Blinn-Phong lighting equation later on, the Half Vector is used instead of the reflection vector, which ends up being a cheaper calculation for us. (The saving is negligible on most modern systems, and the visual difference is imperceptible.)

For the purposes of this article, I will only include the code from the Blinn-Phong variant, however in the downloadable sample, I provide both methods that can be toggled with a boolean. (Change the technique to change the method)

Remember that this only affects the specular value, so do not worry that this will restrict you to the Blinn-Phong (or just Phong) lighting model.

// camDir is the direction from the surface point towards the camera (CamPos is declared above)
float3 camDir = normalize(CamPos - position.xyz);
float3 halfDir = normalize(lDir + camDir);
float spec = pow(saturate(dot(normal, halfDir)), SpecPower);

Finally we write out the light buffer value; this is where we combine the light colour (with the specular term in the alpha channel) with the calculated N.L and Attenuation values.

return float4(LightColor.r, LightColor.g, LightColor.b, spec) * nl;

You should get something that looks like this: (Note that due to transparency this may look odd; the essential part is the lights making up the shape of the model.)

[Image: the light buffer]

Now we are entering the home stretch. All that is left to render now is the materials for each object. This is simply a matter of rendering each object again, and using the Light buffer to shade the object. Here is also where the material-flexibility of LPP comes into play, as each object uses its own shader.

To prepare for this stage, simply resolve the light buffer by setting either the backbuffer (null) or a “Final Image” render target as RT0. Then you can get the light texture and provide it to the objects so they can use it when rendering.
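
A minimal sketch of that hand-over, continuing with the render targets created earlier; the Light_Tex parameter name and the lightImage/materialEffect variables are my own assumptions, so use whatever your material shaders expect.

// Resolve the light buffer and begin the material pass.
gfx.SetRenderTarget(0, final);          // or null to render straight to the backbuffer
lightImage = light.GetTexture();

// Restore normal opaque rendering states after the additive light pass.
gfx.RenderState.AlphaBlendEnable = false;
gfx.RenderState.DepthBufferEnable = true;
gfx.RenderState.DepthBufferWriteEnable = true;

gfx.Clear(Color.CornflowerBlue);

// Each object's material effect then samples the light buffer, for example:
// materialEffect.Parameters["Light_Tex"].SetValue(lightImage);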

This is the pixel shader:

float2 scrCoord = postProjToScreen(input.ScrCoord) - halfPixel();
  
float4 light = tex2D(lightSampler, scrCoord);
  
float3 texCol = tex2D(texSampler, input.TexCoord).rgb;
  
float3 lighting = saturate(AmbientLight + (light.rgb * texCol) + light.aaa);
  
return float4(lighting, 1);

Here I adjust by the half pixel offset and transform from post projection to screen space inside the vertex shader, so those calculations are as before, however I pass the corrected Texture Coordinate to the pixel shader.

As this material is a Blinn-Phong material, it is a rather simple equation. The “Sum of light colour multiplied by N.L and attenuation” is handled by the Alpha Blending and light shaders, so that simply needs to be multiplied by the texture (Albedo) colour, which is then added to the ambient light term and specular term to complete the lighting equation.

Once this is done, you have either a backbuffer or a render target filled with a lit scene.

[Image: the final lit scene]

There are many other materials which can be adapted to use the light buffer, and there is also a modification that can be done to the light buffer and final material shaders to allow for a material specular value, however I will leave those to future articles.

I hope this has been informative, and if you have any questions, please post them in the comments. Also be sure to check back for new tutorials covering different light types, materials, and other additions. I hope to get shadows implemented into the system, and also outline combining this with a forward renderer to allow for transparent objects and particles.

The screenshots in this post use 1000 point lights arranged in a 10x10x10 cube around the model.

posted on 2010-08-15 10:01 by 狂烂球 (category: Graphics Programming)

