Saturday 19 June 2010

Light PrePass Renderer

Over the past few weeks, I've been experimenting with various ideas for the renderer of my pet engine. I set my mind on the Light PrePass, first described by Wolfgang Engel (http://diaryofagraphicsprogrammer.blogspot.com/2008/03/light-pre-pass-renderer.html), because that method looks really promising. I also had the opportunity to discuss it with a friend at work who is really knowledgeable about all this, and that finished convincing me :)

As I progress through the implementation of the renderer in my engine, I would like to keep a log of the issues and solutions I come across, mainly because there is a fair amount of documentation on the principle itself but not that much about the gritty details of the implementation - or at least, nothing that covers all my questions so far, maybe because I'm not that experienced with deferred rendering and the like.

What follows is a breakdown of the things that are already implemented, with some details about implementation choices and issues.

Render Passes:

The main idea is to have 3 (or 4) rendering passes:

  1. Optionally, a ZPrePass where objects are rendered to the ZBuffer only; with color output disabled, the hardware can run this pass at up to twice the speed.
  2. A Geometry Pass, where GBuffers are filled with information like the position and normal vector of the geometry for each pixel.
  3. A Light Pass, where the light information for each pixel is calculated using the GBuffers from the Geometry Pass.
  4. A Material Pass, where information from the two former passes plus per-object material parameters is combined to generate the final color value for each pixel (a sketch follows this list).
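
To give an idea of where this is heading, here is a minimal HLSL sketch of that final material pass, assuming the Light Pass has accumulated its results into a 'light buffer' texture. All the names here are placeholders on my part, not final code.

// Hypothetical material-pass pixel shader: combine the accumulated light
// buffer with per-object material parameters to produce the final color.
sampler2D LightBuffer;        // filled by the Light Pass (rgb = diffuse light,
                              // a = specular modulation factor)
float4 MaterialDiffuse;       // per-object material constants (placeholders)
float4 MaterialSpecular;

float4 MaterialPassPS( float2 screenUV : TEXCOORD0 ) : COLOR0
{
    // screenUV is assumed to be the screen-space position of this pixel,
    // computed by the vertex shader from the projected position.
    float4 light = tex2D( LightBuffer, screenUV );

    // Diffuse: accumulated light modulated by the material's albedo.
    float3 diffuse = light.rgb * MaterialDiffuse.rgb;

    // Specular: the stored scalar factor amplifies the diffuse light color,
    // tinted by the material's specular color (see the color trade-off
    // discussed in the Light Pass section below).
    float3 specular = light.a * light.rgb * MaterialSpecular.rgb;

    return float4( diffuse + specular, 1.0f );
}
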
GBuffers:

So the first thing to think about, as stated in much of the documentation, is how to organize the output data into GBuffers so that we use as little render target memory as possible without sacrificing flexibility and image quality.

Here is what I plan to use:

 Buffer        | Format        |     8 bits     |    8 bits     |    8 bits     |     8 bits       |
---------------+---------------+----------------+---------------+---------------+------------------+
 Depth/Stencil | D24S8         |                 Depth                          |    Stencil       |
 GBuffer 0     | R32F          |                    Linear Depth (View Space)                      |
 GBuffer 1     | G16R16F       |           VSNormal.X           |           VSNormal.Y             |
 GBuffer 2     | A8R8G8B8      |    Diffuse.R   |   Diffuse.G   |   Diffuse.B   | Specular Factor  |
 Back Buffer   | A8R8G8B8      |       R        |      G        |      B        |       A          |

Those buffers will be rendered as follows:

ZPrePass: Depth is output to the DepthStencil buffer. Nothing special here, apart from turning off the color output to allow the double-speed rendering of this pass. This is done by calling:
m_pDevice->SetPixelShader( NULL );                      // no pixel shader needed, depth only
m_pDevice->SetRenderState( D3DRS_COLORWRITEENABLE, 0 ); // disable all color writes
// restore later with D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
// D3DCOLORWRITEENABLE_BLUE | D3DCOLORWRITEENABLE_ALPHA (i.e. 0x0000000F)
Geometry Pass: GBuffers 0 and 1 will be filled during the Geometry Pass. As suggested in many papers, instead of storing the position, I will store the depth with full 32-bit precision and then reconstruct the position from the depth and the view/proj matrices. For the normal vector, I only store 2 of the 3 channels and reconstruct the third. I also know that many techniques for packing/unpacking normals exist and have been discussed, but I'll dig into that later on.
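
To make both tricks concrete, here is a minimal HLSL sketch. The interpolator names and the frustum-ray position reconstruction are assumptions on my part, not final code.

// Hypothetical geometry-pass pixel shader writing to two render targets.
struct GBufferOutput
{
    float4 Depth  : COLOR0;   // GBuffer 0 (R32F)
    float4 Normal : COLOR1;   // GBuffer 1 (G16R16F)
};

GBufferOutput GeometryPS( float3 vsPosition : TEXCOORD0,
                          float3 vsNormal   : TEXCOORD1 )
{
    GBufferOutput output;
    output.Depth  = float4( vsPosition.z, 0.0f, 0.0f, 0.0f );       // linear view-space depth
    output.Normal = float4( normalize( vsNormal ).xy, 0.0f, 0.0f ); // drop the z component
    return output;
}

// Reconstruction helpers for the later passes. 'viewRay' is assumed to be
// interpolated from the frustum corners by the vertex shader.
float3 ReconstructVSPosition( float linearDepth, float3 viewRay )
{
    // Scale the ray so its z equals 1, then stretch it out to the stored depth.
    return viewRay * ( linearDepth / viewRay.z );
}

float3 ReconstructVSNormal( float2 xy )
{
    // The sign of z is assumed: normals of visible surfaces point towards
    // the camera (negative z in a left-handed view space). This is known to
    // break at grazing angles - one reason to look at better packing later.
    return float3( xy, -sqrt( saturate( 1.0f - dot( xy, xy ) ) ) );
}
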

Light Pass: GBuffer 2 will receive the lighting information generated during the Light Pass. During this pass, the diffuse contribution of each light is simply accumulated. For the specular factor, as is commonly done, I chose to discard the color information and only store a modulation factor that will amplify the diffuse color. This can be inaccurate in some situations where the scene contains multiple lights with different colors and intensities, but I'm happy to make that sacrifice to save memory. One thing I am not too sure of right now is the format I should use for this buffer. If I output these values into an A8R8G8B8 buffer, I believe the numbers will be clamped to the [0, 1] range, preventing me from doing the lighting in HDR. I could use some trick and always normalize into a [0, 2] range a la Unreal to fake the HDR, but I'd really like a proper HDR [0, +inf) range. I'll have to dig into that a bit more.
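
As a rough sketch, a point light shader in this pass could look like the following HLSL; the names, the attenuation curve and the fixed specular exponent are all assumptions on my part.

// Hypothetical pixel shader for one point light in the Light Pass. Results
// are accumulated into GBuffer 2 with additive blending (D3DBLEND_ONE for
// both source and destination).
sampler2D DepthBuffer;     // GBuffer 0
sampler2D NormalBuffer;    // GBuffer 1

float3 LightVSPosition;    // light position in view space
float3 LightColor;
float  LightRadius;

float4 PointLightPS( float2 uv      : TEXCOORD0,
                     float3 viewRay : TEXCOORD1 ) : COLOR0
{
    // Rebuild the view-space position and normal from the GBuffers.
    float  depth    = tex2D( DepthBuffer, uv ).r;
    float3 position = viewRay * ( depth / viewRay.z );
    float2 nxy      = tex2D( NormalBuffer, uv ).rg;
    float3 normal   = float3( nxy, -sqrt( saturate( 1.0f - dot( nxy, nxy ) ) ) );

    float3 toLight = LightVSPosition - position;
    float  dist    = length( toLight );
    float3 L       = toLight / dist;

    float atten = saturate( 1.0f - dist / LightRadius );   // crude linear falloff
    float nDotL = saturate( dot( normal, L ) );

    // Blinn specular with a hard-coded exponent: per-material shininess is
    // not available in this pass, one of the compromises of the technique.
    float3 V    = normalize( -position );   // the eye sits at the view-space origin
    float3 H    = normalize( L + V );
    float  spec = pow( saturate( dot( normal, H ) ), 32.0f );

    // rgb accumulates diffuse light, alpha accumulates the specular factor.
    return float4( LightColor * nDotL * atten, spec * nDotL * atten );
}
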

Optionally, I could also generate a Velocity buffer during the Geometry Pass, in order to do a nice per-pixel motion blur as a post-process. I'll leave that for now and come back to it later.

Wrap up:

Right now, this is as far as I've gone. I need to do some groundwork in my engine before I can start writing code and shaders for all that, but I think I have enough info to start working on a draft implementation.

I still need to clear lots of things up; here's a list of questions I'll have to answer.

  • Which format for the Light Buffer if I want true HDR?
  • Depth in linear space instead of homogeneous space? How does the hardware do the 'divide by w', and can this be an issue?
  • Effect of texture filtering when using these buffers?
  • Better way of storing normals?
  • Gamma space vs Linear space?
  • Is there something better than a traditional forward renderer for translucent geometry?
  • How to incorporate shadows in all that?

That's all for today!

-m
