Monday 21 June 2010

Work in progress

After posting the previous ticket, I started hacking my previous forward renderer in order to implement a first draft of the new rendering loop. I generally think that even when you have worked through the theory, you always end up running into a lot of small issues that arise while you implement your idea. That's why I wanted to quickly get my hands dirty and face the actual practical problems. That's how I learn :)

Here is a screenshot of what I have so far:



The scene features a character in bind pose and a single point light placed in front of its waist. The main view is still empty as the Material Pass is not done yet, but at the bottom you can see some debug output of GBuffers 0, 1 and 2, storing depth, normals and a light preview (i.e. diffuse + diffuse*specular^8) respectively.

What follows is a sort of log going through the various bugs/issues I had to solve. I used PIX extensively for debugging the shaders. It was very unstable at first because of bugs in my rendering code; switching to the reference rasterizer allowed me to fix those bugs, and in turn the stability of PIX improved significantly, to the point where it is now really reliable.

Default values in GBuffers:
GBuffer 0 stores the linear distance to the camera for each pixel. This distance will be used in the lighting calculations. In order to avoid any artifacts, pixels that are part of the void ideally need a distance equal to positive infinity. For that, and because this rendertarget is an R32F buffer, I clear it to the greatest finite 32-bit positive float value, which according to the IEEE standard has the bit pattern 0x7F7FFFFF (note that 0x7FFFFFFF, despite looking like the maximum, is actually a NaN).

GBuffer 1 stores the normal vector for each pixel, directly as float values. The obvious default value for this buffer is 0, so the normal will be initialized to a null vector for all the pixels that were not rendered during the Geometry Pass.

Reconstructing the position from the Depth value:
I struggled quite a bit to get this right, only to realize that I had a blatant bug elsewhere in one of the shaders... That's where PIX comes in handy! The main idea is to reconstruct the position using the depth and a ray vector that originates from the camera and passes through the current pixel. This ray can be deduced from the camera frustum corners. For that, I compute in the main app the vector that goes from the camera to the top-right corner of the camera frustum. As I want this vector in view space, its coordinates look like:

cFrustumRayInfo = Vector3( dx, dy, 1 ) * FarDistance;

Then, in the vertex shader, I use this vector (passed as a constant) together with the clip-space position of the current pixel (which is always in the [-1,1] range on both axes) to compute the ray that originates from the camera and passes through this pixel:

output.vRay = float3( dx * clipspace.x, dy * clipspace.y, FarDistance );

This gets output to the pixel shader and is thus interpolated per pixel. There, I sample the depth from GBuffer 0 (filled during the Geometry Pass) and can reconstruct the position by doing:

float4 vPosition = float4( depth * in.vRay, 1 );

I think this is one of the best ways to do it, because it works whether you render a fullscreen quad or just a portion of the screen. And depending on the type of light you are dealing with, you will surely want to do both, as rendering a fullscreen quad for a point light with a radius of 10 pixels is a bit of a waste, isn't it?


That's it, more stuff when I'll be able to work further on this :)

-m
