1) My ray stepping algorithm is now linear + binary search: I step the ray in large increments initially, detect when the ray's z-value drops below the depth value, then move the ray backwards and decrease the step size. This is better than my previous pure linear stepping because I can detect intersections faster and refine the results in less time.
2) Back-face rendering. I now use two framebuffer objects, one for front faces and one for back faces. Although this is somewhat slower than before, it makes reflections more accurate at certain angles.
3) Refractions. These were actually not that hard to do. Right now, the only difference between reflection and refraction in the shader is calling GLSL's refract instead of reflect on the input ray. I may need to differentiate the two effects further to be more physically accurate. I do not render refractive objects to the framebuffer because they would occlude the pixels behind them.
4) Falloff. What I mean by this is the farther the ray must travel before reflecting an object, the more the reflected pixels fade. This lets me create objects that are cloudy or blurry, yet still reflective (not included in the demo below).
5) Lots of objects at once + physics (using Bullet)
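The linear + binary search stepping in (1) can be sketched CPU-side. This is a toy one-dimensional C++ version, not my actual shader: the scene depth along the ray and the ray's own depth are stand-in functions (in the real shader they come from the depth texture and the marched ray). It assumes depth grows away from the camera, so a hit is detected once the ray's depth reaches the scene depth; in a view space where -z points forward, the comparison flips.

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Toy 1D linear + binary search march. rayZ(t) is the ray's depth at
// parameter t; sceneDepth(t) is the stored depth along the ray's path.
// Coarse linear steps find a bracket; binary search refines it.
double marchRay(const std::function<double(double)>& rayZ,
                const std::function<double(double)>& sceneDepth,
                double tMax, double coarseStep, int refineSteps) {
    double tPrev = 0.0;
    for (double t = coarseStep; t <= tMax; t += coarseStep) {
        if (rayZ(t) >= sceneDepth(t)) {              // ray passed behind the surface
            double lo = tPrev, hi = t;               // bracket the crossing
            for (int i = 0; i < refineSteps; ++i) {  // binary refinement
                double mid = 0.5 * (lo + hi);
                if (rayZ(mid) >= sceneDepth(mid)) hi = mid; else lo = mid;
            }
            return 0.5 * (lo + hi);                  // refined hit parameter
        }
        tPrev = t;
    }
    return -1.0;                                     // no hit: reflect nothing
}
```

The coarse step controls speed, the refinement count controls accuracy, which is exactly the balance I still need to tune.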
What I need to do:
1) Improve visual quality. I've noticed that the reflected pixels get really jagged at around 45-degree angles to the surface, and I'm still trying to figure out why. I also need to tweak my ray step algorithm to find the right balance between speed and accuracy.
2) Fresnel reflection. The balance between reflection and refraction should adjust based on the view angle (assuming an object is both reflective and refractive).
3) I don't know if this is feasible for a couple of reasons, but I would like to try out a per-pixel linked list of fragments for extremely accurate reflections. This would get rid of the ugly shadows that appear when the reflective object covers the wall behind it. Unfortunately, the technique has high memory consumption. More details on this technique from Sean Lilley's blog: http://gamerendering.blogspot.com/
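For (2), the standard cheap way to weight reflection against refraction by view angle is Schlick's approximation of the Fresnel term. Here is a minimal C++ sketch under that assumption; the same one-liner drops straight into GLSL:

```cpp
#include <cassert>
#include <cmath>

// Schlick's approximation of the Fresnel reflectance: near-zero extra
// reflection head-on, rising toward 1 at grazing angles.
// cosTheta = dot(viewDir, normal); r0 is the reflectance at normal
// incidence, e.g. pow((n1 - n2) / (n1 + n2), 2) for the two media.
double fresnelSchlick(double cosTheta, double r0) {
    return r0 + (1.0 - r0) * std::pow(1.0 - cosTheta, 5.0);
}
```

The fragment shader would then do something like mix(refractedColor, reflectedColor, f) with this factor, instead of a fixed blend.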
Anyway, there is still a good amount of work to do, but I thought I would show a demo of my progress:
Note: The fps drops to around 40 when I'm really zoomed in, but otherwise it is 60+
Yesterday I got the framebuffers working and today I've been messing with the reflection shader. Right now the code is inefficient in many ways, but it is starting to resemble proper reflections. Anyway, here are some pictures I've taken throughout the process:
I'll be discussing the last three images. All of these have been running on my laptop's weak graphics card, so I expect a speedup on my desktop. More speed will also come from making the shader more efficient. I'm aware of at least one major slowdown at the moment: I'm advancing the ray in view space and converting to screen space every step, rather than converting the reflection direction to screen space once at the start and working entirely in screen space from then on. I tried that initially, but I ran into some problems, so I said screw it and did it the slower way. The jagged lines correspond to the ray step size: the smaller the steps, the more refined the result (but the slower the shader).
Also, you may notice in the final image that the reflections do not draw the back faces of the spheres. This is because the framebuffer can only store one color value per pixel: the front-most one. Since we are sampling the framebuffer for the reflected color, there is simply no data for back faces.
One interesting thing to point out is there is a cool shadowing effect on some of the walls. I'm not totally sure how this happens at the moment, but it is definitely cool.
Clearly there is a lot more work to be done in terms of visual quality and speed, but at least my results resemble proper reflections.
Here is my progress report a couple weeks into the project:
I've been doing a lot of reading on framebuffer objects (FBOs). I found a nice tutorial on shadow mapping that uses FBOs and explains how they are constructed, written, and accessed. Although shadow mapping is not what I am doing for this project, there are several technical similarities. You can read the tutorial here: http://ogldev.atspace.co.uk/www/tutorial23/tutorial23.html
Anyway, I now have a better sense of how I'll implement screen space reflections.
How to use FBOs:
1. Give each object a Material. This would include diffuse color, specular color, specular intensity, transparency, reflectivity, and refractivity.
2. Create an FBO that stores depth and color information.
3. The first render call will write to the FBO. This will use my regular material shader but with reflections turned off (set through a uniform buffer object). I might split the two render calls into two different shaders, since the lighting computations are redundant the second time.
4. Render a second time but use color texture data from the FBO to determine reflected pixel colors.
How to calculate reflections in a shader:
1. Reflect the view vector off of the fragment's surface normal.
2. March the reflected ray across the screen in pixel-length intervals until it leaves the window or collides with an object (explained in step 4).
3. Convert the ray's position to screen space by dividing the clip-space value by its w component, followed by scaling by 0.5 and shifting by 0.5 (to get screen space coordinates suitable for texture access).
4. If the sampled texture depth value falls between the old and the new ray depth values, there has been an intersection. Take the color at that position from the FBO's color texture and apply it to the original fragment.
5. Mix the reflected color with the existing color based on the object's reflectivity constant.
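Steps 1 and 3 above are pure vector math, so both can be sketched in a few lines of C++. The names here are illustrative rather than taken from my actual shader: reflectVec is the mirror formula that GLSL's built-in reflect() implements, and clipToTexCoord is the perspective divide plus remapping from step 3.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Step 1: mirror the incident view vector about the surface normal,
// the same formula as GLSL's reflect(): r = v - 2 * dot(v, n) * n.
// Assumes n is unit length and v points from the eye toward the surface.
Vec3 reflectVec(const Vec3& v, const Vec3& n) {
    double d = 2.0 * dot(v, n);
    return { v[0] - d*n[0], v[1] - d*n[1], v[2] - d*n[2] };
}

// Step 3: perspective divide, then remap [-1, 1] NDC into the [0, 1]
// range used for texture lookups. x, y, w are clip-space components.
std::array<double, 2> clipToTexCoord(double x, double y, double w) {
    return { (x / w) * 0.5 + 0.5, (y / w) * 0.5 + 0.5 };
}
```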
This is the high-level breakdown of how I will implement this. I'm sure there is something I'm missing or wrong about, but as of now it seems like a good approach, and it shouldn't take too long to get an initial version working.
Improvements in GPU performance have made it possible to create visual effects that were formerly reserved for offline renderers. Two such developments that interest me are reflections and refractions. Intuitively, these effects are not particularly complicated: reflection involves bouncing a ray off an object's surface normal, and refraction involves bending a ray based on the refractive indices of the two media. Unfortunately, these visual effects can be very difficult to compute efficiently because they require many intersection tests for multiple rays. As a result, there has been a great deal of research lately into developing real-time solutions. As I discuss such techniques, I will be referring to reflections only, because refractions are computed in much the same way.
One of the earliest and most common techniques for simulating reflections is to put a reflective object in the center of a cube map. A cube map is a six-sided texture that, for conceptual purposes, is infinitely large. The fragment shader simply takes the eye vector and reflects it off of the fragment's surface normal. Next, it finds the texture coordinate where the ray intersects the cube map and draws that color onto the fragment. This approach creates a mostly realistic visual effect, but it cannot reflect arbitrary objects in a dynamic scene. More info on reflection mapping here: http://en.wikipedia.org/wiki/Reflection_mapping
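As a side note, the cube-map lookup itself boils down to picking the face whose axis has the largest-magnitude component of the reflected direction; this is the selection a GLSL samplerCube performs in hardware. A small C++ sketch of that selection, with faces numbered in the usual OpenGL order:

```cpp
#include <cassert>
#include <cmath>

// Picks which cube-map face a lookup direction samples: the axis with
// the largest absolute component wins, and its sign picks the face.
// Face order follows OpenGL: 0:+X 1:-X 2:+Y 3:-Y 4:+Z 5:-Z.
int cubeFace(double x, double y, double z) {
    double ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az) return x >= 0.0 ? 0 : 1;
    if (ay >= az)             return y >= 0.0 ? 2 : 3;
    return z >= 0.0 ? 4 : 5;
}
```

The remaining two components, divided by the winning one, become the 2D texture coordinate on that face.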
Another interesting approach uses billboard impostors to simulate reflected geometry. This technique involves projecting an object onto a texture and intersecting rays with that texture during the reflection process, akin to what we do with the cube map. This approach has obvious speed limitations, especially for scenes with numerous objects. More info on billboard impostors:
My goal for this project is to simulate reflections and refractions between many different objects that move and deform. One approach with minimal dependence on scene complexity and detail is screen space reflections (SSR). Although there are a few ways to achieve this effect, the most understandable is to do two separate render passes. First, render the scene with no reflections. Second, use depth and color information from the previous pass to determine the reflected colors. Interestingly, the second pass is accomplished with ray-tracing techniques: we convert the view-space reflection vector to screen space and advance the ray incrementally. At each step, we compare the ray's screen-space depth with the existing depth from the previous render pass. If the depth of the reflected ray is less than the existing depth, then we take the color at that position and apply it to the original reflective fragment. One drawback to this technique is that it cannot reflect geometry that is not visible on the screen.
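Collapsed to one screen axis, the second pass's marching loop looks roughly like this C++ sketch, where depthBuffer stands in for the depth texture written by the first pass. It uses the convention that depth grows away from the camera, so the ray has hit something once its depth reaches the stored value; with a view-space -z-forward convention the comparison flips.

```cpp
#include <cassert>
#include <vector>

// Minimal 1D sketch of the SSR marching loop: depthBuffer holds the
// first pass's per-pixel depth, and the ray carries its own depth as it
// steps one pixel at a time (rayZStep is the depth change per pixel).
// Returns the pixel index where the ray passed behind stored geometry,
// or -1 if it left the screen without hitting anything.
int marchScreenSpace(const std::vector<double>& depthBuffer,
                     int startPx, double startZ, double rayZStep) {
    double z = startZ;
    for (int px = startPx; px < static_cast<int>(depthBuffer.size()); ++px) {
        if (z >= depthBuffer[px]) return px;   // ray is behind the scene: hit
        z += rayZStep;                         // advance one pixel
    }
    return -1;                                 // ray exited the screen: no reflection
}
```

On a hit, the shader samples the first pass's color texture at that pixel and applies it to the reflective fragment; on a miss, the fragment keeps its unreflected color (or falls back to something like an environment map).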
I'm excited to start work on this project because it's applicable to many different graphics programs I see myself working on in the future. As I learn more about SSR I will update this post to fix any inaccuracies.