Real-Time Lighting and Shadow Methods in OpenGL

 

 

 

 

Robert Bateman u9714944

 

 

 

Computer Visualisation And Animation

Year 3

 

 

 

 

Innovations Project


Contents

 

 

 

1.  Introduction

 

 

 

 

2.  The Pre-calculation of Lighting in OpenGL

 

 

 

2.1  Quick Simple Lighting

2.2  Increasing the Lighting Complexity

2.3  Overcoming OpenGL’s Eight Light Limit

2.4  Speed vs. Memory Requirements

 

 

3.  Realtime Shadow Calculation

 

3.1  Simple Shadow Projection Using Similar Triangles

3.2  Casting Shadows on a Flat Plane of Any Orientation Using Point Lights

3.3  Casting Soft Shadows

3.4  Producing Reflections in a Plane

3.5  Using the Stencil Buffer to Draw Shadow Interactions

 

 

4.   Conclusion

 

 

5.    References


1. Introduction

 

 

            The purpose of this project is to investigate a number of real-time graphical effects, primarily concerned with the difference between pre-calculated and dynamic effects. The first major area of research has been performing lighting calculations outside of the normal OpenGL renderer, to investigate whether a higher visual quality is possible. The second has been real-time shadows and related effects.

 

            My initial research looked into NURBS surfaces from the point of view of the animator, but this quickly became a study of real-time uses of those surfaces. I wanted to move away from passing as much data as possible straight to the graphics hardware, and instead look at whether bypassing the hardware lighting pipeline altogether would yield better results.

 

 

 

2. The Pre-calculation of Lighting in OpenGL

 

            One factor that has a noticeable effect on the speed of an application is the calculation of lighting equations. With OpenGL, every time a call is made to glVertex*() the current normal vector is used to calculate lighting for every light in the scene. This can have severe implications for performance if you want to use several lights in a scene. For a static object there is very little point in running these calculations every frame if the same results will be generated each time.

 

            OpenGL’s lighting equations are based on per-vertex calculation of ambient, diffuse and specular terms with regard to the ambient, diffuse, specular and emissive material properties.  For most simple applications this is quite adequate, but there are certain situations where it may not be, especially in performance-critical applications.

 

2.1 Quick Simple Lighting

 

            Imagine we have a game environment in which we want to simulate a single diffuse calculation from a global light source combined with a given global ambient lighting value. The environment itself remains static throughout, and we only wish to add an appearance of depth. Figure 1 illustrates an outdoor environment with textures applied and no lighting.

 

Figure 1 boona-taxi level with no lighting (models and textures by Michael Bonnington)

            We wish to simulate the effect of the sun’s illumination, so we can imagine that the sun is a light source at an infinite distance from the viewpoint. We can therefore treat this light as a directional light and represent it as a constant unit-length vector L.

 

            To simulate the lighting we can set the texturing function to modulate the texture colour with the underlying polygon colour:

glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );

 

Any fragment colour that we now specify with glColor*() will affect the colour of the texture. In order to produce a linear colour change between the lowest and highest light intensity we can assume that for any colour applied,

 

I = R = G = B

 

Where I is the light intensity at a given vertex and R, G and B are the red, green and blue parameters passed to glColor*() respectively. We therefore need to calculate the intensity of the light at every vertex normal of our polygonal object.

 

For each vertex of each face, we can perform a simple dot product between the light vector L and the normal at the vertex N (figure 2). This will give us a basic diffuse illumination term, with which we can calculate the colour.

 

I = (L.N)

 

This does suffer from the problem that when the light source is behind the polygon the dot product becomes negative and the colour is clamped to black, which is not what we want. To combat this we can add an extra term to denote the global ambient illumination.

 

Figure 2 simple lighting calculation

            Our calculation now becomes …

 

I = A + (L.N)

 

The simplicity of this equation gives us a large speed increase compared to enabling OpenGL lighting. In this situation further speed increases are possible because the scene is static and the light direction is constant, so we can pre-calculate the values and store them in our polygon data structure. Below is the definition of the triangle class; when the object is loaded, we store three RGB values in the colour array.

 

class Triangle : public plane4f
{
public:

    Triangle( void );

    uint   tinds[3];   // uv indices
    uint   ninds[3];   // normal indices
    uint   vinds[3];   // vertex indices

    float  colour[9];  // pre-calculated colours (RGB per vertex)
    bool   shadowEdgeMarkers[3];
};

 

            When we come to draw the object, all we have to do is loop through the polygons and specify a colour, texture co-ordinate and vertex for each vertex. The results are shown in Figure 3.
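The pre-calculation pass itself can be sketched as below. This is a minimal stand-alone version: the vector type and flat arrays are stand-ins for the project’s classes, and the diffuse term is clamped to zero before the ambient term is added so that back-facing vertices keep the ambient level:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Minimal stand-in for the project's vector class.
struct Vec3 { float x, y, z; };

static float dot( const Vec3 &a, const Vec3 &b )
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// I = A + max( L.N, 0 ), clamped to [0,1]. The same value is used
// for R, G and B, so a single intensity per vertex is enough.
float vertexIntensity( const Vec3 &L, const Vec3 &N, float ambient )
{
    float diffuse = std::max( dot( L, N ), 0.0f );
    return std::min( ambient + diffuse, 1.0f );
}

// Fill one triangle's 9-float colour array from its 3 vertex normals.
void precalcTriangleColours( const Vec3 &L, const Vec3 normals[3],
                             float ambient, float colour[9] )
{
    for ( int v = 0; v < 3; ++v )
    {
        float I = vertexIntensity( L, normals[v], ambient );
        colour[3*v+0] = I;  // R
        colour[3*v+1] = I;  // G
        colour[3*v+2] = I;  // B
    }
}
```

At draw time each stored triple is simply passed to glColor3fv() before the matching texture co-ordinate and vertex calls, with OpenGL lighting left disabled.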

 

Figure 3 same as Figure 1 but with pre-calculation of lights

 

 

2.2 Increasing the Lighting Complexity

 

            If we are able to pre-calculate the lighting within a scene, then we can calculate more than just a straightforward diffuse term, and start adding attenuation factors, material properties and so on.

 

            The code used to do the lighting calculations is organised into classes derived from a single base class:

 

class cLight : public cVector3f
{
public:

    cLight( void );

    void  enable ( void );
    void  disable( void );
    float getAttenuationFactor( float &d );

    virtual float setFragmentColour( cVector3f  pos,
                                     cVector3f  nml,
                                     float     *col )
    { return 0; }

    float  fConstantAttenuation;
    float  fLinearAttenuation;
    float  fQuadraticAttenuation;

    bool   bEnabled;
};

 

            The base class provides functions to enable or disable the light and to get the attenuation factor for a given distance d. It holds constant, linear and quadratic attenuation values and a boolean variable recording whether the light is enabled. The class is derived from a vector to make it easier to manipulate the light’s position and to make the lighting calculations more intuitive.

 

            For example,

 

            cVector3f displacement( 10, 0, 0 );
            cLight    light_one;
            light_one += displacement;

 

would create a light and translate it by 10 units in the x direction.

 

 A virtual function called ‘setFragmentColour’ is defined, which is overridden by every class derived from this one. The main reason for this is to allow more complex lighting classes the option of re-using previous classes to perform parts of the lighting calculation.

From this base class, three further classes were derived:

 

cAmbientLight      : performs ambient lighting calculations

cDirectionalLight  : performs diffuse lighting calculations

cPointLight        : holds specular lighting and attenuation calculations

 

Each class is derived from the one above, so that cDirectionalLight can apply ambient lighting, and cPointLight can apply ambient and diffuse lighting.

 

The overridden function setFragmentColour is shown below for each of the inherited classes. One thing to note is that material properties are not taken into account; the main reason for this was to prove that the lighting calculations function as they should. If you need the material properties, the work in amending the code to take account of them would be minimal.

 

            First the ambient light class. This sets the colour of the fragment to the ambient light colour, assuming that all ambient material colour components are 1.

 

float cAmbientLight::setFragmentColour( cVector3f  pos,
                                        cVector3f  nml,
                                        float     *col )
{
    // apply ambient colour
    col[0] = pAmbient[0];
    col[1] = pAmbient[1];
    col[2] = pAmbient[2];
    return 0;
}

 

 

The directional class performs the diffuse lighting calculation.

 

float cDirectionalLight::setFragmentColour( cVector3f  pos,
                                            cVector3f  nml,
                                            float     *col )
{
    // get ambient colour
    cAmbientLight::setFragmentColour( pos, nml, col );

    // calculate diffuse term
    float diffuse_term = pDirection.normalized() * nml.normalized();

    // final colour = ambient + diffuse_term * diffuse_colour
    col[0] += diffuse_term * pDiffuse[0];
    col[1] += diffuse_term * pDiffuse[1];
    col[2] += diffuse_term * pDiffuse[2];

    // clamp it to a value between 0 and 1
    clamp( col );
    return pos.length();
}

          The final class calculates the attenuation factor of the light as part of the lighting equation, and works out the direction vector between the light and the vertex co-ordinate.

 

float cPointLight::setFragmentColour( cVector3f  pos,
                                      cVector3f  nml,
                                      float     *col )
{
    // vector from the vertex to the light, and its length
    cVector3f lv  = (*this) - pos;
    float     d   = lv.length();
    float     att = getAttenuationFactor( d );

    cAmbientLight::setFragmentColour( pos, nml, col );

    float diffuse_term = lv.normalized() * nml.normalized();
    col[0] += ( diffuse_term * pDiffuse[0] ) * att;
    col[1] += ( diffuse_term * pDiffuse[1] ) * att;
    col[2] += ( diffuse_term * pDiffuse[2] ) * att;
    clamp( col );
    return d;
}

 

      The lighting is applied from a function within the objFile class that takes a pointer to a light and a boolean determining whether the colour values currently stored should be over-written or added to (if using more than one light).

 

void objFile::calc_lights( cLight *light , bool additive );

 

      This function loops through all the faces in the polygonal mesh and performs the lighting calculation defined by the particular light type passed to it. This allows you to combine any number of lights, of any type.
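The additive flag reduces to a small accumulate-or-overwrite step per colour channel. A standalone sketch of that logic (the function names here are illustrative, not the project’s):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Keep a colour channel in [0,1], as the project's clamp() does.
static float clampChannel( float c )
{
    return std::min( std::max( c, 0.0f ), 1.0f );
}

// One light's contribution either replaces the stored vertex colour
// (first light) or accumulates on top of previous lights (additive).
void applyContribution( float col[3], const float contrib[3], bool additive )
{
    for ( int i = 0; i < 3; ++i )
    {
        float base = additive ? col[i] : 0.0f;
        col[i] = clampChannel( base + contrib[i] );
    }
}
```

In use, the first light in the scene would be applied with the flag set to false and every subsequent light with it set to true, in the spirit of obj.calc_lights( &sun, false ); obj.calc_lights( &lamp, true );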

 

 

 

            The one thing that has not been discussed so far is the calculation of the specular lighting component. The only class defined to perform it is the cPointLight class, and the calculation has been separated from the diffuse and ambient calculations. There are a number of reasons why you may want to separate the lighting calculations.

 

  1. Specular lighting is dependent on the viewpoint, which is likely to change every frame.
  2. In the real world, specular lighting appears as a gloss on top of any underlying surface material. Figure 4 shows two teapots. The one on the right is rendered with normal OpenGL lighting, where the texture colour is modulated with the specular colour. The one on the left uses the OpenGL extension GL_EXT_separate_specular_color, which applies the specular highlight after texturing has been performed and enhances the overall look of the object.

 

Figure 4 separate specular pass compared with a single pass

 

            The specular colour component is calculated using the Phong reflection model.

 

Is = Ii * ( R.V )^n

 

Where Is is the specular light intensity, Ii is the intensity of the incident light, R is the reflected light vector, V is the viewing vector and n is the shininess exponent.

 

            To calculate this value you have to keep track of the camera’s position at all times in order to form the viewing vector.
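A sketch of the specular term, assuming unit-length vectors, with L the direction from the surface point to the light and V the direction to the camera (a minimal stand-in for the project’s classes):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot( const Vec3 &a, const Vec3 &b )
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// R = 2(N.L)N - L, assuming N and L are unit length.
Vec3 reflect( const Vec3 &L, const Vec3 &N )
{
    float s = 2.0f * dot( N, L );
    return { s*N.x - L.x, s*N.y - L.y, s*N.z - L.z };
}

// Is = Ii * (R.V)^n, with the term cut off when R.V is negative.
float specularIntensity( const Vec3 &L, const Vec3 &N,
                         const Vec3 &V, float Ii, float n )
{
    float rv = dot( reflect( L, N ), V );
    return ( rv > 0.0f ) ? Ii * std::pow( rv, n ) : 0.0f;
}
```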

 

2.3 Overcoming OpenGL’s Eight Light Limit

 

          Using this pre-calculation method of working out the lighting in a static scene, we can in theory use as many lights as we wish without any degradation of performance. It’s easy to imagine that for a large computer-game level we may want a significant number of lights to illuminate the entire area. With standard OpenGL lighting we would need some method of determining the closest lights, enabling and disabling them to ensure no more than eight are active at once.

 

            The largest problem with this is that a computer game is likely to have many other aspects that require significant processing, such as AI and collision detection. It wouldn’t be practical to use that number of lights if the game then slows to an unacceptable frame rate. Pre-calculation for scenery lighting works extremely well, freeing processing time for dynamic objects. The trade-off for the extra speed is extra memory and extra time spent pre-calculating the data.

Figure 5 scene lit with 8 pre-calculated lights

 

 

          The next step for this code base would be in the direction of light mapping and radiosity rendering. The basic premise behind light mapping is to remove the pre-calculation from the program entirely and create a separate rendering program that spends as long as necessary rendering the scenery. Rather than calculating vertex colours as demonstrated here, that approach calculates colours per pixel. Rendering a true per-pixel image is extremely slow, and until the recent appearance of hardware T&L graphics cards was impossible in real time. With light mapping, the effect of per-pixel lighting is faked with pre-calculated texture maps that are blended with the surface texture to achieve a similar effect.

 

2.4 Speed vs. Memory Requirements

 

          To compare the pre-calculation of lights on polygons with something more dynamic, water_pool is a small demonstration that loads a NURBS surface from a text file and uses the GLU NURBS interface to tessellate the geometry in real time. It deforms the surface with a time-dependent sine curve to create ripples, and sets OpenGL up to calculate the texturing co-ordinates using sphere mapping. GL_AUTO_NORMAL is enabled, as is OpenGL lighting. All calculation for the surface is therefore done in the graphics hardware, and the essential data consists of 12 knots in the U direction, 12 in the V direction, and 64 CVs, each consisting of 3 floats for the x, y and z co-ordinates. In total we only require 216 floats to hold the surface; if a float requires 4 bytes, that is 864 bytes of data (excluding textures).

 

Figure 6 water pool executable

 

          If we then consider polygons with the pre-calculation method, each triangle would (at a minimum) require 3 vertices, 3 colours, 3 normals and 3 texturing co-ordinates. The texturing co-ordinates require 2 floats each and the rest 3 each, which means 33 floating-point values per triangle, or 132 bytes. After seven triangles we would have used more memory than the entire NURBS surface holds. And because the tessellation level of the surface can be set at run time, the complexity achieved can be far greater.
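The figures above can be checked with a few lines of arithmetic:

```cpp
#include <cassert>

// Worked check of the memory comparison between the NURBS surface
// and pre-lit triangles. Returns how many whole triangles fit in
// the surface's memory footprint.
int triangleBudget()
{
    const int floatBytes = 4;

    // NURBS surface: 12 U knots + 12 V knots + 64 CVs of 3 floats.
    int nurbsFloats = 12 + 12 + 64 * 3;          // 216 floats
    int nurbsBytes  = nurbsFloats * floatBytes;  // 864 bytes

    // One pre-lit triangle: vertices, colours and normals at 3 floats
    // each, plus 3 texture co-ordinates at 2 floats each.
    int triFloats = 3*3 + 3*3 + 3*3 + 3*2;       // 33 floats
    int triBytes  = triFloats * floatBytes;      // 132 bytes

    return nurbsBytes / triBytes;                // the 7th triangle exceeds it
}
```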

 

Figure 7 nurbs surface executable

 

3. Realtime Shadow Calculation

 

          After experimenting with lighting pre-calculation, it is sensible to look at dynamic lighting effects. The main focus of this section is real-time shadow calculation: how it can be achieved, what costs degrade performance, and whether the visual quality is worth that degradation.

 

3.1 Simple Shadow Projection Using Similar Triangles

 

            Consider a situation where we wish a single object to cast a shadow onto a ground plane at y = 0. We can simply transform the vertex co-ordinates of the object along the vector that defines a directional light until they hit the surface of the plane, whose normal is ( 0, 1, 0 ) and whose d parameter is zero. The shadow can then be represented by redrawing all the polygons in the mesh with the deformed vertex positions.

 

            This technique was first reported by Blinn (1988), who used a projection matrix to deform all vertex co-ordinates to the ground plane. Using the same idea, but a slightly different proof, the whole system can be represented with the use of similar triangles (Figure 8). If the global directional light vector Lv has components ( lx, ly, lz ) and is known to be of unit length, then:

 

(sx – x)/lx = (sy – y)/ly = (sz – z)/lz

 

            We know that the final y position of the shadow vertex must be zero, so the common ratio of the similar triangles is ( sy - y ) / ly = -y / ly, which gives the shadow position:

 

 

Figure 8 Similar triangle calculation

sx = x - ( y / ly ) * lx

sy = 0

sz = z - ( y / ly ) * lz

 

            Knowing the deformed vertex co-ordinates, we can draw the polygons again to form the shadow. This does however require that we cull front faces rather than back-facing ones. Figure 9 shows a scene illuminated by a single directional light, with shadows cast on the floor plane; you can also see the transformations undergone by the polygonal scenery.
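A minimal sketch of the projection, assuming a simple vector type and a unit light vector with a non-zero y component:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Project a vertex along the directional light vector lv until it
// reaches the ground plane y = 0, using the similar-triangle ratio
// t = -y / ly, so s = v + t * lv.
Vec3 projectToGround( const Vec3 &v, const Vec3 &lv )
{
    float t = -v.y / lv.y;
    return { v.x + t * lv.x, 0.0f, v.z + t * lv.z };
}
```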

 

Figure 9 boona taxi with simple shadows

 

            It is possible to use this method to cast shadows onto separate objects. This is more of a visual trick than anything else, but it does give a fairly impressive illusion.

 

            If we add to this scene an extra object that travels around, it is extremely difficult to cast shadows cleanly onto that object. We can, however, calculate whether a particular vertex lies within the shadows, and reduce the diffuse and specular material properties if it does. Figure 10 shows a car that is half in shadow.

Figure 10 left hand side has a lower diffuse material value

The basic steps are :

 

  1. calculate the shadow positions of the scenery and draw them (pre-calculated in this example).
  2. deform the bounding box of the car onto the same shadow plane.
  3. test each corner to see if it falls in the shadow region of the scenery.
  4. if it doesn’t, calculate all shadowed vertex positions of the car and draw the shadow, followed by the car.
  5. if it does, test each vertex as it is transformed to the ground plane; if it falls in the shadow region, set a flag to denote that it is in shadow.
  6. draw the car; for each vertex flagged as in shadow, set a lower diffuse material colour and no specular, otherwise draw normally.

 

The end result is that vertices in shadow will be slightly darker than those outside the shadowed areas, and won’t have any specular highlights.  One way to improve this algorithm would be to calculate ‘shadow bounding boxes’ that hold the maximum and minimum x and z values of the separate shadows generated by objects in the scene. You could then test against these boxes before narrowing the search to see whether the car’s bounding box lies within individual polygon shadows.
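Such a rejection test is just a 2D overlap check in the xz plane. A sketch, with an assumed box layout:

```cpp
#include <cassert>

// Axis-aligned extents of a projected shadow on the ground plane
// (hypothetical layout; the project stores these per shadow caster).
struct ShadowBox2D { float minX, maxX, minZ, maxZ; };

// True if the two boxes overlap. Only overlapping boxes need the
// expensive per-polygon point-in-shadow tests.
bool overlaps( const ShadowBox2D &a, const ShadowBox2D &b )
{
    return a.minX <= b.maxX && b.minX <= a.maxX &&
           a.minZ <= b.maxZ && b.minZ <= a.maxZ;
}
```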

 

            For this example it works fairly well: the environment is constructed from relatively few polygons, and the car travels fairly quickly through the shadowed regions. It does suffer a number of drawbacks. The primary one is that you aren’t really casting a shadow at all, merely altering colours at the vertices to give the illusion that the car is in shadow; at slower transitions through the shadowed regions this becomes very apparent.

 

3.2 Casting Shadows on a Flat Plane of Any Orientation Using Point Lights

 

Figure 11 hardShadow executable

            To cast shadows onto a plane of any orientation you first need the plane equation of that plane, the world-space co-ordinates of the object that is to cast the shadows, and the position of the light source.

 

From this we can calculate the light vector lv from the light to the vertex that we wish to deform. We then need to calculate the intersection of this line with the plane. The distance from the vertex along the light vector to where the intersection occurs is given by:

Intersect = -( ( N * V ) + d ) / ( N * lv )

 

where N is the normal to the plane, V is the vertex co-ordinate, and N * V and N * lv denote dot products.

Once we have this information we can calculate the actual co-ordinate of this intersection using the equation :

 

Intersect Coord = vertex  +  lv * Intersect

 

Figure 12 shadow calculation off a plane

            We can now draw the polygons again using the deformed vertices in order to create the shadow.
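A self-contained sketch of this projection, assuming a simple vector type and a plane stored as N and d (so that N . P + d = 0 on the plane):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot( const Vec3 &a, const Vec3 &b )
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Project a vertex away from a point light onto the plane N.P + d = 0:
//   lv = vertex - light
//   t  = -( N.V + d ) / ( N.lv )
//   P' = V + lv * t
Vec3 projectToPlane( const Vec3 &v, const Vec3 &light,
                     const Vec3 &N, float d )
{
    Vec3  lv = { v.x - light.x, v.y - light.y, v.z - light.z };
    float t  = -( dot( N, v ) + d ) / dot( N, lv );
    return { v.x + lv.x * t, v.y + lv.y * t, v.z + lv.z * t };
}
```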

 

3.3 Casting Soft Shadows

 

            The generation of soft shadows needs only a slight tweak to the previous method. Rather than dealing with a point light source, we can assume that the light has an area and cast shadows from a number of sample points within a defined disc, blending the results. The softness of the shadows is altered by altering the area of the light source; when the area is zero, a hard shadow is produced (Figure 13).

Figure 13 soft shadow generation – softShadow executable
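A sketch of how the light samples might be generated (the names and the disc orientation are assumptions; any well-distributed set of points over the light’s area will do):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// The i-th of n light sample positions, evenly spaced on a disc of
// the given radius around the light centre (disc in the xz plane
// for simplicity). radius = 0 collapses every sample onto the
// centre, reproducing a hard shadow.
Vec3 lightSample( const Vec3 &centre, float radius, int i, int n )
{
    float angle = 2.0f * 3.14159265f * float( i ) / float( n );
    return { centre.x + radius * std::cos( angle ),
             centre.y,
             centre.z + radius * std::sin( angle ) };
}
```

Each sample is then used as the point light in the plane-projection step, and the resulting shadow polygons are drawn with low alpha and blended; more samples give a smoother penumbra at a linear cost in fill rate.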

3.4 Producing Reflections in a Plane

 

            Including details of real-time reflections may seem a bit odd in a section on shadows, but essentially the same mechanism is used to create both effects. Instead of transforming the vertices onto the plane, we project them twice as far along the same direction:

 

Reflection Position = vertex  +  2*lv * Intersect

 

            This essentially reflects the object through the plane. To prevent the reflection being visible outside the bounds of the plane, the stencil buffer can be used to limit drawing of the reflection to the area covered by the plane (Figure 14).
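The reflection transform can be checked numerically. This sketch assumes the projection direction is the unit plane normal (an assumption: with lv equal to N, the Intersect term reduces to the signed distance to the plane and the result is a true mirror image):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot( const Vec3 &a, const Vec3 &b )
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Reflect a vertex through the plane N.P + d = 0 by moving it twice
// the signed distance to the plane along the unit normal:
//   t  = -( N.V + d )      ( the Intersect term with lv = N )
//   P' = V + 2 * N * t
Vec3 reflectThroughPlane( const Vec3 &v, const Vec3 &N, float d )
{
    float t = -( dot( N, v ) + d );
    return { v.x + 2.0f * N.x * t,
             v.y + 2.0f * N.y * t,
             v.z + 2.0f * N.z * t };
}
```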

 

Figure 14 reflections in a plane

 

 

3.5 Using the Stencil Buffer to Draw Shadow Interactions

 

            Advancing from the idea of casting shadows onto a plane, the next step is to look into generating shadow volumes in order to create interactions between shadowed objects. I shall explain what I was trying to do, followed by what I ended up doing.

 

            The steps involved in producing the shadow volumes revolve around determining the silhouette edges of the object. These can be found by working through a list of edges and testing whether the polygon on one side is lit while the one on the other side is not. If so, we have found a silhouette edge and a flag is set. When all the edges of all the objects have been parsed, we proceed to the next step.

 

The stencil buffer is cleared to a reference value. That initial value must be decreased if the camera lies inside one of the shadow volumes, because this would otherwise offset our count.

 

            The depth buffer is enabled and all of the objects are drawn with only ambient lighting affecting them. This gives us information about how far from the camera the objects are, which will be used in the next pass.

 

            A depth test of GL_LESS is then used, and writing to the depth and colour buffers is disabled. We then enable the stencil test so that we can draw the polygons that define each shadow volume. The shadow volume is formed by extending the silhouette edges to a distance outside the viewing frustum; care is needed so that these polygons are created facing in the correct direction. These are then drawn.

 

 This is done in two stages.

 

  1. culling of front faces is enabled so that we can draw the polygons of the shadow volume that face away from the viewpoint. Whenever one is drawn, the value in the stencil buffer is decreased.
  2. culling of back faces is enabled so that we can draw the polygons of the shadow volume that face towards the viewpoint. Whenever one is drawn, the value in the stencil buffer is increased.

 

The idea here is that if a polygon lies in the middle of a shadow volume, it will mask out the rear-facing shadow polygons because the depth buffer is enabled. The value in the stencil buffer at the shadowed pixels will therefore be one greater than the original value.

 

The final stage is to redraw the objects with lighting fully enabled, but only in areas of the screen where the stencil buffer still holds the original value.
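The counting logic of the two passes can be simulated without a GL context. Here each shadow-volume face is reduced to the depth and facing it would have at one pixel (a simplified, hypothetical model):

```cpp
#include <cassert>
#include <vector>

// One shadow-volume face as seen through a single pixel.
struct VolumeFace { float depth; bool frontFacing; };

// Reproduce the two stencil passes for one fragment: with GL_LESS,
// only volume faces nearer than the fragment pass the depth test.
// Front-facing faces increment the count, back-facing faces
// decrement it. The fragment is lit only if the result equals the
// reference value.
int stencilAfterPasses( int reference, float fragmentDepth,
                        const std::vector<VolumeFace> &faces )
{
    int stencil = reference;
    for ( const VolumeFace &f : faces )
        if ( f.depth < fragmentDepth )
            stencil += f.frontFacing ? 1 : -1;
    return stencil;
}
```

A fragment between the volume’s front and back faces sees only the front face pass the depth test, so its stencil value ends up one above the reference, exactly as described above.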

 

On paper it works, but I was unable to put it into practice. The main problem was obtaining a closed set of silhouette edges: quite often the shadow volumes would only be constructed part of the way round and were missing a face. This meant that shaded areas started flashing on and off, and various other unsightly artefacts appeared.

 

I decided to pull this method together with the previous techniques in order to simulate the effect as closely as possible. The steps involved are:

 

  1. clear the stencil buffer with zeros.
  2. draw the scene objects with ambient lighting only, placing a 1 in any areas where drawing occurs (excluding the ground plane).
  3. disable colour buffer and depth buffer writes.
  4. draw all edges from the mesh as though they formed a shadow volume, but only into areas where the stencil buffer equals 1; if a polygon is drawn, increase the value in the stencil buffer.
  5. enable depth and colour buffer writes.
  6. redraw the scene objects, but only in areas where the stencil buffer equals 1; this leaves the shadowed areas as they were before.
  7. disable the stencil test and draw the floor.

 

The effect this creates is pretty impressive, and only if you pay very close attention will you notice that it isn’t quite correct (Figure 15 shows the method when it appears to work; Figure 16 shows the flaws).

 

Figure 15 complex shadows that appear to work

 

 

Figure 16 look very carefully, the hoop is casting a shadow in front of the cube, yet the cube is shadowed by it.

4. Conclusion

 

Many techniques are available that try to emulate higher visual quality in real-time 3D graphics. The pros and cons of each will, to a certain degree, determine which particular technique you use; in the end the major deciding factor comes down to a balance of speed against memory requirements. If one method suits your needs, then that is probably the one for you. The largest problem with real-time computer graphics is that you can’t take into account all of the global illumination factors. There is a reason that ray tracing and radiosity methods haven’t found their way into 3D computer games: it all comes down to what price you are prepared to pay in terms of performance and memory usage.

 

 

 

5. References

 

Game Architecture And Design – Andrew Rollings / Dave Morris

ISBN 1-57610-425-7

 

3D Computer Graphics Third Edition – Alan Watt

ISBN 0-201-39855-9

 

Advanced Animation and Rendering Techniques – Alan Watt / Mark Watt

ISBN 0-201-54412-1

 

OpenGL Programming Guide Third Edition – OpenGL ARB

( Mason Woo , Jackie Neider , Tom Davis , Dave Shreiner )

ISBN 0-201-60458-2

 

Advanced 3D Game Programming Using DirectX 7.0 – Adrian Perez

ISBN 1-55622-721-3

 

Advanced Per Pixel Lighting in OpenGL - Ronald Frazier

 

Real-Time Per-Pixel Point Lights and Spot Lights in OpenGL using nVidia Register Combiners - Ronald Frazier

 

GLUT 3 Specification Document

 

GLUI 2beta Specification Document

 

OpenGLext.pdf document – www.nvidia.com

 

OpenGL1.2.1.pdf document – www.openGL.org

 

RegisterCombiners.pdf – www.nvidia.com