Facial expression in computer graphics and animation

 

As stated earlier, animating a character's facial expressions in a natural and realistic way is crucial to the success of an animation. A character simply will not look or feel believable if this aspect is not done well. However, modelling a human face and then animating the subtle changes in its expression presents a huge challenge for computer animators. Because of our familiarity with the face and how it moves, we are not very forgiving if facial animation does not look right, and we can detect even the smallest error in expression. Many techniques exist to help computer graphics artists model and animate 3D heads, and these will be discussed later.

Facial animation is not a new aspect of the industry – using computers to animate faces started more than twenty years ago. Recently, however, there has been increased interest in this area of animation, as more application areas take advantage of technology that is becoming ever more accessible and readily available. Obviously, the largest developer and user of character animation is the animation industry itself, but this can be split further into application areas: the games industry, advertising, and television and film production.

Facial animation is being used more and more within the games industry. Examples include animated characters within educational software packages, and characters in practically every video game for the PlayStation 2, Nintendo consoles and PCs (such as Mario or Crash Bandicoot). The games industry used to be limited by the CPU and graphics performance of computers, but as these improve, so does the viability of using highly realistic facial animation within video games. In-game characters are now becoming more hyper-realistic in style, taking advantage of newly developed modelling and animation techniques. This, together with high-powered games consoles, allows animators to use good facial animation techniques to further express a character's emotions.

Advertising and television and film production naturally use facial animation with their characters. Much more powerful computers are available to work with, allowing for detailed facial expression. Examples can be seen in the DreamWorks production 'Shrek' and Disney's 'Pirates of the Caribbean: The Curse of the Black Pearl'.

DreamWorks' 'Shrek' and Disney's 'Pirates of the Caribbean'

Other application areas for facial animation include medicine, teleconferencing and social user interfaces. Two aspects of facial animation are of specific interest within medicine: craniofacial surgical planning and facial tissue surgical simulation. Computer models of a patient's head can be produced by digitally scanning the head. Using these models of patients' skulls, muscles and so on, surgeons can plan operations before they take place, and run simulations on skin, muscle and bone.

 

Techniques currently available for modelling characters for facial animation

In order to animate facial expression, we must first have a computer model of a head to animate. The animator can then set up the 3D head model however they prefer, so that any facial expression can be created. The creation of artificial faces that look like a real or imaginary person is just one of the challenging aspects of computer facial animation. Work on representing faces using digital techniques dates back to the 1970s. The first three-dimensional facial animation was created in 1972 by Frederic Parke, and in 1974 Parke developed a parameterised three-dimensional face model. In recent years, good progress has been made in realistic computer facial modelling thanks to increasingly efficient computers, with their improved CPU and graphics capabilities.

 

Frederic Parke's three-dimensional parameterised face model

Many techniques currently exist for modelling digital faces, be they human or otherwise. Every technique has its advantages and disadvantages, but the choice of one method over another generally lies with what the animator or modeller prefers to work with or is more experienced in. However, the final structure of a model greatly determines its animation potential, as the way it is constructed can limit its overall facial movement. The physical mechanics of the face also need to be considered when modelling. Things the modeller may want to bear in mind during construction are:

·        The jaw must work freely, moving up and down

·        The eyes are vital in facial animation – the eyelids need to open and close, and the surrounding skin must react

·        The skin of the neck and cheeks is influenced by jaw movement and expression

·        The mouth needs sufficient detail to allow for its wide range of movement

·        The nose, ears, teeth and hair also play a part in facial expression

 

            Polygonal modelling

Polygonal modelling is very popular. Polygon models are simple to use and consume the least memory and disk space. They are also very easy to render, thanks to Gouraud shading. Every three-dimensional object created in a computer can be made of, or converted to, polygons. In polygon modelling, each 3D point is specified explicitly, and these points are connected to each other as polygons. This makes the models extremely easy to work with, as each vertex and face can be manipulated individually. Polygons have their downside, however. Organic surfaces are naturally curved, and many polygons are required to approximate a curved surface (polygon smoothing); this process can slow things down considerably. The polygons must also be placed so that their edges coincide with the creases of the face, so that it deforms correctly. Regardless of these disadvantages, polygons are an excellent modelling choice.
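As a rough sketch of the data involved – all names here are illustrative, not taken from any particular package – a polygon model is little more than a list of 3D vertex positions plus faces that index into that list. Moving a vertex is what "manipulating the mesh individually" means in practice:

```python
# A minimal polygon mesh: explicit 3D points, connected into faces by index.
vertices = [
    (0.0, 0.0, 0.0),  # vertex 0
    (1.0, 0.0, 0.0),  # vertex 1
    (1.0, 1.0, 0.0),  # vertex 2
    (0.0, 1.0, 0.0),  # vertex 3
]
faces = [(0, 1, 2), (0, 2, 3)]  # two triangles sharing the edge 0-2

# Deforming the mesh is simply moving vertices; here vertex 2 is raised.
x, y, z = vertices[2]
vertices[2] = (x, y, z + 0.5)
```

Because the faces only store indices, moving one vertex automatically deforms every face that uses it, which is why vertex placement along facial creases matters so much.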

            NURBS modelling

NURBS (Non-Uniform Rational B-Splines) are a special type of spline curve. A set of NURBS splines (also known as a patch) indirectly defines a smooth curved surface from a set of control points known as CVs (control vertices). Only a small number of CVs are required to make a complex surface, and the surface can easily be manipulated by moving the CVs. NURBS are generally preferred over polygons when creating organic objects, because only a few splines are needed to create a curved surface that would otherwise require many polygons. When creating a head from NURBS, more splines can easily be inserted where areas of high detail are needed, such as around the eyes or mouth. Patches can cause problems, however, when modelling a face containing a lot of creases.
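To show how a few CVs define a whole smooth curve, the sketch below evaluates a plain (non-rational) B-spline with the standard Cox–de Boor recursion; full NURBS additionally weight each CV, which is omitted here for brevity, and all the names and numbers are illustrative:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree k.
    Note: uses half-open spans, so evaluate at t strictly below the last knot."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k] != knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, t, knots)
    right = 0.0
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

def curve_point(t, cvs, k, knots):
    """Evaluate a non-rational B-spline curve at parameter t from its CVs."""
    x = sum(bspline_basis(i, k, t, knots) * cx for i, (cx, _) in enumerate(cvs))
    y = sum(bspline_basis(i, k, t, knots) * cy for i, (_, cy) in enumerate(cvs))
    return (x, y)

# Four CVs and a degree-2 clamped knot vector: a smooth arc from very few points.
cvs = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
knots = [0, 0, 0, 0.5, 1, 1, 1]
```

Moving any single CV reshapes the curve smoothly around it, which is exactly the editing behaviour that makes NURBS convenient for organic surfaces.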

 

      Subdivision surface modelling (sub-d's)

Subdivision surface modelling is a fairly recent technique that extends the polygon modelling method. It recursively refines a polygon mesh so that it approaches a smooth curve or surface. This allows the modeller to use polygonal modelling tools and techniques while seeing a smooth, NURBS-like surface. The method is excellent for modelling faces, as detail need only be put where it is needed, and the animator can work with a low-detail model whose resolution can easily be increased for rendering. Although sub-d's are becoming more and more popular, not all software offers this functionality, and it can consume a lot of a computer's resources.
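The surface schemes used in practice (such as Catmull–Clark) are involved, but the core idea – repeatedly refining a coarse control cage towards a smooth limit – can be sketched in one dimension with Chaikin's corner-cutting scheme, the curve analogue of subdivision. This is an illustration of the principle, not the surface algorithm itself:

```python
def chaikin(points, iterations=1):
    """Chaikin corner cutting: each pass replaces every edge of the control
    polyline with two points at 1/4 and 3/4 along it, so sharp corners are
    progressively cut away and the polyline converges to a smooth curve."""
    for _ in range(iterations):
        refined = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = refined
    return points

coarse = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]  # a sharp corner at (1, 1)
smooth = chaikin(coarse, iterations=3)         # many points, corner rounded off
```

The modeller edits only the coarse points; the refinement is recomputed automatically, which is why sub-d models stay light to work with but render smoothly.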

      Digitising

Another modelling technique is to use a digitiser on a live person or animal to produce a three-dimensional representation inside a computer. A sensor, or some other type of locating device, is moved over the subject, mapping points on its surface into the computer; these points form the surface mesh of the subject. A digitiser can produce a very highly detailed model, but such a model can be very difficult to work with and slow to update. Digitisers are also expensive pieces of equipment, and cheaper modelling techniques are capable of producing results of the same high quality.

A few other methods used to model faces (and other objects), which I have not discussed in detail, are:

·        Face generation system

·        Photogrammetric measurement

·        Volume/surface representations

As previously discussed, the modelling technique employed to create a face for animation is an important factor in how successful the animation will be. A poorly constructed model will deform incorrectly, as well as being incredibly frustrating to work with. The facial movement is determined not only by the character set-up and animation system used, but also by how and where the mesh deforms – around the corners of the mouth, for example. A well-constructed model also makes rigging a face for animation an easier job, as less work is needed to produce expressions if the mesh already creases in the correct places – it will save a lot of tweaking later.

For my facial animation system I needed to decide which modelling technique to use to construct the face on which the system will operate. Polygons, NURBS and subdivision surfaces are the most effective techniques available to me, as they are all simple to manipulate and all update well in real time, without putting too much strain on the computer resources I have available. Using a digitiser is impractical, as adjusting the resulting mesh is very time-consuming. The most important part of my project is the muscle animation system itself, so I considered using a model available for free download from the Internet, or having a model donated to me that somebody else has constructed.

 

Current available methods to animate facial expression

After modelling a character, the next step is rigging it ready for animation. A good character set-up will almost certainly save time and effort during the animation stage, so this part of the process should not be taken lightly. My muscle-based rig will be discussed in a later chapter. After rigging, the next step is the animation itself. Numerous methods exist for animating a character's face, and again, which is chosen depends on the animator and the desired final effect. Several of the following methods can be combined if necessary. The most widely used facial animation techniques are discussed below.

Keyframing

Keyframing, or interpolation, is probably the most widely used facial animation technique. It is the most straightforward method and the simplest to set up. The animator creates a set of key poses at prime points in the animation, and the system interpolates between each pair of poses. This method, although straightforward, is better suited to animating simpler objects than an intricate character's face, as there are many points on a face that must move individually to produce complex expressions.
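In its simplest form the in-between poses are produced by linear interpolation of every animated value. The sketch below assumes poses are stored as dictionaries of named controls – the control names are made up for illustration, and real systems usually replace the straight line with ease-in/ease-out curves:

```python
def interpolate(pose_a, pose_b, t):
    """Linearly interpolate between two key poses for 0 <= t <= 1.
    Each pose maps a control name to a value; both poses share the same keys."""
    return {name: (1.0 - t) * pose_a[name] + t * pose_b[name] for name in pose_a}

# Two hypothetical key poses at prime points in the animation.
neutral = {"jaw_open": 0.0, "brow_raise": 0.0}
surprise = {"jaw_open": 0.8, "brow_raise": 1.0}

halfway = interpolate(neutral, surprise, 0.5)  # the in-between frame
```

The limitation the text mentions is visible here: every control travels on its own straight line at the same rate, so subtle coordinated facial motion needs many more keys or a richer rig.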

Blend shapes/morph targets

Using blend shapes is a very popular way to animate expressions, and it is extremely effective if done well. The animator produces a series of heads (from the same character) showing every different expression the character is required to make in the animation. These 'morph targets' are then combined with the original, and a set of interactive sliders can be used to transform between the expressions. Different expressions can also be blended together, producing new results. This method, although effective, is limited by the number of morph targets created, so the range of possible expressions may not be as diverse as needed.
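Under the sliders, a blend-shape system typically computes the deformed mesh as the base mesh plus a weighted sum of each target's per-vertex offset from the base. A minimal sketch of that arithmetic, with a made-up two-vertex "mesh" standing in for a real head:

```python
def blend(base, targets, weights):
    """Blend-shape combination: base mesh plus weighted per-vertex offsets of
    each morph target from the base. Meshes are lists of (x, y, z) tuples with
    identical vertex ordering; weights[i] is the slider value for targets[i]."""
    result = []
    for i, (bx, by, bz) in enumerate(base):
        dx = dy = dz = 0.0
        for target, w in zip(targets, weights):
            tx, ty, tz = target[i]
            dx += w * (tx - bx)
            dy += w * (ty - by)
            dz += w * (tz - bz)
        result.append((bx + dx, by + dy, bz + dz))
    return result

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # neutral "mesh" (two vertices)
smile = [(0.0, 1.0, 0.0), (1.0, 0.5, 0.0)]  # one morph target
half_smile = blend(base, [smile], [0.5])    # slider at 50%
```

Because the offsets simply add, several targets can be mixed at once – but no slider setting can reach a shape outside the span of the sculpted targets, which is the limitation noted above.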

A face with two blend shape targets modelled

Character animation tools

These are tools that can be applied during the character rigging stage to help animate the face. They are mainly aids to animation, and should not be relied upon as the sole way to animate facial expressions, as their results are limited. These tools include free-form deformations (FFDs), bones and clusters. All of these tools alter the mesh they affect in some way.
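A cluster is perhaps the easiest of these deformers to sketch: a set of vertices follows a single control's movement, each scaled by a per-vertex weight. The function and data below are illustrative, not any package's actual API:

```python
def cluster_deform(vertices, weights, offset):
    """A cluster-style deformer sketch: every vertex follows the cluster's
    translation `offset`, scaled by its weight (0 = unaffected, 1 = rigid).
    Painting a smooth weight falloff gives a soft, localised deformation."""
    ox, oy, oz = offset
    return [(x + w * ox, y + w * oy, z + w * oz)
            for (x, y, z), w in zip(vertices, weights)]

# One fully weighted vertex and one half-weighted vertex, pulled upwards.
moved = cluster_deform([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
                       [1.0, 0.5],
                       (0.0, 2.0, 0.0))
```

The limited expressiveness the text mentions follows from the model: the whole cluster moves as one, so nuanced expressions need many overlapping deformers or a different technique.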

Muscle simulation systems

In 1987, Keith Waters created a simple muscle-driven animation system. It allowed a variety of facial expressions to be created by controlling computer-generated 'muscles' that pulled the skin. This type of system is not as popular as the methods above, but it can be extremely effective. The DreamWorks picture 'Shrek' used a highly sophisticated muscle system to animate its characters to a degree of realism never before seen.
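The flavour of such a system can be sketched as follows. This is a much-simplified 2D take on the idea, not Waters' actual formulation: a muscle runs from a fixed bone-attached origin to a skin insertion point, and contracting it pulls nearby skin vertices towards the origin with a smooth falloff. The cosine falloff and all names and numbers are my own illustrative choices:

```python
import math

def linear_muscle(vertices, origin, insertion, contraction, radius):
    """Simplified linear-muscle sketch (2D): vertices within `radius` of the
    skin insertion point are pulled towards the bone-attached origin, with a
    cosine falloff so the influence fades smoothly to zero at the edge."""
    ox, oy = origin
    moved = []
    for x, y in vertices:
        d = math.hypot(x - insertion[0], y - insertion[1])
        if d < radius:
            falloff = 0.5 * (1.0 + math.cos(math.pi * d / radius))
            moved.append((x + contraction * falloff * (ox - x),
                          y + contraction * falloff * (oy - y)))
        else:
            moved.append((x, y))  # outside the zone of influence
    return moved

skin = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)]  # a few 2D "skin" points
pulled = linear_muscle(skin, origin=(0.0, 2.0), insertion=(0.0, 0.0),
                       contraction=0.5, radius=2.0)
```

One `contraction` parameter drives a whole region of skin coherently, which is why a handful of well-placed muscles can produce a large range of expressions compared with animating vertices directly.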

Other animation techniques not discussed include:

·        Motion capture

·        Parameterised systems

·        Speech generated systems

A full discussion of the facial animation technique that I will use in my project is presented in section 'My muscle driven facial animation system'.

 
