Deciding on the project.
I decided to investigate two methods. The first is blendshapes, the technique most
commonly used for facial animation in high-end packages. Animating lip-sync with
blendshapes relies on the principle that all the sounds we make are produced by a
limited number of mouth shapes, corresponding to the phonemes of speech. If we
create deformed versions of a base face representing each of these shapes, we can
morph between them to create the illusion of speech (a rough sketch of this blending
is given below). The second technique is motion capture. We used motion capture to
good effect in my group project last year, and I felt it would be suitable for
experimenting with in a games environment. I took some parts of the program from Tom
Box, who wrote the motion-tracking program we used last year; the code I reused was
for saving and loading JPEG images. The rest I decided to rewrite myself, partly to
get practice in C programming for the exams, and partly to make it more suitable for
working with Maya (the group project was produced in CGAL).
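
To make the blendshape idea concrete, here is a minimal sketch of the blending step
in C. The data layout and names are my own illustration rather than anything from the
actual scenes (in Maya itself the blendShape deformer handles this), but it shows the
principle: the result is the base face plus a weighted sum of per-vertex offsets
towards each phoneme shape.

    /* Minimal blendshape mixing sketch. Each target stores the same number
     * of vertices as the base mesh; the output is the base face plus a
     * weighted sum of per-vertex offsets towards each phoneme target. */

    #include <stddef.h>

    typedef struct { float x, y, z; } Vec3;

    void blend_shapes(const Vec3 *base,        /* neutral face           */
                      const Vec3 **targets,    /* one mesh per phoneme   */
                      const float *weights,    /* 0..1 weight per target */
                      size_t num_targets,
                      size_t num_verts,
                      Vec3 *out)               /* blended result         */
    {
        for (size_t v = 0; v < num_verts; ++v) {
            Vec3 p = base[v];
            for (size_t t = 0; t < num_targets; ++t) {
                p.x += weights[t] * (targets[t][v].x - base[v].x);
                p.y += weights[t] * (targets[t][v].y - base[v].y);
                p.z += weights[t] * (targets[t][v].z - base[v].z);
            }
            out[v] = p;
        }
    }

Animating the weights over time, so that only one or two phoneme shapes are active at
any moment, is what produces the appearance of speech.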
Deciding what artefact to produce.
I decided to work to a specification of Dreamcast / mid-range PC level performance,
so the character would be built from around 1000 polygons. The best way to show the
techniques I was investigating seemed to be to produce a short animation, set in an
archetypal game setting – the weapon shop. I built the shop with a minimal number of
polygons and low texture resolution, in keeping with the game specification. I wanted
my character to say a short sentence, and settled on "Anything I can help you with?
Guns, bombs, sharp pointy sticks?". This was long enough to demonstrate the techniques
effectively, but short enough that test renders and MEL scripts would not take too
long to execute. I originally wanted the character to be performing some kind of
action as well, but realised that this would distract from the lip-sync.
Filming the source material.
I needed to record source material for the motion capture, so I borrowed a home
camcorder from a friend and used a 500W garden light as a light source. I had to
sacrifice my beard (in the name of art) to help with the tracking, and I increased
the shutter speed on the camera, as any motion blur would interfere with the tracking
program. I used 10 tracking points: one on my nose as a base point, 8 around the
mouth at the same positions as the vertices on the model, and one on the chin to
measure the displacement of the jaw (the base-point idea is sketched in the first
code fragment below). I filmed a number of takes and chose the one with the least
head rotation and a reading of the line I liked. The video was digitised on a Matrox
Rainbow Runner, then converted to a series of JPEGs. The video had to be de-interlaced
before it could be tracked effectively, so I created an action in Photoshop to
de-interlace the frames and reduce them to half-PAL size (the second fragment below
shows the equivalent operation in code).
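
To illustrate why the nose marker is useful as a base point, here is a small sketch
(the layout and names are assumed for illustration, not taken from the actual tracking
program). Expressing every tracked point relative to the nose means that small head
translations between frames do not show up as mouth or jaw movement.

    /* Express the tracked 2D points relative to the nose "base point" so
     * head translation between frames is cancelled out. Point indices are
     * illustrative: 0 = nose, 1..8 = mouth ring, 9 = chin. */

    #include <stddef.h>

    #define NUM_POINTS 10

    typedef struct { float x, y; } Point2;

    void make_relative(const Point2 raw[NUM_POINTS], Point2 rel[NUM_POINTS])
    {
        Point2 nose = raw[0];
        for (size_t i = 0; i < NUM_POINTS; ++i) {
            rel[i].x = raw[i].x - nose.x;
            rel[i].y = raw[i].y - nose.y;
        }
        /* The vertical offset of the chin point (rel[9].y) then gives a
         * measure of how far the jaw has dropped. */
    }

The chin point is the one that drives the jaw, while the eight mouth points map
directly onto the corresponding vertices of the model.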
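
The de-interlace and resize step was done with a Photoshop action, but the same
operation can be sketched in code: keep one field (every other scanline) to remove
the interlace combing, and average horizontal pixel pairs to bring the frame down to
half-PAL size. The 8-bit greyscale frame format here is an assumption made purely for
the sake of a short example.

    /* De-interlace by keeping the even field only, and halve the width by
     * averaging horizontal pixel pairs. dst must be (src_w/2) x (src_h/2). */

    #include <stddef.h>

    void deinterlace_half(const unsigned char *src, size_t src_w, size_t src_h,
                          unsigned char *dst)
    {
        size_t dst_w = src_w / 2;
        size_t dst_h = src_h / 2;

        for (size_t y = 0; y < dst_h; ++y) {
            const unsigned char *row = src + (2 * y) * src_w; /* even field */
            for (size_t x = 0; x < dst_w; ++x) {
                dst[y * dst_w + x] =
                    (unsigned char)((row[2 * x] + row[2 * x + 1]) / 2);
            }
        }
    }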