The blendshape section of the project was considerably easier to implement,
because the functionality is built into Maya. I used seven phoneme shapes
to create the mouth movement, and two further shapes to control the eyes (one
for blinking, the other for eyebrow movement). One technical issue I had to
overcome concerned workflow: it is convenient to keyframe the lips first and
then go back over the animation to add the eye movement, but Maya's
implementation of blendshapes makes this difficult, as all the blendshape
channels on a node are keyed together, so adding extra keyframes for the eyes
disturbs the existing lip animation curves. To get around this problem, I
added an extra stage of blendshapes. I created two faces, one to control the
lip movement and one to control the eye movement. These both acted as
blendshape targets for the master head, with the influence set to 1 in both
cases. As a result, I could animate the lips and eyes separately using the
two heads, which helps the workflow, as you only have to concentrate on one
area at a time.
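
Roughly, the layered setup could be scripted like this through Maya's Python
interface; this is only a sketch, and the mesh and target names here are
illustrative rather than the ones from my scene (which used seven phoneme
targets):

    import maya.cmds as cmds

    # Intermediate stage: phoneme targets deform the lip head, and the
    # blink/brow targets deform the eye head (names are illustrative).
    lip_bs = cmds.blendShape('phonA', 'phonE', 'phonO', 'lipHead', name='lipBS')[0]
    eye_bs = cmds.blendShape('blink', 'brow', 'eyeHead', name='eyeBS')[0]

    # Final stage: both intermediate heads act as targets for the master
    # head, with their influences locked at 1 so their deformations always
    # pass through to the final face.
    master_bs = cmds.blendShape('lipHead', 'eyeHead', 'masterHead', name='masterBS')[0]
    cmds.blendShape(master_bs, edit=True, weight=[(0, 1.0), (1, 1.0)])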

Animating the lip-sync proved quite difficult, especially during fast speech.
Even though I had the motion-captured head as a guide, it was still hard to
get the timing right, and to stop the blendshapes from looking as though they
were simply stepping from pose to pose.
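
One thing that helps with that pose-to-pose look is the tangent type on the
weight curves. As a rough sketch (the deformer and target names are
illustrative, and this is one possible approach rather than a record of my
exact settings), splining the tangents on the phoneme channels makes the
mouth ease between targets instead of snapping:

    import maya.cmds as cmds

    # Smooth the animation curve on each phoneme weight channel so the
    # mouth eases between shapes (names are illustrative).
    for target in ['phonA', 'phonE', 'phonO']:
        cmds.keyTangent('lipBS', attribute=target,
                        inTangentType='spline', outTangentType='spline')
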
The motion tracking and blendshape processes were implemented side by side in
the project by attaching two duplicate heads to the character's skeleton;
whichever one is not required can simply be hidden. This helped the project
because the same body animation served both versions, so differences in the
rest of the performance could not distract from the main focus of the
project, the lip movement.
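
Switching between the two versions is then just a visibility toggle; a small
sketch, again with illustrative node names:

    import maya.cmds as cmds

    def show_head(use_mocap):
        """Show one duplicate head and hide the other (names illustrative)."""
        cmds.setAttr('mocapHead.visibility', 1 if use_mocap else 0)
        cmds.setAttr('blendshapeHead.visibility', 0 if use_mocap else 1)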