
RESEARCH.

HARDWARE-BASED SOLUTIONS.

                    This is intended only as a brief grounding in the subject of hardware motion capture, and I have tried to keep it as short as possible.  At various points throughout this 'potted' history I refer to motion capture systems for the body; these can easily be extended to cover facial animation.  Follow these links for far more information on the subject than I am willing to write in this report.

                    The use of motion capture as a means of producing computer animation is a fairly recent development.  It began in the 1970s with Disney.  Although strictly not motion capture in the sense we understand it today, Disney employed a technique known as 'rotoscoping', which involves tracing animation over live-action footage, and it is still used successfully today.

                    At the New York Institute of Technology Computer Graphics Lab, Rebecca Allen used a half-silvered mirror to superimpose video footage of real dancers onto a computer screen.  These images were then rotoscoped.  This was the first use of computers to capture animation directly from live footage.

                    Around 1980-1983, Tom Calvert, a professor of kinesiology and computer science at Simon Fraser University, attached potentiometers to a body and used the output to drive computer-animated figures for choreographic studies.   Also in the early 1980s, both the MIT Architecture Machine Group and the New York Institute of Technology Computer Graphics Lab experimented with optical tracking of the human body.  Optical trackers use small markers attached to the body - either flashing LEDs or small reflective dots - and a series of two or more cameras focused on the performance space.  A combination of special hardware and software picks out the markers in each camera's visual field and, by comparing the images, calculates the three-dimensional position of each marker through time.
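
                    As an aside, a minimal sketch of the geometry behind such optical trackers follows.  With two or more calibrated cameras, a marker's three-dimensional position can be recovered by linear triangulation of its image positions.  The camera matrices and pixel coordinates in this Python sketch are invented purely for illustration; they are not taken from any of the systems described above.

import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from its projections in two calibrated cameras.

    P1, P2  : 3x4 camera projection matrices.
    uv1, uv2: (u, v) pixel coordinates of the marker in each image.
    Returns the estimated 3D position as a length-3 array.
    """
    # Each view contributes two linear equations:
    # (u * P[2] - P[0]) . X = 0  and  (v * P[2] - P[1]) . X = 0.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: one at the origin, one shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

marker = np.array([0.2, 0.1, 4.0, 1.0])          # a marker 4 units in front
uv1 = (P1 @ marker)[:2] / (P1 @ marker)[2]       # its projection in camera 1
uv2 = (P2 @ marker)[:2] / (P2 @ marker)[2]       # its projection in camera 2
print(triangulate(P1, P2, uv1, uv2))             # roughly [0.2, 0.1, 4.0]

                    Real systems refine this with many cameras, marker identification and filtering over time, but the underlying calculation is the same comparison of views described above.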

                    1988 was the time of 'Mike the Talking Head', the first real use of hardware motion capture for facial animation.  DeGraf and Wahrman used Silicon Graphics machines to provide real-time interpolation between facial expressions for Mike.  Mike was a marionette driven by a specially built controller that allowed a single puppeteer to control many parameters of the character's face, including mouth, eyes, expression and head position.  Mike was performed live in that year's SIGGRAPH film and video show, and the performance clearly showed that the technology was ripe for exploitation in production environments.
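
                    The kind of real-time interpolation mentioned above can be thought of as simple linear blending between a neutral face and a set of expression key shapes, with the controller readings supplying the blend weights each frame.  The toy shapes and weights in this Python sketch are invented for illustration and are not taken from the DeGraf/Wahrman system.

import numpy as np

neutral = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])     # toy 3-vertex "face"
expressions = {
    "smile":    np.array([[-0.1, 0.1], [1.1, 0.1], [0.5, 1.0]]),
    "open_jaw": np.array([[0.0, -0.3], [1.0, -0.3], [0.5, 1.0]]),
}

def blend(weights):
    """Interpolate the face: neutral plus weighted offsets of each expression."""
    face = neutral.copy()
    for name, w in weights.items():
        face += w * (expressions[name] - neutral)
    return face

# Each frame, controller readings (dials, sliders, ...) become blend weights.
print(blend({"smile": 0.7, "open_jaw": 0.2}))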

                    1988 also saw the arrival of 'Waldo C. Graphic'.  Jim Henson Productions, in combination with Pacific Data Images, hooked a custom eight-degree-of-freedom input device (a kind of mechanical arm with upper and lower jaw attachments) through a standard SGI dial box, and with it they were able to control the position and mouth movements of a low-resolution character in real time.  The resulting computer image was mixed with the live video feed of the camera focused on the real puppets so that everyone could perform together.

                    1991 saw the arrival of a real-time character animation system whose first success was the daily production of a character called 'Mat the Ghost'.  Mat was a friendly green ghost who interacted with live actors and puppets on a daily children's show called 'Canaille Peluche'.  Using Data Gloves, joysticks, MIDI drum pedals and other digital interactive devices, puppeteers performed Mat, chroma-keyed with previously shot video of the live actors.  The finger motions, joystick movements and so on of the puppeteers were transformed into facial expressions and effects of the character, while the motion of the actor was mapped directly to the character's body.

                    The final real innovation in hardware motion capture for facial animation arrived in 1992.  SimGraphics had long been in the VR business, having built systems for some of the first Data Gloves in 1987.  Around 1992 they developed a facial tracking system they nicknamed the 'Face Waldo'.  Using mechanical sensors attached to the chin, lips, cheeks and eyebrows, and electromagnetic sensors attached to the head, they could track the most important motions of the face and map them in real time onto computer models.  The importance of this system was that one actor could manipulate all the facial expressions of a character simply by miming the expressions himself.  At last a truly natural performance could be achieved.
 
 
 
 
 
