
There are many different ways to approach the recovery of a camera position in three-dimensional space. Most modern methods use information from the scene's two-dimensional projection onto the image plane (screen), rather than mechanical devices that record the live-action camera's motion. The system presented in this paper is a piece of software that combines information supplied by the user with the output of a tracking program (which is also outlined in this paper). The tracking program determines, frame by frame, the image-plane positions of the reference points that make up a recognisable feature. The feature tracked in this system is a set of three points on a planar surface defining two perpendicular, intersecting lines (such as the top and side edges of a window or poster). This means that only three points need be tracked by the computer.
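As an illustration of how little data the tracker needs to supply, the per-frame output could be represented as follows. This is a hypothetical sketch, not the paper's actual tracker interface; the field names are invented for clarity.

```python
from dataclasses import dataclass

# Hypothetical sketch of the tracker's per-frame output: the image-plane
# positions of the three reference points that define the right-angle
# feature (the corner, plus one point on each of the two perpendicular
# edges through it). Names and coordinates are illustrative only.

@dataclass
class TrackedFeature:
    corner: tuple  # (x, y) image coordinates of the corner point
    edge_a: tuple  # a point on the first edge through the corner
    edge_b: tuple  # a point on the second, perpendicular edge

# One frame's worth of tracking data: just three 2D points.
frame_0 = TrackedFeature(corner=(312.0, 240.0),
                         edge_a=(410.5, 238.2),
                         edge_b=(314.1, 150.7))
```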
In the recovery method described, the computer is given information about the feature being tracked: as well as the feature's location on the image plane, it is told the angle and distance between the points in the real scene (the feature contains a right angle, which simplifies the problem). By solving a series of simple equations, the system recovers the three positional components and three rotational components of the camera in world space.
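The recovery problem inverts the familiar pinhole projection: each of the three tracked points contributes two image coordinates, giving six equations for the six camera unknowns (three positional, three rotational). The forward model being inverted can be sketched as below; this is a minimal illustration assuming a simple pinhole camera looking along +z, not the paper's actual solver, and the function names are invented.

```python
import math

def rotation_xyz(rx, ry, rz):
    """3x3 rotation matrix from three Euler angles (radians), composed Rz*Ry*Rx."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    Ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]
    return matmul(Rz, matmul(Ry, Rx))

def project(point, cam_pos, cam_rot, focal=1.0):
    """Pinhole projection of a world point into image-plane coordinates."""
    # Transform into camera coordinates: p_cam = R * (p_world - cam_pos).
    d = [point[i] - cam_pos[i] for i in range(3)]
    p = [sum(cam_rot[i][j] * d[j] for j in range(3)) for i in range(3)]
    # Perspective divide by depth (camera assumed to look along +z).
    return (focal * p[0] / p[2], focal * p[1] / p[2])

# The right-angle feature: a corner at the origin and one point a known
# distance along each of two perpendicular edges.
feature = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

# A camera 5 units back with no rotation; each tracked point yields two
# image coordinates, so three points give six equations in six unknowns.
R = rotation_xyz(0.0, 0.0, 0.0)
screen = [project(p, (0.0, 0.0, -5.0), R) for p in feature]
```

Solving the six resulting equations for the camera position and rotation, given the known world-space geometry of the feature and its tracked screen positions, is exactly the inverse of this projection.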
The basics of three systems are described in this paper; for each, two feature-tracking programs that can be used in its implementation are outlined.