The system provides improved procedures to estimate head motion between two images
of a face. Locations of a number of distinct facial features are identified in
two images. The identified locations can correspond to the eye corners, mouth corners
and nose tip. The locations are converted into a set of physical face parameters
based on the symmetry of the identified distinct facial features. This parameterization
reduces the number of unknowns relative to the number of equations available to
solve for them. An initial head motion estimate is determined by:
(a) estimating each of the set of physical parameters, (b) estimating a first head
pose transform corresponding to the first image, and (c) estimating a second head
pose transform corresponding to the second image. The head motion estimate can
be incorporated into a feature matching algorithm to refine both the motion estimate
and the physical face parameters. In one implementation, an inequality constraint
is placed on a particular physical parameter, such as the nose tip position, restricting
the parameter to lie between predetermined minimum and maximum values. The inequality
constraint is converted to an equality constraint by using a penalty function, which
is then applied during the initial head motion estimation to make the motion estimation
more robust.
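The penalty-function conversion described above can be sketched as follows. This is a minimal illustrative sketch assuming a common quadratic exterior-penalty form; the function names, bounds, and penalty weight are hypothetical and not taken from the source, which does not specify the exact penalty used.

```python
import numpy as np

def penalty(value, lo, hi, weight=1e3):
    """Quadratic exterior penalty: zero inside [lo, hi], grows outside.

    Replacing a hard inequality lo <= value <= hi with this smooth term
    lets the constraint be folded into an unconstrained least-squares
    objective for the motion estimation.
    """
    if value < lo:
        return weight * (lo - value) ** 2
    if value > hi:
        return weight * (value - hi) ** 2
    return 0.0

def objective(residuals, nose_tip_param, lo=0.8, hi=1.2):
    """Sum of squared feature-matching residuals plus the penalty term
    keeping the (hypothetical) nose-tip parameter within [lo, hi]."""
    return float(np.sum(np.asarray(residuals) ** 2)
                 + penalty(nose_tip_param, lo, hi))
```

A minimizer driving `objective` toward zero then keeps the constrained parameter near its admissible range without handling the inequality explicitly, which is the robustness benefit the text describes.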