Two video signals, typically an original signal and a degraded version of
the same signal, are first analyzed to identify perceptually relevant
boundaries of elements forming video images depicted therein. These
boundaries are then compared to determine the extent to which the
properties of the boundaries defined in one image are preserved in the
other, to generate an output indicative of the perceptual difference
between the first and second signals. The boundaries may be defined by
edges; by contrasts in color, luminance, or texture; by disparities between
frames in a moving or stereoscopic image; or by other means. The presence,
absence, difference in clarity, or difference in the means of definition of the
boundaries is indicative of the perceptual importance of the differences
between the signals, and therefore of the extent to which any degradation
of the signal will be perceived by the human viewer of the resulting
degraded image. The results may also be weighted according to the
perceptual importance of the content depicted, for example the features
which identify a human face, and in particular those responsible for
visual speech cues.
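
As a rough illustration of the comparison and weighting steps described
above, the sketch below extracts a binary boundary map from corresponding
frames of the two signals with a Canny edge detector, treats boundaries
present in one map but absent from the other as perceptible change, and
emphasizes face regions with a simple detector mask. The detector choices,
thresholds, and function names (boundary_map, face_weight_mask,
perceptual_edge_difference) are assumptions made for illustration and are
not specified here.

import cv2
import numpy as np

def boundary_map(frame_gray, low=100, high=200):
    # Binary map of perceptually relevant boundaries; Canny edges are one
    # possible definition among those listed above (edges, contrasts, etc.).
    return cv2.Canny(frame_gray, low, high) > 0

def face_weight_mask(frame_gray, boost=3.0):
    # Weight mask that counts boundaries inside detected face regions more
    # heavily (illustrative Haar-cascade face detector).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    mask = np.ones(frame_gray.shape, dtype=np.float32)
    for (x, y, w, h) in cascade.detectMultiScale(frame_gray):
        mask[y:y + h, x:x + w] = boost
    return mask

def perceptual_edge_difference(original_gray, degraded_gray):
    # 0.0 means the boundary maps agree everywhere; larger values mean more
    # boundaries were lost, blurred away, or introduced by the degradation.
    ref = boundary_map(original_gray)
    deg = boundary_map(degraded_gray)
    weights = face_weight_mask(original_gray)
    mismatch = np.logical_xor(ref, deg).astype(np.float32)
    return float((mismatch * weights).sum() / weights.sum())

A per-frame score of this kind could be averaged over a sequence to give a
single indication of how visible the degradation is; comparing boundary
clarity or the means by which each boundary is defined would require more
than the binary agreement test used in this sketch.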