The present invention relates to a method and an apparatus for combining
virtual reality with a real-time environment. The present invention provides
a system that combines captured real-time video data with real-time 3D
environment rendering to create a fused (combined) environment. The
system captures video imagery and processes it to determine which areas
should be made transparent (or have other color modifications made),
based on sensed features, cultural features, and/or sensor line-of-sight. Sensed
features can include electromagnetic radiation characteristics (e.g.,
color, infrared, or ultraviolet light). Cultural features can include
patterns of these characteristics (e.g., objects recognized using edge
detection). This processed image is then overlaid on a 3D environment to
combine the two data sources into a single scene. This creates an effect
where a user can look through "windows" in the video image into a 3D
simulated world, and/or see other enhanced or reprocessed features of the
captured image.
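The pipeline described above can be illustrated with a short sketch. The
following is a minimal, hypothetical example, not the claimed implementation:
it assumes OpenCV and NumPy are available, treats a color range as the
"sensed feature," builds a per-pixel transparency mask from it, and then
composites the masked video frame over a rendered 3D frame so the simulated
world shows through the resulting "windows." The file names and threshold
values are placeholders.

```python
import cv2
import numpy as np

def make_transparency_mask(frame_bgr, key_lower, key_upper):
    """Mark pixels whose sensed color falls in the keyed range as transparent.

    A hypothetical stand-in for feature sensing: here the "sensed feature"
    is simply a color range in HSV space.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    keyed = cv2.inRange(hsv, key_lower, key_upper)    # 255 where keyed
    alpha = 1.0 - (keyed.astype(np.float32) / 255.0)  # 0.0 = transparent
    return alpha

def fuse(video_frame, rendered_3d_frame, alpha):
    """Overlay the processed video on the rendered 3D scene.

    Where alpha is 0 the 3D environment shows through a "window" in the
    video; where alpha is 1 the captured imagery is kept.
    """
    a = alpha[..., None]  # broadcast the mask over the color channels
    fused = (a * video_frame.astype(np.float32)
             + (1.0 - a) * rendered_3d_frame.astype(np.float32))
    return fused.astype(np.uint8)

# Example: key out a green range (hypothetical threshold values).
lower = np.array([45, 80, 80], dtype=np.uint8)
upper = np.array([75, 255, 255], dtype=np.uint8)
video = cv2.imread("camera_frame.png")    # captured real-time imagery
scene = cv2.imread("rendered_scene.png")  # real-time 3D rendering
mask = make_transparency_mask(video, lower, upper)
cv2.imwrite("fused.png", fuse(video, scene, mask))
```

A cultural-feature variant of this sketch could derive the mask from an edge
map (e.g., cv2.Canny) or another object-recognition step instead of a color
key; the compositing stage would be unchanged.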