An auditory scene is synthesized from a mono audio signal by modifying, for each
critical band, one or more auditory scene parameters (e.g., an inter-aural level
difference (ILD) and/or an inter-aural time difference (ITD)) for each sub-band
within that critical band, where the modification is based on the average
estimated coherence for the critical band. This coherence-based modification
produces auditory scenes whose objects have widths that more closely match those
of the corresponding objects in the original input auditory scene.
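As a rough illustration of the per-band processing described above, the following
Python sketch averages the estimated coherence over each critical band and uses
one minus that average to scale a pseudo-random perturbation of every sub-band's
ILD and ITD, so that low-coherence bands yield wider-sounding objects. The
function name, parameter values, and the jitter-based spreading scheme are
illustrative assumptions, not the specific method claimed here.

```python
import numpy as np

def modify_cues(ild, itd, coherence, bands,
                ild_spread_db=3.0, itd_spread_samples=2.0, rng=None):
    """Spread per-sub-band ILD/ITD cues according to each critical
    band's average estimated coherence (a hypothetical sketch).

    ild, itd  : base per-sub-band cues (dB, samples)
    coherence : per-sub-band coherence estimates in [0, 1]
    bands     : list of index arrays, one per critical band
    """
    if rng is None:
        rng = np.random.default_rng(0)
    ild = np.asarray(ild, dtype=float).copy()
    itd = np.asarray(itd, dtype=float).copy()
    coherence = np.asarray(coherence, dtype=float)
    for idx in bands:
        # One coherence value per critical band: the average over
        # all of that band's sub-bands.
        avg_coh = coherence[idx].mean()
        # Low coherence implies a wide source, so apply a larger
        # pseudo-random perturbation across the band's sub-bands
        # (the uniform jitter is an assumed spreading scheme).
        spread = 1.0 - avg_coh
        jitter = rng.uniform(-1.0, 1.0, size=len(idx))
        ild[idx] += spread * ild_spread_db * jitter
        itd[idx] += spread * itd_spread_samples * jitter
    return ild, itd

# Example: two critical bands; the second (low-coherence) band
# receives visibly larger cue perturbations than the first.
bands = [np.arange(0, 4), np.arange(4, 10)]
coherence = np.array([0.9] * 4 + [0.3] * 6)
ild, itd = modify_cues(np.zeros(10), np.zeros(10), coherence, bands)
```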