ambisonic rendering

presenting full-sphere auralizations

The creation of these highly accurate acoustical simulations is futile if a rendering system is not available to present the sound simulations spatially.  The original presentation of these auralizations for review by critical listeners dictated several requirements:
  1. An accurate spatial rendering of early reflections and reverberation to a single listener,
  2. Vertical reproduction capability to permit rendering of overhead reflections arriving from the ceiling canopy,
  3. No requirement for listener-specific rendering adaptations: i.e., no binaural techniques that would require listener-specific HRTFs.
For this work, an Ambisonics rendering system was utilized to provide full-sphere rendering to listeners seated in a hemi-anechoic listening room.


ambisonics for periphonic audio

Ambisonics is a technique for encoding a directional sound field into a number of spherical harmonic directivity components.  The impulse responses from the ray tracing engine were synthesized into second-order Ambisonics components, which include an omnidirectional component (0th-order) W, three bidirectional components (1st-order) X, Y, and Z, and five 2nd-order components R, S, T, U, and V.
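As an illustration only, the following MATLAB sketch encodes a mono signal arriving from a given azimuth and elevation into the nine second-order components using one common set of Furse-Malham (FuMa) encoding equations; the channel weightings and coordinate conventions actually used in the project may differ.

    % encode_fuma_2nd.m -- sketch of 2nd-order Ambisonic encoding (FuMa-style).
    % s:  mono signal (column vector), az/el: source azimuth/elevation in radians.
    % Returns B, a [samples x 9] matrix of components [W X Y Z R S T U V].
    function B = encode_fuma_2nd(s, az, el)
        W = s * (1/sqrt(2));               % 0th order (FuMa -3 dB weighting on W)
        X = s * (cos(az)*cos(el));         % 1st-order bidirectional components
        Y = s * (sin(az)*cos(el));
        Z = s * sin(el);
        R = s * (1.5*sin(el)^2 - 0.5);     % 2nd-order components
        S = s * (cos(az)*sin(2*el));
        T = s * (sin(az)*sin(2*el));
        U = s * (cos(2*az)*cos(el)^2);
        V = s * (sin(2*az)*cos(el)^2);
        B = [W X Y Z R S T U V];
    end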

These component channels may then be decoded onto an arbitrary loudspeaker array using a simple gain matrix (plus optional shelf-filtering).  For this project, a dodecahedron loudspeaker array was employed, which allows for the symmetrical decoding of all nine 2nd-order components at the center listening position, reproducing a full-sphere (periphonic) sound field.
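A basic gain-matrix decoder can be sketched as follows: evaluate the spherical harmonics in each loudspeaker direction and invert the resulting matrix (a mode-matching, pseudoinverse decode).  The loudspeaker directions here are placeholders, not the measured positions of the project's array, and the dual-band shelf filtering used in the actual decoder is not shown.

    % Sketch of a mode-matching (pseudoinverse) 2nd-order decoder.
    % spkr_az, spkr_el: 1 x 12 vectors of loudspeaker directions in radians (placeholders).
    C = zeros(9, numel(spkr_az));
    for k = 1:numel(spkr_az)
        C(:,k) = encode_fuma_2nd(1, spkr_az(k), spkr_el(k)).';  % harmonics toward each loudspeaker
    end
    D = pinv(C);          % 12 x 9 decoding gain matrix
    feeds = B * D.';      % B is [samples x 9]; feeds is [samples x 12] loudspeaker signals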



system implementation

The dodecahedron loudspeaker system was integrated into a hemi-anechoic listening room, in order to minimize the impact of listening-room boundary reflections on the auralizations.  The decoding was accomplished in real time using a pair of BSS Soundweb DSP processors.  Twelve EAW JF60 loudspeakers comprised the dodecahedron array, with a pair of EAW SB120 subwoofers extending the low-frequency response of the system.  The user interface for the auralizations was prepared using MATLAB, combined with the pa-wavplay multichannel audio library and a MOTU 24 I/O audio interface.
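A playback call in this setup might look like the sketch below, which sends the twelve decoded loudspeaker feeds to the multichannel interface via pa-wavplay.  The device index, device type, and exact argument order are assumptions and may differ between pa-wavplay versions and driver configurations.

    % Sketch: play the decoded feeds ([samples x 12]) through the audio interface.
    fs = 44100;                        % assumed sample rate
    pa_wavplay(feeds, fs, 0, 'asio');  % placeholder device index and driver type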

The ideal decoding condition for the loudspeaker array is when each loudspeaker is equidistant and pair-opposite (taking the shape of a regular dodecahedron).  The irregular shape of the listening room dictated that the loudspeaker array be "warped" to fit within the enclosure, so signal processing (delay, gain, FIR filtering) was used to restore the ideal loudspeaker "image sources" from the perspective of the center listening position.  This warping has the effect of reducing the area of the listening "sweet spot", although, in this case, the effect was found to be minimal.
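As a rough illustration of the distance compensation only (the FIR equalization is omitted), each loudspeaker can be delayed and attenuated so that it appears to lie on a common sphere about the listening position.  The distance values are placeholders, not the measured geometry of the room.

    % Sketch of per-loudspeaker distance compensation (delay + gain only).
    % r: 1 x 12 vector of measured loudspeaker distances in meters (placeholders).
    % fs: audio sample rate in Hz, as above.
    c = 343;                                   % speed of sound, m/s
    r_ref = max(r);                            % align all loudspeakers to the farthest radius
    delay_samp = round((r_ref - r) / c * fs);  % extra delay for each nearer loudspeaker
    gain = r ./ r_ref;                         % attenuate nearer loudspeakers (1/r spreading)
    for k = 1:numel(r)
        feeds(:,k) = gain(k) * [zeros(delay_samp(k),1); feeds(1:end-delay_samp(k), k)];
    end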


Figure: Layout of the loudspeakers in the listening room and position of the "virtual" loudspeakers after signal processing.



This material is based upon work supported by eMPAC at Rensselaer.
© 2004, Paul Henderson