EVERTims is an open source framework for the auralization of 3D models, providing real-time feedback on the prospective acoustics of any given room during its creation.
The framework is based on three components: a Blender add-on, a C++ raytracing client, and a JUCE auralization engine. While the 3D room model is being designed in Blender, the add-on continuously uploads geometry and material details to the raytracing client. Based on this information, the client simulates how acoustic waves propagate through the room, from the source to the listener objects positioned in the Blender scene. The results of this simulation are then sent to the auralization engine, which reconstructs the Ambisonic sound field as experienced at any given listener's position for binaural listening. The framework also takes advantage of the Blender Game Engine to support in-game auralization for a final interactive exploration of the designed model.
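The data flowing from the add-on to the raytracing client can be pictured as in the sketch below: the room as triangles tagged with material names, plus source and listener poses. The class and field names here are purely illustrative, not the actual EVERTims wire format.

```python
# Illustrative sketch of a scene update payload; field names are assumptions,
# not the real EVERTims message schema.
from dataclasses import dataclass, field

@dataclass
class Face:
    vertices: list   # three (x, y, z) tuples describing a triangle
    material: str    # name looked up in an absorption-coefficient table

@dataclass
class SceneUpdate:
    faces: list = field(default_factory=list)
    source_pos: tuple = (0.0, 0.0, 0.0)
    listener_pos: tuple = (0.0, 0.0, 0.0)
```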
EVERTims was originally developed as a collaboration between LIMSI/CNRS and the TKK/Department of Media Technology. It has now become a joint effort between researchers there and at the IRCAM institute.
The raytracing client is a standalone C++ real-time beam tracer, based on the EVERT library, that builds a beam tree for any given source/geometry configuration, with a maximum depth determined by the required reflection order. It uses an iterative refinement procedure: whenever the geometry or the sound source position changes, an approximate beam tree is constructed up to the minimum (parametrized) reflection order. The paths visible to the listener are then sent to the auralization engine while the client keeps computing the solution at the next reflection order, until the chosen maximum order is reached.
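The iterative refinement loop above can be sketched as follows. `trace_beams` and `visible_paths` are stand-ins for the actual EVERT routines, which are not part of this sketch; the point is the scheduling: the beam tree is rebuilt one reflection order at a time, and listener-visible paths are published after each pass so the auralization engine always has an (approximate) solution.

```python
# Minimal sketch of the iterative-refinement scheduling; the tracing
# functions are injected stand-ins, not the real EVERT API.
def refine(scene, source, listener, min_order, max_order,
           trace_beams, visible_paths, publish):
    for order in range(min_order, max_order + 1):
        tree = trace_beams(scene, source, max_depth=order)  # rebuild up to `order`
        publish(visible_paths(tree, listener))              # early, approximate results

# Toy stand-ins so the loop can be exercised outside EVERTims:
published = []
refine(scene=None, source=None, listener=None, min_order=1, max_order=3,
       trace_beams=lambda scene, source, max_depth: list(range(max_depth)),
       visible_paths=lambda tree, listener: tree,
       publish=published.append)
# `published` now holds one (growing) path list per reflection order.
```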
The auralization engine is also a standalone C++ application, developed with the JUCE framework. From the raytracing client, it receives the direction of arrival and path length of each beam impinging on the listener, along with octave-band absorption coefficients for the materials the beam bounced off during its propagation. Each beam is associated with a new audio stream, tapped out of a delay line (delayed and attenuated as the path length grows) and encoded on a set of Ambisonic channels (with spherical harmonic weights derived from its direction of arrival) before being summed with its peers. Octave-band filters are applied prior to the Ambisonic encoding to account for beam-specific absorption. The resulting Ambisonic field is then decoded on a set of virtual speakers for binaural listening.
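The per-beam processing can be illustrated as below: the path length fixes the delay-line tap and a 1/r attenuation, and the direction of arrival fixes the Ambisonic encoding gains. This sketch uses first-order encoding with ACN channel order and SN3D normalization as an assumption; the engine's actual order and convention may differ.

```python
# Hedged sketch of per-beam delay, attenuation, and Ambisonic encoding gains;
# first-order ACN/SN3D is an illustrative choice, not the engine's documented
# configuration.
import math

SPEED_OF_SOUND = 343.0  # m/s

def beam_gains(azimuth, elevation):
    """First-order Ambisonic encoding gains (channels W, Y, Z, X) for a beam
    arriving from (azimuth, elevation), angles in radians."""
    ce = math.cos(elevation)
    return [1.0,                       # W (omni)
            math.sin(azimuth) * ce,    # Y
            math.sin(elevation),       # Z
            math.cos(azimuth) * ce]    # X

def beam_delay_and_gain(path_length, fs=48000):
    """Delay-line tap position (in samples) and 1/r distance attenuation
    for a beam that travelled `path_length` meters."""
    delay = int(round(path_length / SPEED_OF_SOUND * fs))
    gain = 1.0 / max(path_length, 1.0)  # clamp to avoid blow-up near the source
    return delay, gain
```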
The framework uses Blender as its main interface. The EVERTims add-on sits in Blender's tool shelf, letting users define 3D objects as EVERTims elements (room, source, listener, etc.) and launch both the raytracing client and the auralization engine (as Python subprocesses). Any modification of the room's mesh or materials triggers an automatic upload to the raytracing client, as does relocating or reorienting sources and listeners. The add-on seamlessly runs in the Game Engine to provide real-time auralization during the final exploration of the virtual model.
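Launching the two external components as subprocesses can be sketched as below. The commands (and any extra arguments such as ports or config files) are placeholders, not the real EVERTims invocation.

```python
# Hypothetical sketch of spawning the raytracing client and auralization
# engine from the add-on; command lines are placeholders.
import subprocess

def launch_clients(raytracer_cmd, engine_cmd):
    """Start both components as child processes and return their handles,
    so they can be monitored or terminated when the session ends."""
    raytracer = subprocess.Popen(raytracer_cmd)
    engine = subprocess.Popen(engine_cmd)
    return raytracer, engine
```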
First install the EVERTims raytracing client, then the auralization engine, and finally the Blender add-on.
Others have used it; have a look at their projects.
EVERTims is licensed under the MIT license.
You may use the following reference for any research making use of the EVERTims framework:
[pdf] S. Laine, S. Siltanen, T. Lokki, and L. Savioja, “Accelerated beam tracing algorithm”, Applied Acoustics, vol. 70, no. 1, pp. 172-181, 2009
For a complete description of the architecture of the EVERTims raytracing software itself, see:
[pdf] M. Noisternig, B. Katz, S. Siltanen, and L. Savioja, “Framework for real-time auralization in architectural acoustics”, Acta Acustica united with Acustica, vol. 94, no. 6, pp. 1000-1015, 2008