A bit of architecture
It concerns the sound sources located in the 3D environment.
On the C++ side, sound::system manages all the sound sources and knows the location and orientation of the camera. It also holds the general configuration of the mix (gain, number of tracks, etc.).
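To make this more concrete, here is a minimal sketch of what such a class could look like. Everything except the sound::system name itself (the vec3, source, and mix_config types, the member names, the default of 8 tracks) is an assumption for illustration, not the actual code.

```cpp
#include <cstddef>
#include <vector>

namespace sound {

struct vec3 { float x, y, z; };

struct source {
    vec3        position;   // world-space position of the emitter
    float       importance; // priority hint used when tracks are scarce
    const char* file;       // sound file to load when the source becomes active
    bool        active;
};

struct mix_config {
    float       gain;        // master gain of the mix
    std::size_t track_count; // number of tracks available in the sound engine
};

class system {
public:
    void set_camera(const vec3& position, const vec3& forward, const vec3& up);
    void add_source(source s) { sources_.push_back(s); }
    void update(); // called once per frame: compute relations, pick tracks, send messages

private:
    std::vector<source> sources_;
    mix_config          config_{1.0f, 8};
    vec3 cam_pos_{}, cam_fwd_{0.f, 0.f, -1.f}, cam_up_{0.f, 1.f, 0.f};
};

} // namespace sound
```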
At each frame, sound::system computes a series of relations representing the position of each source in the camera's local coordinates (put simply: where the sound sources are from the user's point of view).
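The transform itself is just a change of basis. A sketch, reusing the vec3 type from the previous snippet (the function name to_camera_space is made up):

```cpp
namespace sound {

inline vec3 cross(const vec3& a, const vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

inline float dot(const vec3& a, const vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Position of the source relative to the listener, expressed in the camera's
// basis: x = right, y = up, z = forward ("where is this sound from my point of view").
inline vec3 to_camera_space(const vec3& source_pos,
                            const vec3& cam_pos,
                            const vec3& cam_fwd,
                            const vec3& cam_up) {
    const vec3 right = cross(cam_fwd, cam_up);
    const vec3 d { source_pos.x - cam_pos.x,
                   source_pos.y - cam_pos.y,
                   source_pos.z - cam_pos.z };
    return { dot(d, right), dot(d, cam_up), dot(d, cam_fwd) };
}

} // namespace sound
```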
Since sound::system knows how many tracks the sound engine provides, it can efficiently decide which source to send to which track. For instance, if the number of active sound sources is higher than the number of tracks, sound::system ranks the sources and sends only the closest or most important ones to the sound engine.
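The ranking rule is not spelled out beyond "closest or most important", so the scoring below is only one plausible way to combine the two; the names and the formula are assumptions.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

namespace sound {

// Score a source relative to the listener: closer and more important wins.
inline float priority(const source& s, const vec3& cam_pos) {
    const float dx = s.position.x - cam_pos.x;
    const float dy = s.position.y - cam_pos.y;
    const float dz = s.position.z - cam_pos.z;
    const float dist2 = dx * dx + dy * dy + dz * dz;
    return s.importance / (1.0f + dist2);
}

// Keep at most track_count sources, highest priority first; each kept source
// is then bound to one track of the sound engine for this frame.
inline std::vector<source> pick_for_tracks(std::vector<source> active,
                                           const vec3& cam_pos,
                                           std::size_t track_count) {
    std::sort(active.begin(), active.end(),
              [&](const source& a, const source& b) {
                  return priority(a, cam_pos) > priority(b, cam_pos);
              });
    if (active.size() > track_count)
        active.resize(track_count);
    return active;
}

} // namespace sound
```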
On the #puredata side, the job is simpler: all tracks are identical and react to messages sent by sound::system.
Each sound source knows which sound file to load. This information is sent once, when the source becomes active (activated status).
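The exact message layout is not described here, but conceptually each track might receive something along these lines (addresses, names, and values invented for illustration):

```
/track/3/open  footsteps.wav    <- sent once, when the source becomes active
/track/3/pos   0.5 0.0 -2.1     <- sent every frame: position in camera space
/track/3/gain  0.8              <- sent when the mix configuration changes
```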
sound::system handles two modes of communication:
- In edit mode, it sends OSC messages to Pure Data, the program with the graphical interface. This lets the musician work on the sound live.
- In production, it sends the same messages to libpd internally.
These two modes imply custom objects in Pd that can handle both (see the sketch of the C++ side below).
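A minimal sketch of how the C++ side could hide the two modes behind one interface. The class names are assumptions; the only real API used is libpd's C functions from z_libpd.h, and the OSC path is left to whatever OSC library is in use (e.g. oscpack).

```cpp
#include <string>
#include <vector>

#include "z_libpd.h" // libpd C API

class message_sink {
public:
    virtual ~message_sink() = default;
    // Send a message to a named receiver, e.g. "track3" <- "pos 0.5 0.0 -2.1".
    virtual void send(const std::string& receiver,
                      const std::string& selector,
                      const std::vector<float>& args) = 0;
};

// Edit mode: forward the message as OSC to an external Pd instance,
// so the musician can tweak the patch live.
class osc_sink : public message_sink {
public:
    void send(const std::string& receiver,
              const std::string& selector,
              const std::vector<float>& args) override {
        // Serialize to an OSC packet and send it over UDP with the OSC
        // library of your choice (e.g. oscpack); omitted here.
        (void)receiver; (void)selector; (void)args;
    }
};

// Production: push the same message directly into the embedded libpd instance.
class libpd_sink : public message_sink {
public:
    void send(const std::string& receiver,
              const std::string& selector,
              const std::vector<float>& args) override {
        // Equivalent to sending [; receiver selector args( inside the patch.
        libpd_start_message(static_cast<int>(args.size()));
        for (float a : args)
            libpd_add_float(a);
        libpd_finish_message(receiver.c_str(), selector.c_str());
    }
};
```

sound::system would then only hold a message_sink and never need to know which mode it is running in.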
Thanks to @yacine_sebti for his expertise in pd.
Currently in development in #peel.