GSoC/Network scalability and Blender integration


Application Information

Title: Network scalability and Blender integration

Mentor: Pau Arumí Albó

License: GNU General Public License (GPL)


  • Network Scalability:
The CLAM NetworkEditor is a powerful tool for building complex audio processing units ("networks") without writing a single line of code. But in a large network with many processing boxes, even a simple process can become difficult to understand from the signal flows alone. Subnetworks allow grouping processing boxes into levels, which not only simplifies reading the signal flows but also lets users improve their designs in an object-oriented way, building new processing units by connecting and grouping existing ones.
That is the main idea of the network scalability proposal.
  • Blender integration:
The main goal of the Blender integration proposal is to allow importing and using 3D scene object data within CLAM. This will make 3D model parameters and variables usable in CLAM sound processes, for example to:
  • implement acoustic models / simulations within CLAM
  • define custom predefined trajectories for spatialization plugins
  • define custom 3D parametrizations to apply to any control signal within CLAM networks


This proposal covers two main areas. The first aims to improve the connections and processes in the NetworkEditor interface, and the second to integrate 3D object functionality into the CLAM network. The tasks in each area are fairly independent, so I plan to work on both at the same time.

Network Interface

For the NetworkEditor interface, these are some proposed chronologically ordered tasks:

  • normalize processing box descriptor keys (add missing keys in some objects; improve the factory class key handling so that missing-key errors are managed without causing segmentation faults). Done.
bool CLAM::Factory::AttributeExists (const std::string& key, const std::string& attribute)
// Returns true if the attribute exists in the list "key".
  • improve the user interface for interconnecting signals of the same type. Add contextual menus showing the compatible type pairs available for connection (currently implemented only for the port monitor types).
The interface is still under discussion. Specific things to define:
  • Kind of interface to simplify cable connections between processings
  • ToolTips, messages boxes, and error management
  • Implement named sender/receiver processing boxes for every control and signal type (allowing "wireless" connections).
In progress.
  • improve processing box manipulation in the NetworkEditor GUI. Allow the user to group/lock a set of processing boxes and operate on them as a single object (for copy/paste/duplicate/move, etc.).
Copy, cut and paste are done. Canvas geometries have been added to the network XML store/load management. This still needs refactoring, and the format must be settled, since it will later be extended to manage subnetworks.
  • extend the grouping implementation so the user can save a group as a network and load it back as a closed processing box within another network (a "subnetwork"). Derive the sender/receiver processing boxes to manage subnetwork inputs/outputs.
In progress.
  • improve subnetwork support to allow the user to "enter" a subnetwork (ideally even while playing, to allow managing subnetwork controls in real time)

Blender-CLAM integration

The main goals for the Blender-CLAM integration are:

  • Allow Blender to export scene object parameters to intermediate files (with camera/listener and object/sound-source positions over time), make them readable by CLAM, and use them as an offline spatialization choreography.
  • Allow Blender to send the same parameters in real time over OSC, and CLAM to use them in networks. Adapt the spatialization network examples to use them, and make new ones.
- First Python script successfully sending scene parameters over OSC to a receiving CLAM OSC plugin. Done.
- The sender code needs refactoring and the CLAM OSC receiver needs improvements. Make specific spatializer parameter receiver processings (using SpatDIF?).
  • Improve the Vector Base Amplitude Panning (VBAP) related processing boxes to manage distances (with variable delay lines and optional filters) and different speaker configurations. Allow importing the spatial speaker configuration from Blender scene objects.
  • Make new VBAP-like processing boxes to encode/decode Ambisonics (external LADSPA/VST plugins can be used too)
  • Make a simple OpenGL monitor to display the 3D scene within the NetworkEditor.
  • Make a network prototype which can manage multiple sound sources and:
  • allow the definition of sound source emission directivity
  • spatialize them taking into account sound occlusion by objects (and emission directivity)
  • Make a processing box doing the work of the previous prototype network (at this point, an intermediate step could be to make it a subnetwork, using the proposed scalability improvements).
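As background for the VBAP distance/speaker-configuration tasks above, the core gain computation can be sketched in a few lines. This is a minimal 2-D pairwise VBAP sketch in Python (following Pulkki's matrix formulation), purely illustrative: the function and variable names are my own, not CLAM's processing API.

```python
import math

def vbap_pair_gains(source_az_deg, spk1_az_deg, spk2_az_deg):
    """2-D pairwise VBAP: solve L * g = p for a speaker pair, where the
    columns of L are the speaker unit vectors and p is the source
    direction; then power-normalize so g1^2 + g2^2 = 1."""
    a = math.radians(source_az_deg)
    a1 = math.radians(spk1_az_deg)
    a2 = math.radians(spk2_az_deg)
    p = (math.cos(a), math.sin(a))            # source direction vector
    l11, l12 = math.cos(a1), math.cos(a2)     # speaker unit vectors as
    l21, l22 = math.sin(a1), math.sin(a2)     # columns of the 2x2 matrix L
    det = l11 * l22 - l12 * l21               # invert L explicitly
    g1 = (p[0] * l22 - p[1] * l12) / det
    g2 = (p[1] * l11 - p[0] * l21) / det
    norm = math.hypot(g1, g2)                 # constant-power normalization
    return g1 / norm, g2 / norm
```

A source exactly at one speaker yields gains (1, 0); a source midway between two symmetric speakers yields equal gains. Distance handling (the variable delay lines and filters mentioned above) would be layered on top of these direction gains.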

First tasks needed in this area:

  • in Blender:
  • implement an intermediate scene description file format. SpatDIF over SDIF?
  • implement an Open Sound Control sender in Python scripts
  • implement some 3D scene description format over OSC. SpatDIF?
  • in CLAM:
  • implement the intermediate formats used by Blender (SpatDIF over OSC/SDIF?)
  • make an early, simple OpenGL monitor
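To make the OSC sender task concrete, here is a dependency-free Python sketch that encodes a minimal OSC 1.0 message (null-terminated address and type-tag strings padded to 4-byte boundaries, followed by big-endian 32-bit floats). The address pattern is a made-up example; a real Blender script would send these bytes over UDP to the CLAM receiver, or use an OSC library instead.

```python
import struct

def osc_message(address, *floats):
    """Encode one OSC 1.0 message carrying float arguments."""
    def padded(s):
        # OSC strings are null-terminated and padded to a 4-byte boundary.
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    typetag = "," + "f" * len(floats)          # e.g. ",fff" for 3 floats
    args = b"".join(struct.pack(">f", f) for f in floats)
    return padded(address) + padded(typetag) + args

# Hypothetical address; a scene exporter would send x, y, z per frame.
msg = osc_message("/blender/source/position", 1.0, 2.0, 0.5)
```

The resulting byte string can be handed directly to `socket.sendto` on a UDP socket.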


  • Improve the scene description management to allow X3D/VRML/MPEG4/BIFS use.


  • Milestone 1 (for Blender Integration):
    • place a source and a listener object in Blender. Done.
    • when the source/listener moves, an OSC message is sent to CLAM. Done.
    • CLAM receives an audio source (via JACK) and the source/listener positions via OSC. It then processes these controls and convolves the audio using an HRTF and distance gain compensation.
To be done:
    • The OSC receiver, adapted from the existing processing
    • The OSC to azimuth/elevation conversion
    • If needed, extend the HRTF database processing.
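The OSC-to-azimuth/elevation conversion step above can be sketched as follows. This is an illustrative Python sketch, not the CLAM processing itself; the coordinate convention (listener facing +Y, +Z up, listener rotation ignored) is an assumption a real implementation would need to pin down against Blender's axes.

```python
import math

def listener_relative_angles(source_pos, listener_pos):
    """Turn absolute (x, y, z) positions, as sent over OSC, into
    azimuth/elevation in degrees plus distance, relative to the listener."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dz = source_pos[2] - listener_pos[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Assumed convention: listener faces +Y, +Z is up, azimuth grows clockwise.
    azimuth = math.degrees(math.atan2(dx, dy))
    elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return azimuth, elevation, distance
```

The returned distance can drive the distance gain compensation mentioned above (e.g. an attenuation proportional to 1/distance), while azimuth/elevation index the HRTF database.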
  • Subnetwork implementation milestones:
    • Milestone 1:
      • Decouple MainWindow and Canvas (basically, the NetworkEditor) from Network using an abstract BaseNetwork class (i.e. an interface). Rename Network to FlattenedNetwork.
      • Improve the "graph getter" interface (used by Canvas and FlowControl)
      • Refactor FlowControl so it has a single GraphChanged() method
    • Milestone 2:
      • Create a new Network class and duplicate the graph model, using IDs that refer to the flattened network. Each graph change is automatically synchronized with the flattened network.
    • Milestone 3:
      • introduce subnetworks with a true Composite pattern, thoroughly unit tested
    • Milestone 4:
      • User Interface, and complex workflows (like create subnetwork)
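The Composite idea behind Milestones 2 and 3 can be sketched briefly: subnetworks contain either leaf processing boxes or other subnetworks, and flattening prefixes child names to build the IDs that refer back to the flattened network. This is a minimal Python sketch; the class names echo the milestones above, but the code is illustrative, not the actual C++ design.

```python
class BaseNetwork:
    """Abstract interface that Canvas/MainWindow would code against."""
    def processing_names(self):
        raise NotImplementedError

class ProcessingNode(BaseNetwork):
    """Leaf: a single processing box."""
    def __init__(self, name):
        self.name = name
    def processing_names(self):
        return [self.name]

class SubNetwork(BaseNetwork):
    """Composite: children are leaves or other subnetworks. Flattening
    prefixes nested names, mimicking flattened-network IDs."""
    def __init__(self, name):
        self.name = name
        self._children = []
    def add(self, child):
        self._children.append(child)
    def processing_names(self):
        names = []
        for child in self._children:
            for n in child.processing_names():
                # leaves keep their plain name; nested subnetworks get a prefix
                names.append(n if isinstance(child, ProcessingNode)
                             else f"{child.name}.{n}")
        return names
```

For example, a "reverb" subnetwork holding "delay" and "filter" boxes flattens to the names "reverb.delay" and "reverb.filter", so a FlowControl working on the flattened graph never needs to know about the hierarchy.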

Related TODOs pages:

Pending tasks, proposed features and improvements

  • Define the subnetworks implementation:
Related tasks:
- manage groups on the canvas; load and save groups from files
- implement management of several opened networks on the canvas?
- implement the port and control sinks/sources and senders/receivers
- define the semantics and hierarchy for subnetwork management
  • Pau's proposed refactoring:
change the current (CLAM::Network):
typedef std::map< std::string, Processing* > ProcessingsMap;
to:
struct ProcessingInfo { Processing* p; Geometry g; };
typedef std::map< std::string, ProcessingInfo > ProcessingsMap;
merging the processings and geometries maps retrieved by the canvas.
  • Change and simplify the "connect to" context menus
  • (Low priority) Improve the display of embedded Faust diagrams (antecedents)

(optional) Blogging

(optional) Selected posts

The most relevant ones related to the project

Tasks prior to the coding period

Related links

  • CLAM:
  • Wiki:
  • C++
  • Python
  • Blender:
  • Catalog: a large listing of scripts, grouped by function, one page per script.
Structures, formats & protocols:
