World Music's DIVERSITY and Data Visualisation's EXPRESSIVE POWER collide. A galaxy of INTERACTIVE, SCORE-DRIVEN instrument model and theory tool animations is born.

Thursday, August 11, 2016


A Reusable Visualisation Framework

A spinoff of this proof-of-concept is that the framework handling the loading and initial display of visual components is entirely domain-independent, opening up the possibility of reuse in other timeline-driven applications.

Big, brave, open-source, non-profit, community-provisioned, cross-cultural and hanging-from-the-chandeliers crazy. → Like, share, back-link, pin, tweet and mail. Hashtags? For the crowdfunding: #VisualFutureOfMusic. For the future live platform: #WorldMusicInstrumentsAndTheory. Or simply register as a potential crowdfunder.

A Reusable Music Visualisation Framework. #VisualFutureOfMusic #WorldMusicInstrumentsAndTheory

A Reusable Framework


Reuse in other timeline-driven applications?

This means that users will be able to entirely personalise their environment, yet still have full interactivity between their animation-driving source and their selected animations.

Moreover, since any synchronisation between possibly several remote nodes in a given learning cluster would likely be made with reference to a common, shared driving protocol, differing animations could be used at each end node. This reflects a common practice among traditional instrumentalists, where, say, a fiddler teaches an accordionist or vice versa.

Orchestration is another obvious target, but my feeling is that there could be many other potential synergies, especially in the area of real-time monitoring.

What do we mean by reusable?

With no knowledge of what is being loaded (this is governed entirely by the content of data files), individual animations can be loaded into the central animation panel.

You can liken this to a festival stage, where all manner of acts can be accommodated, but with each supplying all its own needs. Moreover, the stage has space for more than one act at a time.
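The data-driven loading described above can be sketched as follows. All names here (`AnimationDescriptor`, `loadPanel` and the descriptor fields) are illustrative assumptions, not the framework's real API; the point is only that the loader reads descriptors without inspecting what each animation draws.

```typescript
// Each "act" on the stage is described entirely by data.
interface AnimationDescriptor {
  id: string;       // unique name, e.g. "fiddle-fingerboard"
  source: string;   // where the component's own code and data live
}

interface StagePanel {
  mounted: string[]; // ids of currently mounted animations
}

// The loader only walks the descriptor list; it never interprets
// what any animation will actually display.
function loadPanel(descriptors: AnimationDescriptor[]): StagePanel {
  const panel: StagePanel = { mounted: [] };
  for (const d of descriptors) {
    // A real framework would fetch d.source and attach SVG here;
    // this sketch only records the mount.
    panel.mounted.push(d.id);
  }
  return panel;
}

// More than one act on stage at a time:
const panel = loadPanel([
  { id: "fiddle-fingerboard", source: "fiddle.json" },
  { id: "circle-of-fifths", source: "theory.json" },
]);
```

Because the loader never branches on component type, adding a new model or theory tool means adding a data file, not changing framework code.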

These models and tools are then animated by the timeline protocol entirely independently of the framework. In other words, the timeline source's data file contents are relevant only to animation interfaces, not to the loading framework.
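That separation might look like the sketch below: a dispatcher relays timeline events verbatim, and only the registered animation interfaces give the payload any meaning. The names (`TimelineEvent`, `Dispatcher`) and the event shape are assumptions for illustration.

```typescript
interface TimelineEvent {
  time: number;     // position in the score (ticks or seconds)
  payload: unknown; // meaningful only to animation interfaces
}

type AnimationHandler = (e: TimelineEvent) => void;

class Dispatcher {
  private handlers: AnimationHandler[] = [];

  register(h: AnimationHandler): void {
    this.handlers.push(h);
  }

  // Events are forwarded untouched: the dispatcher has no idea what
  // the payload means, mirroring the framework/animation split.
  emit(e: TimelineEvent): void {
    for (const h of this.handlers) h(e);
  }
}

const seen: number[] = [];
const dispatcher = new Dispatcher();
dispatcher.register(e => seen.push(e.time));     // one animation
dispatcher.register(e => seen.push(e.time * 2)); // a different one
dispatcher.emit({ time: 3, payload: { note: "A4" } });
// seen is now [3, 6]
```

Two quite different animations can thus sit side by side, each reacting in its own way to the same timeline source.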

The framework itself is extremely lightweight. Since the range of GUI elements used is tiny (and so quickly built using SVG), there has so far been no need for pre-built widgets, and hence no need for a large 'bloatware' framework.

Furthermore, with direct, physical URLs, routing is superfluous.
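To illustrate how little is needed, a simple control can be generated directly as SVG markup. The `svgButton` helper and its sizing rule are invented for this sketch; they are not part of the framework.

```typescript
// Build a minimal SVG "button" as a markup string: a rounded rect
// with a centred label. No widget library involved.
function svgButton(label: string, x: number, y: number): string {
  const w = 8 * label.length + 16; // rough width from label length
  return [
    `<g transform="translate(${x},${y})">`,
    `<rect width="${w}" height="24" rx="4"/>`,
    `<text x="${w / 2}" y="16" text-anchor="middle">${label}</text>`,
    `</g>`,
  ].join("");
}

const btn = svgButton("Play", 10, 10);
```

In a browser, such a string can be attached to the DOM (or built with a library such as D3); either way, no heavyweight GUI toolkit is required.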


If the framework is to be used on devices with less screen real estate, some control over the arrangement (and possibly scope) of information displayed will be necessary, at all levels.

This goes beyond the remit of the proof-of-concept and of 'responsive' design. My feeling is that on the smallest of devices, only one of several possible contexts will be supported, in the form of a restriction on the range or number of models.
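One way to frame such a restriction is a simple cap on concurrently displayed models, keyed to viewport width. The function name and breakpoints below are invented for illustration, not a design decision from the project.

```typescript
// Cap the number of concurrently displayed models by viewport width.
function maxModels(viewportWidth: number): number {
  if (viewportWidth < 480) return 1;  // phone: a single model/context
  if (viewportWidth < 1024) return 2; // tablet: a reduced arrangement
  return 4;                           // desktop: the full stage
}
```

The festival-stage analogy still holds: the stage simply shrinks, admitting fewer acts at once rather than squeezing every act into less space.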


About Cantillate -

Autodidact. Laird o' the Windy Wa's. Serial Failure with Attitude. Bit of a Dreamer.

Comments, questions and (especially) critique welcome.