World Music's DIVERSITY and Data Visualisation's EXPRESSIVE POWER collide. A galaxy of INTERACTIVE, SCORE-DRIVEN instrument model and theory tool animations is born. Entirely Graphical Toolset Supporting World Music Teaching & Learning Via Video Chat ◦ Paradigm Change ◦ Music Visualization Greenfield ◦ Crowd Funding In Ramp-Up ◦ Please Share

Wednesday, September 21, 2016


World Music Visualisation Platform: The Provisioning And Integration Process

The processes underlying the provisioning (or population) and usage of our world music visualisation aggregator platform will need to be simple to understand. Here I try to gather some thoughts. Driven by the practical needs of initial configuration building and users' later (in part literally) fine-tuning, I envisage a split into two processes.

Potential Crowdfunder?

The initial or base configuration process is concerned with establishing broad settings. For instruments, these would include, for example, scale or channel lengths, number of courses or channels, and, for wind instruments, generic (positional, instrument-tuning-independent) fingering layouts.

The basic provisioning applies to musical instruments and theory tools, plus any esoteric, storytelling, psychophysics or other applications for which model reuse or extension is an objective.

The second or fine-tuning part would relate to the setup for specific use by a user, and cover things like tuning and the use of aids such as a capo.

The initial configuration process will add configuration details to instrument classification trees, and so relies on an underlying classification system. Put the other way around: where there is no classification, there is no order, and hence presumably no point in reuse. In some cases, such as for music theory, such classification systems do not (yet) officially exist.

Working out from a given generic instrument model or theory tool form, knowledgeable users can run through unvisited configurations of instrument, theory tool or other animations, storing each to a unique position in the underlying classification tree.

Each instrument within a given family is stored only once, but can of course be associated with multiple instrument names.

For simple instruments such as those of the lute family, this means that an instrument with (for example) 5 double courses might be defined and associated with the bluegrass banjo and cittern, or (with a different scale length) the charango and mandore - amongst many others.
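The lute-family idea above can be sketched in code: one base configuration stored at a unique position in a classification tree, with several instrument names pointing at it. This is a minimal sketch, assuming hypothetical field names (`courses`, `stringsPerCourse`) and a loosely Hornbostel-Sachs-style path; the real platform's data model may differ.

```javascript
// Sketch: store one base instrument configuration at a unique position
// in a classification tree, then associate several names with it.
const classificationTree = {};

function storeBaseConfig(path, config) {
  // path e.g. ['chordophones', 'lutes'] — illustrative classification keys
  let node = classificationTree;
  for (const key of path) {
    node.children = node.children || {};
    node = node.children[key] = node.children[key] || {};
  }
  node.config = config;          // the configuration is stored only once
  node.names = node.names || []; // but many instrument names may map to it
  return node;
}

const lute5x2 = storeBaseConfig(['chordophones', 'lutes'], {
  courses: 5,
  stringsPerCourse: 2,
});
lute5x2.names.push('bluegrass banjo', 'cittern', 'charango', 'mandore');
```

The key property is the one stated above: each configuration exists once in the tree, while name associations are cheap and plural.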

Big, brave, open-source, non-profit, community-provisioned, cross-cultural and batshit crazy. → Like, share, back-link, pin, tweet and mail. Hashtags? For the crowdfunding: #VisualFutureOfMusic. For the future live platform: #WorldMusicInstrumentsAndTheory. Or simply register as a potential crowdfunder.

For instruments, user-governed characteristics such as tunings or application of a capo belong within the 'usage' part of the process, and can be stored -and retrieved- along with instruments as user preferences.
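The split between shared base configuration and per-user 'usage' settings might look like this. A minimal sketch, assuming illustrative field names (`tuning`, `capoFret`) — not a fixed platform API.

```javascript
// Sketch: the shared base model is stored once in the classification
// tree; user-governed settings live separately, as user preferences.
const baseConfig = { family: 'lute', courses: 4, stringsPerCourse: 2 };

const userPrefs = {
  instrument: 'mandolin',
  tuning: ['G3', 'D4', 'A4', 'E5'], // one named pitch per course
  capoFret: 2,                      // a usage-time aid, not a base property
};

// A playable setup is simply the base model plus the user's preferences.
function playableSetup(base, prefs) {
  return Object.assign({}, base, prefs);
}

const setup = playableSetup(baseConfig, userPrefs);
```

Keeping the two apart means preferences can be stored and retrieved per user without ever duplicating the instrument definition itself.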

Looking at the upper part of the diagram in a little more detail, we can see how a single menu selection brings us into the realm of instrument configuration.

The process is initiated from a so-called 'Builder' menu governing specific types of instrument (stringed, wind, keyed, brass and vocal). These in turn bring us to in-tool menus governing specific instrument families. For stringed instruments, these might (amongst others) include generic models for lutes, harps and various types of zither, plus any hybrids.

Instrument forms are described under (amongst others) the Hornbostel-Sachs instrument classification system, but this specifically omits any mention of musical characteristics.

Our provisioning processes add musical function (musical characteristics) to these definitions - in the form of number of strings or channels, scale lengths, temperament or intonation, and tunings. Ultimately these determine what musical behaviour the instrument exhibits, both statically, in the form of finger- or keyboard roadmaps, and dynamically, under real playing conditions.

Our visual models are, however, based on the laws of physics. In this sense we can talk of three constituent parts to any model: form, function and physics representing respectively instrument construction, interface configuration, and the underlying physical properties.
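The three-part split (form, function, physics) can be expressed as plain data. A sketch only, using a violin-family example; the field names are assumptions, not the platform's schema.

```javascript
// Sketch: the three constituent parts of any visual model.
const fiddleModel = {
  form: {                  // instrument construction
    body: 'violin',
    scaleLengthMm: 328,
  },
  function: {              // interface configuration
    courses: 4,
    tuning: ['G3', 'D4', 'A4', 'E5'],
  },
  physics: {               // underlying physical properties
    // fundamental frequencies of the open strings, in Hz
    openStringHz: [196.0, 293.66, 440.0, 659.25],
  },
};
```

A renderer would read `form` to draw the instrument, `function` to lay out the interface, and `physics` to drive the animation's dynamic behaviour.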

Integration Process

This is an evolving topic. What follows is simply a first cut -a suggestion- as to how community provisioning might function.

Commons-based peer production rests on three principles.
  • First, the potential goals of peer production must be modular. In other words, objectives must be divisible into components, or modules, each of which can be independently produced. That allows participants to work asynchronously, without having to wait for each other's contributions or coordinate with each other in person.
  • Second, the granularity of the modules is essential. Granularity refers to the degree to which objects are broken down into smaller pieces (module size). Different levels of granularity will allow people with different levels of motivation to work together by contributing small or large grained modules, consistent with their level of interest in the project and their motivation.
  • Third, a successful peer-production enterprise must have low-cost integration—the mechanism by which the modules are integrated into a whole end product. Thus, integration must include both quality controls over the modules and a mechanism for integrating the contributions into the finished product at relatively low cost.
Two more or less parallel processes are foreseen, applying respectively to internal and community-submitted code.

o Platform-internal (closed source), pink in the diagram below
o Community provisioned (open source), blue.

These will be identical in all but the degree of platform/framework access, yet for (I hope) obvious reasons separate.

Animation code (that which runs in the animation window) is de facto open source. This includes example animation or template code provided by the platform's own development team.

Process Description

It makes sense to approach both planning and cooperation from a visual perspective.

All open-source code will be subject first to community and then to team/domain-expert scrutiny and acceptance prior to its integration into the platform itself. The primary goal is to bring, under a rigorous reuse regime, robust base instrument models and theory tool versions online which can then be progressively extended and improved on.

Familiarity with in-browser (and hence close-to-DOM) development tools is more or less essential. These are clear, well-documented, free, and powerful. We will be able to make recommendations, but in as far as the results conform to expectations, community members are free to use any development environment they choose.

A simulator (to be provided) is used to feed music data derived from MusicXML (as if from a score during playback) to the new animation at speeds under user control.

Code submissions will be made via an open-source code repository such as GitHub, using its git workflows.
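The simulator idea described above - replaying score-derived note events at a user-controlled tempo - can be sketched as a pure scheduling step. The event shape (`timeMs`, `pitch`) is an assumption; real MusicXML-derived data would carry far more detail.

```javascript
// Sketch: scale the playback times of note events (as might be derived
// from a MusicXML score) by a user-controlled speed factor.
function scheduleEvents(events, speed = 1.0) {
  // speed 0.5 = half tempo, so each event lands twice as late
  return events.map(ev => ({
    pitch: ev.pitch,
    playAtMs: ev.timeMs / speed,
  }));
}

const score = [
  { timeMs: 0, pitch: 'C4' },
  { timeMs: 500, pitch: 'E4' },
];

const slow = scheduleEvents(score, 0.5); // practise at half speed
```

A driver loop (e.g. using `requestAnimationFrame` in the browser) would then dispatch each event to the animation when its `playAtMs` is reached.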

These will then run through a secondary, platform test phase to check that behaviour and performance are acceptable, but also to ensure platform and user integrity and security are not compromised.

On passing the module test phase, the code is released for integration testing. Here, the primary goal is assuring platform stability under a range of operational conditions. Once these tests are passed, the code is freed for integration into production and any recovery systems.

In this sense, it is important to see each instrument family as complete ONLY when all possible musical configurations have been incorporated. This is analogous to the mountaineer's goal of arriving not just on the summit, but moreover back at base camp with team and health intact.

Code Templates


Template code will provide the standard interface shared by all animations aspiring to mutual, dynamic configuration via (for example) a radar chart or slider set.

The musical qualities configurable through this interface can be split into two nominal groups: those gravitating towards abstraction (can be directly used in theory tools, no information redundancy), and those associated more with instruments (with possible information redundancy).

To the former belong parameters such as number of notes or tones per octave, and the temperament or intonation system to be used.

To the latter belong the number of channels, rows or courses, general layout, tunings and effective scale length.
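The shared template interface and its two parameter groups might be sketched as follows. The group and parameter names are illustrative assumptions; a radar chart or slider set would simply read and write these values.

```javascript
// Sketch: a template animation's configuration, split into the two
// nominal groups described above.
const animationConfig = {
  abstract: {                 // theory-leaning, no information redundancy
    tonesPerOctave: 12,
    temperament: 'equal',
  },
  instrument: {               // instrument-leaning, possible redundancy
    courses: 6,
    layout: 'lute',
    scaleLengthMm: 650,
    tuning: ['E2', 'A2', 'D3', 'G3', 'B3', 'E4'],
  },
};

// A slider or radar-chart axis bound to one parameter just updates it:
function setParam(config, group, key, value) {
  config[group][key] = value;
  return config;
}

setParam(animationConfig, 'abstract', 'tonesPerOctave', 24); // quarter tones
```

Because every animation exposes the same interface shape, configurations can be driven mutually and dynamically across instruments and theory tools.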

Example and/or template animation code will be sufficiently well documented that developers can use it as a base for their own work.

Any community member is free to extend a template for use in a new animation to be submitted by means of an open-source code repository. In this way, the code is available for reuse in other contexts, thereby raising its primary, i.e. social, value.

Code Expectations

Code must be cross-browser compatible (IE 9+, Chrome, Safari, Firefox, Opera) and cross-platform compatible (desktop, tablet, smartphone, AR or VR headsets).

A significant challenge, perhaps, but one which is progressively succumbing to advances in web stack standards and technologies.

Buzzwords? Web components, Material Design, Web Animation (new standard), and some well established technologies, such as WebGL and our old friend D3.js.


o Continuous delivery model
o Initially, delivery cycles for each main family of instrument and theory tool

Once the provisioning model has been proven viable, we can move to the parallel production described in the three principles of commons-based peer production.

We can expect one of the first steps to be to identify what modifications are required across various classifications such as genre, instruments and theory groupings, and to cluster these by implementation cycle.

Here, for example, we see a series of projects clustered by categories affected (category A, for example, might represent 'Genre').

In this way we have a clear picture of our rollout priorities and (with a little extra labelling) timeline.

This could, indeed, be made dynamic in that projects are progressively added and removed depending on their active status: effectively a continuous release pipeline.
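The dynamic, category-clustered pipeline described above can be sketched as a simple grouping over active projects. Project names and category labels here are invented for illustration.

```javascript
// Sketch: cluster active projects by the classification category they
// affect, giving each implementation cycle a clear scope.
const projects = [
  { name: 'flamenco ornaments', category: 'Genre',       active: true },
  { name: 'oud base model',     category: 'Instruments', active: true },
  { name: 'maqam wheel',        category: 'Theory',      active: false },
];

function clusterActive(list) {
  const clusters = {};
  for (const p of list.filter(p => p.active)) {
    (clusters[p.category] = clusters[p.category] || []).push(p.name);
  }
  return clusters;
}

const pipeline = clusterActive(projects);
```

As projects change status, rerunning the clustering yields the current release pipeline; inactive projects simply drop out.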

Visualisation Models and Tools

Take a look at this blog's Instrument Models and Theory Tools menus (L & R). These will give you some idea of the modelling scope of this project.

o application across the entire spectrum of instrument or theory tool configuration.
o code reuse is paramount.
o code clarity and simplicity. What can be understood can be improved.
o by default, unsupported configurations (instruments or tools) provoke a warning.
o changes at a lower model level cause a refresh at all higher levels.
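The last two rules above - warn on unsupported configurations, and cascade refreshes upward from the changed level - can be sketched together. The level names and their ordering are assumptions for illustration.

```javascript
// Sketch: a change at a lower model level triggers a refresh at that
// level and at all higher levels; unknown levels provoke a warning.
const levels = ['physics', 'function', 'form']; // lowest to highest
const refreshed = [];

function refreshFrom(changedLevel) {
  const start = levels.indexOf(changedLevel);
  if (start === -1) {
    // default behaviour for unsupported configurations: warn, don't fail
    console.warn('Unsupported level: ' + changedLevel);
    return;
  }
  for (const level of levels.slice(start)) {
    refreshed.push(level); // a real model would redraw this layer here
  }
}

refreshFrom('function'); // refreshes 'function', then 'form'
```

The same pattern generalises to any layered model: invalidate from the changed layer upward, never downward.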


o JavaScript: minimum ES5, where possible ES6 compliance.
o Given its flexibility and DOM affinity, D3.js is the library of choice for data-driven elements
o D3 is not a compatibility layer, so if your browser doesn't support standards, tough shit.
o Compatibility across main platforms and browsers
o In particular, no jQuery, bloatware, unstable and obsolete platforms


o documentation should -as with the animations themselves- primarily be visual/graphical (as a direct aid to cross-cultural understanding). The less text the better.
o these diagrams should mirror the physical layered construction - an 'exploded view' is perfect
o indicated library calls (shown as layer labels) resulting in the modelling of a given layer should correspond 1:1 with the library's API documentation.
o good for first-cut diagrams is the free online tool


As explained elsewhere, I envisage a more or less community-managed, moderated and provisioned system with social value, accountability and transparency at its centre. With a mix of commons-based peer production and community-commissioned work, I hope it will challenge legacy, revenue-driven business models with a social-value-driven one.

This is not to say community-commissioned work should go unpaid: for challenging implementations, there is a case for staged financing through a combination of crowdfunding, micropayments, grants and donations, with core work carried out contractually by domain experts.

Whatever the route taken, the provisioning process will be well suited as a playpen for students of graphical development for the web.


Last but not least, the driver scores. A few observations:

o there are many MusicXML score collections worldwide, covering a large range of open source (cultural heritage) works.

o While the number of cultural works featured is limited, platforms such as Noteflight are helpful in re-democratising music. Open-source material is quickly scored and distributed, and the resulting MusicXML is as suited as any other to use within this platform.

o it would be helpful if a solid open-source exercise collection applicable to a wide range of instruments were established. These should perhaps emphasise the interval and modal qualities of various world music genres, and act as a stepping stone to musical virtuosity.

o For further information on anything relevant to notation, try this forum.


online music learning,
online music lessons
distance music learning,
distance music lessons
remote music lessons,
remote music learning
p2p music lessons,
p2p music learning
music visualisation
music visualization
musical instrument models
interactive music instrument models
music theory tools
musical theory
p2p music interworking
p2p musical interworking
comparative musicology
world music
international music
folk music
traditional music
P2P musical interworking,
Peer-to-peer musical interworking
WebGL, Web3D,
WebVR, WebAR
Virtual Reality,
Augmented or Mixed Reality
Artificial Intelligence,
Machine Learning
Scalable Vector Graphics,
3D Cascading Style Sheets,

Comments, questions and (especially) critique welcome.