Mass-interaction methods for sound synthesis, and more generally for digital artistic creation, have been studied and explored for over three decades by a multitude of researchers and artists. However, for a number of reasons this research has remained rather obscure, subsequently overlooked and often considered the odd one out among physically-based synthesis methods, many of which have grown exponentially in popularity over the last ten years. In the context of a renewed research effort led by the authors on this topic, this paper aims to reposition mass-interaction physical modelling in the contemporary fields of Sound and Music Computing and Digital Arts: what are the core concepts? The end goals? And more importantly, which relevant perspectives can be foreseen in this day and age? Backed by recent developments and experimental results, including 3D mass-interaction modelling and emerging non-linear effects, this reflection casts a first canvas for active, and resolutely outreaching, research on mass-interaction physical modelling for the arts.

# Introduction

This paper intends to express a refreshed vision of the use of mass-interaction modelling for real-time sound synthesis and interactive digital arts. After positioning some general concepts, we briefly document how a step aside from sound-based considerations has led to new grounds for investigating the potential of mass-interaction physical modelling. We then present multi-dimensional geometry as a starting point for any kind of mass-interaction model (in terms of mathematical roots, modelling methodologies and performance), and finally discuss the relevance of this approach for the modelling and real-time simulation of virtual acoustical structures that present emergent non-linear behaviour.

# Physical Modelling: why bother?

When considering the term physical modelling as it is used in a large number of research fields, the first and most fundamental question to arise is:

*Why would one painstakingly reproduce the physical behaviour (phenomenological approach), or even a virtual representation (causal approach), of a well characterised real physical object?*

In most of these fields the answer will have to do with saving lives, or with significantly increasing our understanding of the world, from quarks to universe(s?). But when it comes to sound synthesis, this question might be slightly more difficult to answer, as it becomes:

*Why reproduce - and try to play - a virtual instrument that mimics a real instrument that one could play in real life?*

Over the years, the Computer Music community has developed very serious and meaningful arguments justifying this approach, such as those recalled by Bilbao and Smith in the opening chapters of their respective books [1, 2]. The resulting research has subsequently led to the development of a rich variety of techniques [3-5], and continues to do so to this day.

However, if we allow ourselves to take a more poetic stance, the above question could subjectively be deemed irrelevant, to the benefit of the following:

*How could our most common day-to-day experience of the physical world inspire and ease an artistic process in its digital counterpart?*

The entire approach described hereafter takes root here. Although it is anchored in a physical paradigm, its main focus is not the common “realistic sound certified by impeccable evaluation methodology” achievement. It approaches physically-based synthesis from a different angle, oriented mostly towards intuitive, interactive and multisensory exploration - without excluding methodological questioning of its validity along the way. Here, physical modelling consists in designing and experimenting with virtual mechanical constructions, with the aim of discovering and crafting a range of uncanny sound-producing objects that can be directly explored and interacted with.

# A world of interacting masses

All approaches for modelling the physics of macro-scale mechanical systems^{1} stem from a common root: Newton and his laws. The mass-interaction (MI) paradigm is one of many ways to transcribe these laws into discrete time and space algorithms that allow for the computation of physical dynamics. It does so by representing physical models as networks of mass-type elements and interaction-type elements [6].

## A balancing act

While MI is probably not the most computationally efficient physical modelling approach, nor the most elegant in terms of mathematical formulation, and - let’s face it - not the most suited paradigm for synthesising supra-realistic sounds of legendary acoustic instruments^{2}, it possesses four undeniable qualities: genericity, modularity, ease of use given an intuitive understanding of physics, and very direct possibilities for gestural interaction [7]. These traits are especially important when considering the perspective of making sound synthesis tools accessible to everyone. Ultimately, our choice to further investigate mass-interaction physical modelling is motivated by the conviction that it strikes a “good balance”: a generally satisfying compromise that still yields strong potential regarding, on the one hand, scientific and technological considerations, and on the other, artistic and creative perspectives. And while such a balance is not always simple to maintain, it is central to the authors’ research methodology.

## Modular Physical Modelling

### Modularity in Artistic Creation

Generally speaking, modularity qualifies any physical or abstract system with regard to the capacity of its irreducible elements to connect to one another and thus achieve more complex objects or functionalities (cf. fig.1). Max Mathews [8] said of the modularity concept (referring to the unit generators of MUSIC III):

*“It’s a very important concept, and more subtle than it appears on the surface. I wanted to give the musician a great deal of power and generality [...], but at the same time I wanted as simple a program as possible; I wanted the complexity of the program to vary with the complexity of the musician’s desires. [...] The only answer I could see was not to make the instruments myself [...] but rather to make a set of fairly universal building blocks and give the musician both the task and the freedom to put these together into his or her instruments.” *

This concept is specifically valued in recreational contexts (cf. fig.2) and creative fields. As examples of modular systems in musical creation, one could cite modular analog sound synthesis or patching environments such as Max/MSP, PureData and ChucK. In other creative fields, such as graphic design and video, one could think of Processing^{3}, Quartz Composer, vvvv^{4} and, more recently, NodeBox^{5} - all of which have led to the emergence of novel artistic processes and to the development of important user communities.

### A modular approach to physical modelling

In the scope of this paper, modularity is the most fundamental *a priori* that the authors will maintain. It is in fact the central pivot around which revolve all efforts to find true meaning in mass-interaction physical modelling. It is the necessary condition for achieving a vast range of emerging behaviours, and for being struck by surprise (and we are not referring to numerical instability!) each time the smallest part of a model is modified. It is here that one’s creativity is truly set in motion.

Of course, such promising perspectives cannot be expected without a challenging counterpart. We won’t lie: it might be a little more complex (what a reassuring euphemism) to handle fully modular rather than non-modular approaches. But, if one gives it a little thought, even if the learning curve can be steep, fully-modular systems can always be approached by starting from the simplest possible combination of elements and set of parameters. From there, one can - in one’s own good time - progressively build a knowledge base guided (at first) mostly by very elementary concepts, in our case relating to mechanical physics.

### Does modular physical modelling really pay off?

One might think that this enthusiasm for modularity could be curbed by the computational complexity involved, as modularity generally limits possibilities for optimisation or algorithmic “shortcuts”. However, with regard to the state of the art of physical modelling approaches (in terms of computation, richness of sounds, dealing with non-linearities, etc.), and given the recent sound synthesis experimentation with 3D mass-interaction models described hereafter, the position of mass-interaction physical modelling appears relatively solid. Moreover, when adopting the modular “creative” modelling mindset, complexity and computational cost can be tempered by a number of factors. Indeed, while model scale does have a notable impact (especially for structures such as dense plates, e.g. the cymbal), there is a good chance that the sonic essence you (maybe didn’t even know that you) were looking for can *in fact* be obtained with a simpler model than the physical system you *would have* imagined^{6}, resulting in much simpler computation than the discretised solution to a complex mathematical representation of such a system.

Finally, all of the above considerations bring about yet another question:

*If a modular physical modelling paradigm can faithfully^{7} represent any physical system that can be described by tridimensional point-based Newtonian mechanics, shouldn’t it naturally bring forth many of the emergent non-linear characteristics found precisely in the spatial and physical structure of such systems?*

If so, could such an approach to non-linearities, crucial factors in the richness of synthesised sounds [9], be easier to apprehend, manipulate and pass on to artists than the common route of advanced mathematical formulations and finite difference schemes? We return to this discussion in a later section.

## Here is the motto

In short, the authors still see mass-interaction physical modelling as very fertile ground to explore, especially through the prism of digital artistic creation. However, in contrast with much previous work and the significant discoveries already made on the subject [6, 10, 11], the motto here has to be explicitly clarified:

Each and every core concept, component or experimental paradigm must be openly stated, positioned and questioned within a global scientific framework.

Every model, each line of code, will be shared so as to allow for a community-driven artistic, scientific and technological reflection regarding this topic.

# Observe, then interact, then - and only then - listen

## Stepping aside from a sound-based approach

The mass-interaction (MI) approach presented here allows the modelling question to be addressed from any kind of preliminary phenomenological consideration. In simpler terms, one could create a virtual physical model with the sole aim of observing a visual rendering of its motion through time, or with the intent of exploring the properties of an interaction model, or further still by considering the motion of a virtual mechanical deformation as a sound source. Ultimately, every single object designed in MI with a specific idea and modality in mind can be considered (as is) through its complementary modalities.

This property gives MI strong potential in the fields of sound synthesis, interaction modelling and visual rendering, for creating multisensory virtual objects. Of course, this raises questions as to the various contexts in which one can build and simulate such objects.

The following section describes a scenario followed by the authors, stemming from visual considerations in a visual rendering software environment, and progressively leading to the exploration of the resulting objects for their acoustical properties and playability (including via haptic interaction) - all within this same environment.

## Computing and rendering mechanical motion

Mass-interaction physical modelling has long been studied and used within the domain of visual arts^{8}, resulting in a string of concepts and tools [12, 13]. On the basis of these visual considerations and the will to increase the accessibility of MI modelling through open-source software, the authors recently developed *miPhysics*, a compact Java library initially targeting the Processing environment (a software sketchbook and language widely used for prototyping and creating visual and interactive arts). This tool was naturally written to allow designing and computing the motion of point-based models, described as an arrangement of masses and interactions in up to 3 spatial dimensions, with each mass possessing up to 3 degrees of freedom.

The flexibility of a framework such as Processing allows for efficient interactive modelling and naturally leads to considering aspects such as real-time parameter control, *on-the-fly* topology/geometry alterations, as well as numerous rendering methods for large models. Despite the fact that it is, in essence, a prototyping environment (and therefore not necessarily well optimised for complex scene rendering), large-scale models composed of tens of thousands - and sometimes over a hundred thousand - elements run in real time (cf. fig.3) at physics computation rates from 250 Hz up to several kHz, and visual display rates around 60 FPS.

## From motion and physical interaction to sound

While observing the visual motion of such rendered models, a recurring question quickly arose: “wow, how would that *sound*?”. Unable to contain ourselves, we then started cranking simulation rates up to 44.1 kHz in an audio thread, connecting “microphones” into these virtual scenes in the simplest way possible (applying the motion of one or more mass elements directly to a loudspeaker - direct output from *very* localised listening points, with no consideration of sound propagation through an aerial medium), and there you have it: multisensory sound and visual objects at your fingertips, directly within Processing (cf. fig.4).

...well, as such the last statement is not strictly true: actually *touching* these objects requires force-feedback interaction. This was integrated using the Haply^{9} open-source device (shown in fig.5), as detailed in [14].

This leaves us with an entirely open environment in which virtual objects can be synthesised in real time in a space with 3 dimensions and 3 degrees of freedom, and are accessible to all of our senses save for smell and taste. This constitutes a result in itself, especially given that many aspects of today’s sound and music research (interaction mapping, model visual rendering, VR-reinforced presence...) address larger considerations than sound alone.

But that’s not all^{10}: another historical [15] and still very current state-of-the-art problem in sound synthesis [9] naturally finds some solutions when Newton’s equations are finally *given some space*^{11}: non-linearities.

# 3D Mass-interaction 101

## The gritty details

Now that we have presented our position regarding research dedicated to mass-interaction physics and teased some early results, it seems a good time to introduce (or reintroduce) the scientific and technological concepts behind mass-interaction physical modelling.

### Have you met Newton?

A little while back, a fine gentleman by the name of Isaac Newton stated the following:

*In an inertial frame of reference, an object either remains at rest or continues to move at a constant velocity, unless acted upon by a force.*

*In an inertial frame of reference, the vector sum of the forces f on an object is equal to the mass m of that object multiplied by the acceleration a of the object: f = ma.*

*When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.*

These three laws are the basis for resolving just about any mechanical system, by representing it as point masses and by expressing the different kinds of forces (gravitational, elastic, frictional, etc.) applied to them.

### A numerical discretisation scheme

As in any numerical resolution of a set of partial differential equations, various discretisation schemes may be employed, from lower-order methods (such as Euler) to more complex schemes (such as Runge-Kutta). The choice of a scheme results from considerations of numerical stability, computational complexity and causality.

While finite-difference schemes for lumped methods in the 1D case are well documented in the physical modelling literature (see [1, 5, 6]), the *N*-dimensional case has rarely been a topic of interest within this community. Below, we present a formulation in which positions and forces are *N*-D vectors (3D in the case of the *miPhysics* library).

A common starting point for representing and computing discretised modular mass-interaction systems^{12} is to apply a second order central difference scheme to Newton’s second law:

$$\label{eq:massNewton}
\textbf{f} = m.\textbf{a} = m \frac{d^2 \textbf{u}}{dt^2}$$

(\( \mathbf{f} \) is the force applied to the mass, \( m \) its inertia, \( \mathbf{a} \) its acceleration vector and \( \mathbf{u} \) its position vector), resulting in the following normalised form (discrete-time position and force vectors noted \( \mathbf{U} \) and \( \mathbf{F} \), and \( M \) the discrete-time inertial parameter defined as \( M=m/ΔT^2 \), with \( ΔT \) the sampling interval):

$$\label{eq:discretetimemass}
\textbf{U}_{(n+1)} = 2.\textbf{U}_{(n)}- \textbf{U}_{(n-1)} + \frac{\textbf{F}_{(n)}}{M}$$
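For illustration, the update above can be sketched in a few lines of code (a Python sketch with hypothetical names, rather than the library's Java implementation; positions and forces are 3D vectors as in *miPhysics*):

```python
import numpy as np

def mass_step(u_n, u_prev, f_n, m, dt):
    """One step of the central-difference scheme:
    U(n+1) = 2*U(n) - U(n-1) + F(n)/M, with discrete inertia M = m / dt**2."""
    M = m / dt**2
    return 2.0 * u_n - u_prev + f_n / M

# A unit force along x on a unit mass at rest (dt = 1 for readability):
u_next = mass_step(np.zeros(3), np.zeros(3),
                   np.array([1.0, 0.0, 0.0]), m=1.0, dt=1.0)
# u_next is [1., 0., 0.]: the mass starts moving along x
```

Note that no velocity variable is stored: the scheme only keeps the two previous position vectors, velocity being implicit in their difference.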

This leaves interactions: the elements that apply forces to material points. In most cases of mechanical interactions, the exerted force can be expressed as a function of position and velocity. As an example, the magnitude of the visco-elastic force applied by a linear spring (with stiffness coefficient \( k \), damping coefficient \( z \) and resting length \( l_{0} \)) connecting a mass \( m_{2} \) at position \( \textbf{u}_{2} \) to a mass \( m_{1} \) at position \( \textbf{u}_{1} \) is given by:

$$\label{eq:HookeSpring}
f_{{1\rightarrow2}} = -k.(||\textbf{u}_2 - \textbf{u}_1|| - l_0) - z.\frac{d}{dt}\left(||\textbf{u}_2 - \textbf{u}_1||\right)$$

Approximating the distance derivative with the backward Euler scheme, we obtain the scalar force value \( F_{spring} \):

$$\label{eq:springDampAll}
\begin{aligned}
dist_{(n)} = & ||\textbf{U}_{2(n)} - \textbf{U}_{1(n)}|| \\
F_{spring(n)} = & -K .(dist_{(n)} - l_0)\\
& -Z .(dist_{(n)} - dist_{(n-1)})
\end{aligned}$$

With the discrete-time stiffness parameter \( K = k \), and the discrete-time damping parameter \( Z = z/ΔT \). The resulting force vector (defined along the direction vector between both masses) is finally applied symmetrically onto each mass (Newton’s third law):

$$\label{eq:springFrcApply}
\begin{aligned}
\textbf{F}_{proj(n)} &= F_{spring(n)}. \frac{\overrightarrow{\textbf{U}_{2(n)} - \textbf{U}_{1(n)}}}{||\textbf{U}_{2(n)} - \textbf{U}_{1(n)}||} \\
\textbf{F}_{{2\rightarrow1}(n)} &=- \textbf{F}_{proj(n)} \\
\textbf{F}_{{1\rightarrow2}(n)} &=+ \textbf{F}_{proj(n)}
\end{aligned}$$
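In code, the full interaction computation (scalar spring-damper value, projection onto the inter-mass direction, symmetric application) can be sketched as follows (an illustrative Python version with hypothetical names; the library itself is written in Java):

```python
import numpy as np

def spring_damper_forces(u1, u2, dist_prev, k, z, l0):
    """Visco-elastic interaction force between two masses, assuming dist > 0.
    Returns (force applied to mass 1, force applied to mass 2, current distance)."""
    dist = np.linalg.norm(u2 - u1)
    f_spring = -k * (dist - l0) - z * (dist - dist_prev)
    f_proj = f_spring * (u2 - u1) / dist     # project onto the direction vector
    return -f_proj, f_proj, dist             # Newton's third law: symmetric pair

# A spring stretched to twice its resting length (k = 1, no damping):
f21, f12, d = spring_damper_forces(np.zeros(3), np.array([2.0, 0.0, 0.0]),
                                   dist_prev=2.0, k=1.0, z=0.0, l0=1.0)
# f12 is [-1., 0., 0.]: mass 2 is pulled back toward mass 1, and f21 = -f12
```

The returned distance is kept by the caller and fed back as `dist_prev` on the next step, implementing the backward Euler damping term.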

The main difference with regard to the classical “topological” 1D algorithms is the explicit use of Euclidean geometry associated with the spatial attributes of the mass-type physical elements.

### Computing the system dynamics

A step of the discrete physical computation is structured as follows: masses compute their new positions according to the discrete-time vector sum of the forces \( \mathbf{F} \) applied to them, updating the model positions. Interactions then calculate the applied forces (using the newly computed positions and those of the previous step), yielding a new force sum for each mass. This sum is used in the next computation step to calculate new mass positions, and so forth.
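Put together, a complete step alternates these two phases. The following illustrative Python sketch (not the miPhysics API) applies the scheme to a single mass tethered to a fixed anchor by an undamped linear spring:

```python
import numpy as np

def simulate(u0, anchor, m, k, l0, dt, steps):
    """Alternate force computation and position updates (central-difference
    scheme) for a mass-anchor pair, and record the mass trajectory."""
    M = m / dt**2
    u_prev = u0.copy()
    u_n = u0.copy()                    # u_prev == u_n: zero initial velocity
    trajectory = []
    for _ in range(steps):
        # interaction phase: spring force from the current positions
        d = u_n - anchor
        dist = np.linalg.norm(d)
        f = -k * (dist - l0) * d / dist
        # mass phase: new position from the accumulated force
        u_next = 2.0 * u_n - u_prev + f / M
        u_prev, u_n = u_n, u_next
        trajectory.append(u_n.copy())
    return np.array(trajectory)

# Mass released 0.5 beyond the spring's resting length: it oscillates around x = 1
traj = simulate(np.array([1.5, 0.0, 0.0]), np.zeros(3),
                m=1.0, k=0.1, l0=1.0, dt=1.0, steps=200)
```

With these parameters the scheme is well within its stability region (\( k\,ΔT^2/m \ll 4 \)), and the sampled motion remains a bounded oscillation around the resting position.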

### Modules and properties

Mass modules are defined by an inertia parameter (possibly infinite, hence representing anchored points) and initial values for their spatial coordinates and velocity.

Interaction modules are defined by stiffness and/or viscosity parameters and a resting length. These modules can also include conditional properties, naturally leading to non-linear interactions (such as representing contact forces between material elements).

## Modelling

Based on the previous elementary concepts and modules, modelling with MI consists in building a geometrical model by positioning material components and connecting them together through interaction components, and by specifying the parameters and initial conditions of each one. Furthermore, every consideration regarding ways of listening to, visualising or interacting with one or several modules, parameters, or even topological or geometrical properties of a model, is up to the user.

### Do not fear simplicity

Given that models can contain tens of thousands of elements, the perspective of configuring physical parameters and initial states can seem a little daunting. When exploring larger structures, a simple first approach is to consider locally homogeneous parameters in sections of the object, as they can always be fine-tuned at a later stage. However, it is worth noting that richer mass-interaction behaviour does not always stem from huge homogeneous models (which take on a role similar to a propagation medium), but often from somewhere in the middle ground - where careful parameter tuning and interaction design meet a sufficiently rich resonant structure to catch our ear! More generally, users can deploy several strategies to build up their models, either from scratch one module at a time, or through scripting strategies for geometry or multi-parameter specification.
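As a toy illustration of such scripting strategies (with hypothetical data structures, not the library's API), a regularly spaced string model can be generated in a few lines by placing masses along a line and chaining identical interactions between neighbours:

```python
def build_string(n_masses, length, m, k, z):
    """Generate a string model: n point masses chained by identical
    spring-dampers, with both extremities anchored (infinite inertia)."""
    spacing = length / (n_masses + 1)
    masses = [{"pos": (spacing * (i + 1), 0.0, 0.0), "m": m}
              for i in range(n_masses)]
    anchors = [{"pos": (0.0, 0.0, 0.0)}, {"pos": (length, 0.0, 0.0)}]
    nodes = [anchors[0]] + masses + [anchors[1]]
    springs = [{"a": i, "b": i + 1, "k": k, "z": z, "l0": spacing}
               for i in range(len(nodes) - 1)]
    return nodes, springs

nodes, springs = build_string(n_masses=32, length=1.0, m=1.0, k=0.3, z=0.001)
# 34 nodes (32 masses + 2 anchors) chained by 33 spring-dampers
```

The same loop-based logic extends directly to membranes or volumes, and parameters can later be modulated per section rather than kept homogeneous.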

### Figuring out 3D

Even though clear parallels can be drawn between mass-interaction methods and general finite-difference and/or finite-element methods, the most notable difference lies in the way matter and spatiality are considered. FDM/FEM methods discretise unidimensional or multidimensional objects, reducing them to numerous small sections of linear, planar or volumetric geometry. Each section is literally anchored to a static position in space, around which the matter it represents evolves locally, allowing wave propagation. The latter can occur along one or several dimensions while, in most cases, each local section of space allows the matter it represents to move along a single degree of freedom. At the global scale, however, the general geometry of these objects never changes. On the other hand, 3D mass-interaction models do not presuppose any spatial grid or subsections onto which virtual matter is hung. In MI, masses are free to go wherever they see fit (and they can move along 3 degrees of freedom!). Hence, the interactions connecting them, and their properties (particularly in terms of resting length), are crucial. Ultimately, each and every module contributes to the global materiality of an object, in terms of both its geometry (and structural consistency) and its mechanical properties.

In simpler terms, this means that if one creates a cube with 8 nodes and 12 edges (i.e. 8 masses and 12 interactions), the slightest blow will make it collapse in on itself: such a cubic mesh is insufficient to describe a structurally consistent cubic virtual object (cf. fig.6). It will have to be consolidated and, more generally, thought of as a “deformable solid” (imagine fig.2’s links being elastic).

## Efficiency

Even if the 3D mass-interaction engine implemented in Processing must be regarded as a non-optimised prototype, it allows one to grasp the potential of the method. Figure [fig:bench] gathers four scenarios of real-time models, referencing model complexity (number of mass-type and interaction-type elements) and the modalities involved (visual, audio and/or haptic). Performance was measured on a single core of a standard laptop^{13}: models pass if there are no image, audio or haptic dropouts^{14} during computation. The general stability of *miPhysics* is no obstacle to even the wildest experiments, and its limits are well known and easily understood by any user.

# Non-linear behaviours!

## Do the math - or maybe don’t

Within the last 20 years, the emphasis of physically-based sound synthesis has shifted from exciting (generally) linear resonators via non-linear interactions towards replicating complex acoustical behaviours through the modelling of non-linear dynamics in the resonating structures themselves. Modal synthesis [16], waveguide methods [17], FDTD and even 1D mass-interaction systems [18] have been reinvented to this end.

Concurrently, recent years have seen a strong rise of finite-difference time-domain schemes as computational limitations lessen, now allowing for off-line synthesis of very large models as well as real-time synthesis of small to medium scale ones. In the dominant literature, systems are first formalised under linear conditions (from the 1D, 2D or, more rarely, 3D wave equation - often considering vibrations along a single dimension) before adding specific non-linear formulations to account for phenomena such as Airy stress, leading to effects such as pitch glide and chaotic oscillations in plates [1].

In all of the above, non-linearities present themselves as mathematical ramifications, incorporated into formulations primarily rooted in acoustics - taking modal representations outside of their comfort zone, one could say.

We propose the *complete* opposite: within a framework that fully accounts for the tri-dimensional spatial properties of matter (cf. above), build vibrating bodies, give them a good smack, and observe. If Newton was right, the Pandora’s box of non-linear behaviour might just open - without us ever having to write an equation for it^{15}.

## Experiments & Observations

### What can we expect?

Many sonic attributes can be traced to non-linear phenomena in vibrating bodies. Fletcher and Rossing [19] mention: the dependence of vibration mode frequencies upon excitation amplitude (equivalent to “tension modulation”), the generation of overtones that are exact harmonics of the fundamental oscillation (“harmonic distortion”), forced oscillations at submultiples of the driving frequency, and chaotic oscillations.

Below, we present results from simple 3D mass-interaction models, with the aim of (hopefully!) observing some of these phenomena.

### Observation of tension modulation

Experimental measurements (cf. fig.9) were conducted on a simulated string composed of 32 masses, excited at 1/3 of its length by varying levels of force impulse. In the resulting spectrogram, the pitch glide generated by tension modulation is immediately apparent and correlates with the excitation amplitude.

Indeed, the purely linear springs exert a restoring force proportional to the Euclidean distance between masses and are therefore dynamically affected by elongation (compression or extension of the spring): a larger excitation means more elongated springs, resulting in increased tension.
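This mechanism can be observed on a minimal system: a single mass strung between two anchors by purely linear 3D springs, plucked transversally (an illustrative sketch of the principle, not the experimental setup of fig.9). Harder plucks elongate the springs more, and the number of transverse zero crossings over a fixed duration - a rough proxy for frequency - rises accordingly:

```python
import numpy as np

def pluck_crossings(amplitude, steps=20000, k=0.1, m=1.0, dt=1.0):
    """Transverse oscillation of one mass between two anchors, with resting
    lengths equal to the geometric spacing (no initial tension). Returns the
    number of zero crossings of the transverse coordinate."""
    anchors = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
    l0, M = 1.0, m / dt**2
    u_prev = np.array([1.0, amplitude, 0.0])
    u_n = u_prev.copy()
    crossings = 0
    for _ in range(steps):
        f = np.zeros(3)
        for a in anchors:              # purely linear springs, in 3D space
            d = u_n - a
            dist = np.linalg.norm(d)
            f += -k * (dist - l0) * d / dist
        u_next = 2.0 * u_n - u_prev + f / M
        if u_next[1] * u_n[1] < 0:     # transverse zero crossing
            crossings += 1
        u_prev, u_n = u_n, u_next
    return crossings

soft, hard = pluck_crossings(0.01), pluck_crossings(0.5)
# hard > soft: larger excitation -> more elongation -> higher oscillation frequency
```

No non-linear term ever appears in the code: the amplitude-dependent pitch emerges purely from the Euclidean distance computation.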

### Observation of chaotic oscillations

In this case, the physical model is a 3D beam (86 masses and 512 interactions, based on the “structurally consistent” cubes of fig.6), fixed at both ends and struck by a “plucking” mechanism at varying levels of speed (cf. fig.10).

Figure [fig:chaos] shows, on the one hand, a low-amplitude excitation resulting in clear-cut, static vibration modes, and on the other, a high-amplitude blow that brings both heavy pitch glides and chaotic oscillations over a long period of time (the much sought-after “whooshing” sound). This example also pinpoints a creative perspective of this kind of modelling: in real life, it may be impossible to strike a stiff beam hard enough for it to enter a chaotic regime; here, there is no problem in doing so. It also means that these rich behaviours can be obtained for almost any given model topology, which is (to say the least) an exciting perspective for sound exploration.

These preliminary observations confirm that generalised non-linear behaviour of physical matter - the current *hot topic* of physically-based sound synthesis - is present by essence in multi-dimensional MI virtual objects. Bearing in mind the potential of such behaviours in the context of “creative” modelling, we believe that mass-interaction physics is a highly relevant method with a key role to play in formally understanding, designing and manipulating virtual vibrating objects that exhibit non-linearities.

# Conclusion and perspectives

This entire paper has been dedicated to highlighting the potential the authors foresee in 3D mass-interaction models for sound and music creation. The methodology itself is worth mentioning: a necessary step was to step back from sound synthesis in order to consider environments and tools that include, as a *continuous* and *fully integrated* workflow, every key aspect for exploring the potential of multisensory 3D mass-interaction physical models: real-time computation and rendering of 3D scenes, sound synthesis capabilities, both control and haptic interaction, a language that enables scripting for model design, etc. Model design and experimental validation show that, within the scope of sound synthesis, such models naturally yield emergent non-linear physical behaviour - and do so without any added mathematical or modelling complexity.

More generally, it appears that the scalar world in which most audio-based DSP environments are built might not be the most suitable for elaborating new paradigms for musical creation based on 3D mass-interaction physical models. **Multi-dimensional geometry** is the key - and not simply as a consideration helping to create meshes whose acoustic behaviour can then be reduced to 1D or 2D, but as a necessary level of description and computation of virtual physical matter.

From this point onward, all is yet to be done regarding:

- The actual tool, by strengthening genericity in terms of languages and environments. As a framework for the production of multisensory objects, it lies at the crossroads of computer graphics, human-computer interaction and computer music, and while Processing has been an invaluable tool for prototyping within these three fields, it is not an end in itself.
- The exhaustive formalisation and characterisation of MI multi-dimensional modelling, both through analytical considerations and empirical studies.
- Reflecting upon how one listens to multi-dimensional and spatially distributed virtual physical objects.
- Designing model and signal analysis tools for such objects (since topology-based modal analysis is no longer an option).
- Taking the geometrical aspect further: an inherent limit as of yet is that all contacts are currently defined as *sphere-to-sphere* interactions between two point-like material elements. Could geometrical surface modelling and contact handling from the CGI and haptics fields be leveraged in order to extend the formalism?

As a general conclusion, we affirm that mass-interaction physics is still a potent framework for sound synthesis, and that it should not be put on a shelf as part of physical modelling history just yet... but don’t take our word for it - grab the library^{16} and get coding to see for yourself!

# Bibliography

[1] S. Bilbao, Numerical Sound Synthesis: Finite Difference Schemes and Simulation in Musical Acoustics. Chichester, UK: John Wiley and Sons, 2009.

[2] J. O. Smith, Physical Audio Signal Processing: for Virtual Musical Instruments and Digital Audio Effects. W3K Publishing, 2010.

[3] ——, “Physical modeling using digital waveguides,” Computer Music Journal, vol. 16, no. 4, pp. 74–91, Winter 1992.

[4] J.-M. Adrien, “The missing link: Modal synthesis,” in Representations of Musical Signals. MIT Press, 1991, pp. 269–298.

[5] V. Välimäki, J. Pakarinen, C. Erkut, and M. Karjalainen, “Discrete-time modelling of musical instruments,” Reports on Progress in Physics, vol. 69, no. 1, p. 1, 2005.

[6] C. Cadoz, A. Luciani, and J. L. Florens, “Cordis-anima: a modeling and simulation system for sound and image synthesis: the general formalism,” Computer music journal, vol. 17, no. 1, pp. 19–29, 1993.

[7] J. Leonard and C. Cadoz, “Physical modelling concepts for a collection of multisensory virtual musical instruments,” in New Interfaces for Musical Expression 2015, 2015, pp. 150–155.

[8] C. Roads, “Interview with Max Mathews,” Computer Music Journal, vol. 4, no. 4, pp. 15–22, 1980.

[9] S. Bilbao, “The changing picture of nonlinearity in musical instruments: Modeling and simulation,” in Proc. Int. Symp. Musical Acoustics, 2014.

[10] N. Castagné and C. Cadoz, “Genesis: a friendly musician-oriented environment for mass-interaction physical modeling,” in ICMC 2002 - International Computer Music Conference, 2002, pp. 330–337.

[11] S. Rimell, D. M. Howard, A. M. Tyrrell, R. Kirk, and A. Hunt, “Cymatic: Restoring the physical manifestation of digital sound using haptic interfaces to control a new computer-based musical instrument,” in ICMC, 2002.

[12] M. Evrard, A. Luciani, and N. Castagné, “Mimesis: Interactive interface for mass-interaction modeling,” in International Conference on Computer Animation and Social Agents, 2006, pp. 177–186.

[13] K. Sillam, M. Evrard, and A. Luciani, “A real-time implementation of the dynamic particle coating method on a gpu architecture,” in 4th Workshop in Virtual Reality Interactions and Physical Simulation 2007. Eurographics Association, 2007, pp. 69–78.

[14] J. Leonard and J. Villeneuve, “Fast audio-haptic prototyping with mass-interaction physics,” in International Workshop on Haptic and Audio Interaction Design (HAID’19), 2019.

[15] J.-C. Risset, “Computer study of trumpet tones,” The Journal of the Acoustical Society of America, vol. 38, no. 5, pp. 912–912, 1965.

[16] D. Roze and J. Bensoam, “Nonlinear physical models of vibration and sound synthesis,” in Unfold Mechanics for Sounds and Music, 2014, pp. 1–1.

[17] V. Välimäki, T. Tolonen, and M. Karjalainen, “Plucked-string synthesis algorithms with tension modulation nonlinearity,” in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, vol. 2, 1999, pp. 977–980.

[18] J. Villeneuve, C. Cadoz, and J. Leonard, “Analyse de modèles physiques, modèles physiques d’analyse,” Traitement du Signal, vol. 32, no. 4/2015, pp. 365–390, 2015.

[19] N. H. Fletcher and T. D. Rossing, The Physics of Musical Instruments. Springer Science & Business Media, 2012.

[20] K. van den Doel, P. G. Kry, and D. K. Pai, “Foleyautomatic: Physically-based sound effects for interactive simulation and animation,” SIGGRAPH ’01 Proceedings of the 28th annual conference on Computer graphics and interactive techniques, 2001.

[21] J.-H. Wang, A. Qu, T. Langlois, and D. James, “Toward wave-based sound synthesis for computer animation,” ACM Transactions on Graphics, vol. 37, pp. 1–16, 07 2018.

we will not delve into atom-scale mechanics for now.↩

such as the kazoo.↩

https://processing.org↩

https://vvvv.org↩

https://www.nodebox.net↩

see section 6 for an example regarding chaotic emergence generally associated with cymbal-like structures - obtained with a relatively small model that is nothing like a cymbal.↩

well, within the limits of numerical stability.↩

see for instance A. Mondot and Claire B.’s creation, Hakanai.↩

http://www.haply.co/↩

insert synthesised sound of drum rolls↩

insert synthesised sound of a heavily-struck cymbal, from which a vast panoply of non-linear behaviours emerge↩

such as in the CORDIS ANIMA formalism.↩

Dell Precision 5530 running Ubuntu 18.04 & Processing 3.5.3, Specs: Intel i7-8850H 4 cores at 2.6GHz, 16GB RAM.↩

since the OS is non real-time, sporadic missed haptic frames can occur [14] - although not enough to significantly alter haptic interaction.↩

we mass-interaction people hate writing equations.↩

https://github.com/mi-creative↩