sotonDH Small Grants: Investigation into Synthesizer Parameter Mapping and Interaction for Sound Design Purposes – Post 1
October 18, 2013
by Darrell Gibson
Introduction
The phrase “sound design” first started being used in the film industry in the 1970s [1]. Since then it has been used in many different contexts and now means different things to different people, so it is worth clarifying its use here. Sound design is considered to be the generation, synthesis, recording (studio and location) and manipulation of sound. That is, creating and shaping sounds to meet a given set of requirements or a specification. Therefore, all of the following are different forms of sound design: synthesizer programming, generating and recording found sounds, Foley, applying effects during audio production, and so on. Today sound design is required in many areas, including music production, soundscapes, film, television, theatre, computer/video games, live sound and sound art.
One important area of sound design as a discipline is synthesizer programming, where the designer configures a sound synthesizer’s available parameters to produce a desired output sound. Whether the synthesizer is implemented in hardware or software, in a standard synthesizer model the parameters are typically accessed through controls such as dials, sliders and buttons. This method relies on the designer having extensive knowledge of the particular synthesis paradigm used by each synthesizer, its internal architecture and the sound design possibilities of each parameter.
The sheer number of parameters that synthesizers possess, often hundreds and sometimes thousands, further compounds the difficulty, while for some forms of synthesis (e.g. Frequency Modulation and Wavetable) the relationship between the parameters and the sound-generation characteristics is not always simple. In addition to these problems, sound designers require creative and critical listening skills, which also take considerable practice to develop, in order to move the process towards a defined sonic result. This combination of factors means that it can be very difficult to learn how to design sounds with synthesizers, and often places effective design outside the reach of traditional musicians and casual users. Over the years synthesizer manufacturers have addressed this problem by supplying their devices with extensive banks of preset patches. Although this is satisfactory for users who only want to use predesigned sounds, it detracts from the creative process and can be restrictive. It is also of limited value to those wishing to learn the intricacies of synthesizer programming. The best they can hope for is to audition preset patches until they find something close to the desired sound and then attempt to modify it by selectively “tweaking” the parameters used. This situation is also not desirable for experienced sound designers, who will often have a good idea of the sound they are aiming to create, but without considerable synthesizer experience it may not be obvious how to go about either creating it from scratch or moving from a preset to the desired sound. This is particularly evident for more complex synthesis systems. In addition, there is normally no way of working between multiple target sounds, so that designers could arrange the sound in different configurations and explore the sound space defined by multiple targets. These limitations of synthesizers, combined with the historical origins of sound design in Foley, have perhaps led to designers still working more with recorded sound than with synthetic sound.
Synthesizer Interfaces
One of the unique features of synthesizer technology compared with traditional instruments is that it presents two interfaces to the user: one for programming the sound generator and the other for the actual musical input. During a performance the user can potentially interact with either or both interfaces, offering a very rich form of performance and expression. However, this depends on the performer understanding the tonal possibilities that a particular synthesizer patch offers and being able to access these via the synthesizer’s interfaces. Over the years there has been significant research into both synthesizer performance interfaces and programming interfaces.
Synthesizer Performance Interfaces
When the first electronic hardware synthesizers were designed there was no defined structure for the performance interface. As a result, new performance interfaces emerged, such as the Theremin [2] and the Trautonium [3]. However, as time progressed manufacturers standardised the interface by using a representation of a traditional piano keyboard. This performance interface offers a logical layout and familiarity to those who have learnt traditional keyboard instruments [4], [5]. Many manufacturers, such as Moog, ARP, Korg and Roland, adopted this performance interface for their “all-in-one” designs. The advent of MIDI (Musical Instrument Digital Interface) in the 1980s [6] led to a standardised mechanism for separating the performance interface from the sound-generating synthesizer. Although MIDI was primarily designed for keyboard devices, significant flexibility was allowed in the standard, which has permitted new input devices that do not follow traditional designs [7], [8] & [9].
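To make that separation concrete, the minimal sketch below (in Python, with no external dependencies) builds raw MIDI Note On and Note Off messages; any MIDI-compliant sound generator could render the same bytes, regardless of which performance interface produced them. The helper functions are illustrative and not part of any particular library.

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a raw MIDI Note On message: status byte 0x90 | channel,
    then note number and velocity (each 0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel: int, note: int) -> bytes:
    """Build a raw MIDI Note Off message (0x80 | channel)."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0x00])

# Middle C at moderate velocity on channel 1; the same bytes could drive
# a hardware module or a software synthesizer.
message = note_on(0, 60, 96)
print(message.hex(" "))  # 90 3c 60
```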
Synthesizer Programming Interfaces
As previously mentioned, the programming interface typically presents the user with knobs, dials, sliders, etc. that directly control the synthesizer’s parameters. This is a direct mapping of the synthesis parameters and does not relate to the output sound. It follows directly from the original electronic hardware synthesizers, such as the Moog Modular [10], where the controls were connected directly to the electronic components. Various proposed solutions examine the mapping of the synthesizer parameters between the synthesis engine and the programming interface, to see if the relationship can be made more intuitive and less technical [11], [12] & [13].
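As an illustration of this one-to-one arrangement, the sketch below models a small set of engine parameters and a control surface that simply writes values straight into them, so the user must already know what each parameter does. The parameter names are hypothetical and stand in for whatever a given synthesis architecture actually exposes.

```python
from dataclasses import dataclass

@dataclass
class SubtractivePatch:
    # Engine parameters exposed exactly as the engine defines them.
    oscillator_waveform: str = "saw"
    filter_cutoff_hz: float = 1200.0
    filter_resonance: float = 0.3
    amp_attack_s: float = 0.01
    amp_release_s: float = 0.4

def set_control(patch: SubtractivePatch, control: str, value) -> None:
    """Direct mapping: the control name *is* the engine parameter name."""
    setattr(patch, control, value)

patch = SubtractivePatch()
set_control(patch, "filter_cutoff_hz", 800.0)   # turning the "cutoff" dial
set_control(patch, "filter_resonance", 0.7)     # turning the "resonance" dial
print(patch)
```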
Synthesizer Interface Mapping
As synthesizers possess two interfaces, the mapping between them will ultimately affect the expressiveness of the synthesizer as an instrument. Parameters of the synthesised sound can be changed or modified so that differently articulated sounds can be created. Assuming that the performance interface allows suitable physical expressions to be captured, the original sounds and their articulations can then be mapped to the performance interface. The choice of which parameters are mapped, and in what quantities, will ultimately affect the expressiveness of the instrument [14]. As a result, the expressive control of both systems has been considered extensively.
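By way of contrast with the one-to-one programming interface sketched above, the following sketch shows a simple one-to-many mapping layer in which a single performance gesture is routed to several synthesis parameters at once, the kind of mapping Hunt et al. [14] argue contributes to expressiveness. The parameter names and scaling curves are illustrative assumptions rather than those of any particular instrument.

```python
def map_pressure(pressure: float) -> dict:
    """Map a normalised 0.0-1.0 pressure gesture onto three engine parameters."""
    pressure = max(0.0, min(1.0, pressure))
    return {
        "filter_cutoff_hz": 300.0 + 4000.0 * pressure ** 2,  # brighten non-linearly
        "vibrato_depth":    0.1 * pressure,                  # add movement
        "amp_gain":         0.5 + 0.5 * pressure,            # and loudness together
    }

# Light and heavy pressure drive brightness, vibrato and level as one gesture.
print(map_pressure(0.2))
print(map_pressure(0.9))
```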
Research Questions
These issues raise three distinct questions in relation to how synthesizers are used for sound design. First, is there a way that sound design can be performed without an in-depth knowledge of the underlying synthesis technique? Second, can a large number of synthesizer parameters be controlled intuitively with a set of interface controls that relate to the sounds themselves? Finally, can multiple sets of complex synthesizer parameters be controlled and explored simultaneously?
References
1. Whittington, W. B., Sound Design and Science Fiction. University of Texas Press, 2007.
2. Douglas, A., Electrical synthesis of music. Electronics & Power, vol. 10, no. 3, pp. 83–86, March 1964.
3. Glinsky, A., Theremin: ether music and espionage. University of Illinois Press, 2000.
4. Moog, R. A., and T. L. Rhea, Evolution of the keyboard interface: The Bösendorfer 290 SE recording piano and the Moog multiply-touch-sensitive keyboards. Computer Music Journal, vol. 14, no. 2, pp. 52–60, 1990.
5. Goebl, W., R. Bresin & A. Galembo, The piano action as the performer’s interface: Timing properties, dynamic behaviour, and the performer’s possibilities. In Proceedings of the Stockholm Music Acoustics Conference (SMAC’03), August 6–9, vol. 1, pp. 159-162. Stockholm, Sweden: Department of Speech, Music, and Hearing, Royal Institute of Technology, 2003.
6. MIDI Manufacturers Association, The Complete MIDI 1.0 Detailed Specification. MIDI Manufacturers Association, Los Angeles, CA, USA, 2000.
7. Jordà, S., G. Geiger, M. Alonso & M. Kaltenbrunner, The reacTable: exploring the synergy between live music performance and tabletop tangible interfaces. In Proceedings of the 1st international conference on Tangible and embedded interaction, pp. 139-146. ACM, 2007.
8. Collins, N., C. Kiefer, M. Z. Patoli & M. White, Musical exoskeletons: Experiments with a motion capture suit. In Proceedings of New Interfaces for Musical Expression (NIME), Sydney, Australia, 2010.
9. Rothman, P., The Ghost: an open-source, user programmable MIDI performance controller. In Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 431-435. 2010.
10. Jenkins, M. Analog Synthesizers: Understanding, Performing, Buying. Focal Press, 2007.
11. Wanderley, M. M., & P. Depalle. Gestural control of sound synthesis. Proceedings of the IEEE 92, no. 4, 632-644, 2004.
12. Hunt, A., & M. M. Wanderley, Mapping performer parameters to synthesis engines. Organised Sound, vol. 7, no. 2, pp. 97–108, 2002.
13. Goudeseune, C., Interpolated Mappings for Musical Instruments. Organised Sound, 7(2):85–96, 2002.
14. Hunt, A., M. Wanderley & M. Paradis, The Importance of Parameter Mapping in Electronic Instrument Design. Journal of New Music Research, vol. 32, no. 4, pp. 429–440, 2003.