![](https://assets.isu.pub/document-structure/220817154531-99404042f30efd7cb309a42159b2a93e/v1/825113a2645ad22651d1e43b1ef67f39.jpeg?width=720&quality=85%2C50)
THE AUGMENTED TROMBONE
BY DR TONY BOORER
As I stood in a crowded Birmingham Symphony Hall, overwhelmed by the bustle of excited punters and eager children, my expression was one of perplexed amazement. Had all these people really travelled to hear the don of electroacoustic music, Francis Dhomont? To my shame, as a jobbing trombonist, I had only recently discovered the term 'acousmatic sound' and its use in musique concrète, developed by the French composer Pierre Schaeffer in the 1940s. Seeing such a throng of people interested in an otherwise niche genre was like spotting Downton Abbey fans at a Star Trek convention. It was all new to me, but not to them. I was stunned and humbled. Young and not so young, 'they', 'them', 'he', 'him', 'she', 'her', 'ze', 'hir'; every conceivable pronoun, all here to experience an in-depth encounter with the mimetic and the acousmatic. The event I had come to see was BEAST FEaST 2014, Birmingham University's Electroacoustic Music Festival – a festival I would perform in as a guest six years later, and my first excursion into the experimental world of electroacoustic sound diffusion. However, this was not BEAST FEaST 2014… this was the queue for Simon Cowell and Britain's Got Talent 2014. I had wandered into the wrong venue!
Dhomont's concert was at the Birmingham Electroacoustic Sound Theatre, known by the acronym BEAST and part of Birmingham University. When I eventually turned up at the correct venue, I found a murder of Trekkies sitting around the centre of a small, dark ambisonic theatre surrounded by a planetarium of speaker LEDs. Francis Dhomont was seated at the centre helm, his instrument a row of mixer channel faders. I was home, and the universe made sense again. I sat motionless behind the great Dhomont, my mind and senses stretched to capacity by a continuum of recorded and processed sound. An ever-evolving electroacoustic tapestry drifted out of one hundred speakers in a three-dimensional matrix of glittering acousmatic shapes. You may have heard of 'spatial audio', especially in gaming and in the recent commercial push by Dolby Atmos, but technologically speaking, it is not new. Apple and Dolby may have coined the term 'spatial audio', but the technique has existed since the 1970s under various descriptions and names, ambisonic and binaural being the most common. For over 40 years, small cults have huddled in dark theatres with sound diffused around them using elevation rather than the more common flat, linear front, back and sides. It occurred to me: wouldn't it be interesting if an acoustic instrument such as the trombone entered that space? One of the core ideas behind 'acousmatic sound' is its elevation and diffusion. As an experiential art form, it also uses acoustic prestidigitation and the obfuscation of any identifiable audio source. The performer would enter the space, their sound diffused in an ambisonic arena, creating an audio experience that is both raw and processed. Psychoacoustics meets three-dimensional audio as the listener enters an environment of acousmatic discombobulation, unaware of the roaming threshold between natural and imitative sound. Compared to a traditional concert setup, the experience can be disconcerting for the novice listener.
Acousmatic music often feels like a closely guarded secret, confined to conferences and specialised festivals and rarely entering the domain of contemporary music conservatoires. Denis Smalley, one of the pioneers of acousmatic soundscapes, was sceptical about the idea of traditional acoustic instruments performing in an acousmatic space. Smalley believed new instruments were the only accessible route to pure electroacoustic integration. I felt otherwise. The augmentation of a traditional trombone, using sensors and 3D-printed design, could create an intuitive conduit for raw electroacoustic interaction without compromising technique or tone. The investigation of current technology is an ever-changing and contextual pursuit, beginning with integral, often solipsistic questioning: why use technology when composing for the trombone? What is the purpose of electronic interaction? Is it merely a gimmick? How will using technology as a performer enhance the trombone's sonic palette? When creating music or soundscapes, is the use of technology a quest for technology's sake, a musical endeavour, or both? My interest in technology was to expand the capability and timbre of the trombone. The journey began with acoustic manipulation, leading to the design of a bespoke augmentation system – the 'eBone' system. After experimenting with everything from Wii controllers strapped to the slide to condenser microphones and variations on the performance pieces of Nicolas Collins, I designed a plunger mute with built-in sensors and a condenser microphone, 3D printed and programmed using the low-level code employed by the Arduino microcontroller. I investigated the rapidly developing climate of hyperinstruments, augmentation, machine learning and hybridised instruments. I analysed projects by Matthew Burtner, Sarah Reid at MIT, Stroppa, and many more, as far back as the tape music of Fulkerson and Dempster. However, my goal was electroacoustic interaction in a live performance environment without altering the traditional playing style of the trombonist.
As in the fundamental ethos of acousmatic music, I wanted the performer to blur the line between acoustic sound and live processed sound without compromising their natural performance technique, in other words, an intuitive system with a minimal learning curve. Many works have been written for trombone and tape/CD/electronics using fixed media as the accompaniment. In my opinion, for all their musical skill, such pieces suffer without genuine electroacoustic interaction; despite the different interpretive styles of a performer, the accompaniment or backing track will always remain the same. In the fringe world of experimental electroacoustic composition, the holy grail of live performance is intuitive interaction. There must be something to communicate gesture and musical intention from the performer to the electronic processes. ‘…artists must begin to question whether it is within the spirit of the age to hang onto, and vigorously defend, their own art genre boundaries, or whether it would be more fruitful to relax their grip…and allow them to hybridise and perhaps give rise to the other creative disciplines and forms…’ (Mark Bokowiec, 2011)
As a classically trained trombonist, I wanted to push the boundaries of my art genre. I needed tools to enable the gestural interaction of raw sound and electronic processing. Research led to a series of compositions for the trombone and interactive electronics. An augmentation system grew organically out of experimentation, composition, and research, not as a contrivance but as a necessary element when bringing the trombone into an electroacoustic space. In the spirit of Bokowiec's prescient observation, the development of such an augmentation system began – combined with a fascination for modern technological advances and their unique potential for performance, sound transformation and the creation of new timbres. I wanted a transferable and portable system when conceptualising a bespoke augmentation design for the trombone. A reverse engineering concept known as hardware hacking became essential in facilitating technical skills while serving as a framework for software strategies in the visual programming language Max/MSP. I stripped back various hardware items and entered the geeky world of GitHub to peel back the bonnet of software packages and transfer them to my growing knowledge of visual programming. It is impossible to know everything, but hacking enables us to go from B and find A in a dark room. Eventually, after a series of failures, a bespoke eBone system was designed and 3D printed (Figure 1).
The eBone System
![](https://assets.isu.pub/document-structure/220817154531-99404042f30efd7cb309a42159b2a93e/v1/f0a397f39a005504198acf8c42895470.jpeg?width=720&quality=85%2C50)
FIGURE 2. THE EPLUNGER MUTE.
EPLUNGER MUTE, FIGURE 2: A 3D-printed plunger mute equipped with a built-in contact microphone, six capacitive touch buttons (mappable using MIDI), a force-sensitive resistor (FSR), an accelerometer, an LED feedback system and a programmable MIDI sequencer. The ePlunger acts independently of the other devices and serves as the main microcontroller for the eSlide and eMController.
EMCONTROLLER: A MIDI controller with six capacitive touch buttons mounted on a 3D-printed mouthpiece shield, used to negate the need for a foot pedal.
EHUB: A box housing an x-OSC microcontroller, enabling wireless capability and utilising open sound control (OSC).
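To give a flavour of the kind of conversion such sensors require, here is a minimal Python sketch. It is an illustration only, not the eBone firmware (which runs on Arduino and x-OSC hardware feeding a Max/MSP patcher): it scales a 10-bit FSR reading down to the 7-bit range MIDI expects, and pairs it with an OSC-style address. The address `/ebone/fsr` and the scaling are my assumptions for demonstration.

```python
def fsr_to_midi_cc(raw: int, raw_max: int = 1023) -> int:
    """Scale a 10-bit FSR reading (0-1023) to a 7-bit MIDI CC value (0-127)."""
    raw = max(0, min(raw, raw_max))  # clamp noisy, out-of-range readings
    return round(raw * 127 / raw_max)

def osc_message(address: str, value: int) -> tuple:
    """Pair the scaled value with a hypothetical OSC address for the patcher."""
    return (address, value)

# A firm press near full scale maps to the CC maximum; no pressure maps to 0.
print(osc_message("/ebone/fsr", fsr_to_midi_cc(1023)))  # -> ('/ebone/fsr', 127)
print(osc_message("/ebone/fsr", fsr_to_midi_cc(0)))     # -> ('/ebone/fsr', 0)
```

The clamp matters in practice: raw analogue sensors routinely spike outside their nominal range, and an unclamped value would produce illegal MIDI data.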
The eBone devices informed composition, design and performance through a principle of bottom-up experimentation, otherwise known as trial and error. The advantage of this approach is evolution rather than revolution: a gradual development grew alongside composition until the complete system integrated with the electroacoustic work Discourses of Brexit, for augmented trombone and interactive electronics. Discourses of Brexit is an electroacoustic suite in five movements, performed with the stage setup illustrated in Figure 4.
ESLIDE, FIGURE 3: Two infrared sensors are mounted on the top outer slide, near the mouthpiece receiver and the first slide brace, with a reflector mounted onto the second slide brace. These are connected to an x-OSC wireless I/O microcontroller and mapped to various electronic parameters in the main eBone Max patcher, such as granular synthesis, delay, and SV-filter scrolling.
![](https://assets.isu.pub/document-structure/220817154531-99404042f30efd7cb309a42159b2a93e/v1/fe0c5b3c2a2f1ab559964f46d7f16812.jpeg?width=720&quality=85%2C50)
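Mapping a slide-position sensor to several synthesis parameters at once is, at its core, a linear range conversion. The Python sketch below shows the idea; the sensor range, parameter names and output ranges are my own illustrative assumptions, not values from the eBone patcher.

```python
def scale(value: float, in_lo: float, in_hi: float,
          out_lo: float, out_hi: float) -> float:
    """Linearly map a clamped sensor reading onto a synthesis-parameter range."""
    value = max(in_lo, min(value, in_hi))      # clamp to the sensor's span
    norm = (value - in_lo) / (in_hi - in_lo)   # normalise to 0.0-1.0
    return out_lo + norm * (out_hi - out_lo)

# Hypothetical mappings from one infrared reading to two parameters:
# slide fully in -> short grains and no delay; fully out -> long grains, long delay.
slide_position = 400                                        # raw infrared reading
grain_size_ms = scale(slide_position, 0, 800, 10.0, 250.0)  # granular synthesis
delay_time_ms = scale(slide_position, 0, 800, 0.0, 500.0)   # delay line
print(grain_size_ms, delay_time_ms)  # -> 130.0 250.0
```

One sensor driving several destinations at different ranges is what makes a single physical gesture, here the slide movement the trombonist already performs, feel expressive rather than like operating a fader.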
DISCOURSE I #TheQueen uses the voice sample of Her Majesty Queen Elizabeth II from her State Opening of Parliament in May 2016 to start the suite.
DISCOURSE II #MaysDream is an electroacoustic soundscape blending falsetto and full-voice multiphonics with interactive processing.
DISCOURSE III #MrSpeaker (Figure 5) uses John Bercow's infamous cry of ‘Order!’ and processes the voice sample via live pitch-shifting and dynamic expression from the ePlunger mute.
DISCOURSE IV #TusksLament (Figure 6) depicts a plaintive cry based on Donald Tusk's controversial and infamous speech,
‘I've been wondering what special place in hell looks like for those who promoted Brexit without even a sketch of a plan how to carry it out…’.
As with previous movements, the interaction of live processing, speech and the expressivity of the performer are paramount to the fourth movement's texture.
![](https://assets.isu.pub/document-structure/220817154531-99404042f30efd7cb309a42159b2a93e/v1/8f81572f9d2f7d3d91ea50a6cc42ed5c.jpeg?width=720&quality=85%2C50)
FIGURE 4. STAGE SET-UP FOR THE EBONE AUGMENTATION SYSTEM.
DISCOURSE V #ThePeople uses voice samples collated before the EU referendum in 2016. A sequencer built into the ePlunger mute controls the rate at which the voice samples triggered by the performer play back. The full complement of the eBone system mixes with a twelve-tone structure, continuing the ambiguity between tonality and post-tonality.
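A sequencer of this kind can be thought of as a clock cycling through a bank of samples at a performer-set rate. The Python sketch below shows that logic in miniature; the sample names and the rate are placeholders of my own, not the ePlunger's actual data.

```python
import itertools

def sample_trigger_times(samples: list, rate_hz: float, n_triggers: int) -> list:
    """Cycle through a bank of voice samples, returning (time_s, sample) events
    at a performer-controlled rate. Sample names are placeholders."""
    interval = 1.0 / rate_hz                 # seconds between triggers
    bank = itertools.cycle(samples)          # wrap around when the bank runs out
    return [(round(i * interval, 3), next(bank)) for i in range(n_triggers)]

# Hypothetical bank of three pre-referendum voice clips, triggered at 4 Hz.
events = sample_trigger_times(["voice_a", "voice_b", "voice_c"],
                              rate_hz=4, n_triggers=5)
print(events)
# -> [(0.0, 'voice_a'), (0.25, 'voice_b'), (0.5, 'voice_c'),
#     (0.75, 'voice_a'), (1.0, 'voice_b')]
```

Raising `rate_hz` shortens the interval between triggers, which is how a single performer gesture can sweep the texture from sparse speech fragments to a dense chatter of voices.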
At the British Trombone Festival 2022 in October, I will perform two movements from 'Discourses of Brexit', #MrSpeaker and #TusksLament, alongside a presentation of other electroacoustic examples showing how, as a non-expert, I made the augmentation system. Although I will not have access to complete ambisonic sound diffusion, the work demonstrates the blending of live electronic processes with the acoustic sound of the trombone. I hope the presentation and performance will encourage others to experiment. I will certainly point you in the direction I found, although I am sure many more avenues await exploration. ◆
![](https://assets.isu.pub/document-structure/220817154531-99404042f30efd7cb309a42159b2a93e/v1/e138aebe9f924efc5baae4765a5854dd.jpeg?width=720&quality=85%2C50)
FIGURE 5. EXTRACT FROM #MRSPEAKER.
![](https://assets.isu.pub/document-structure/220817154531-99404042f30efd7cb309a42159b2a93e/v1/552f3993e016c34698ad34f2238b4342.jpeg?width=720&quality=85%2C50)
FIGURE 6. EXTRACT FROM #TUSKSLAMENT.