
A CASE FOR A NOTATION SYSTEM FOR LIVE VISUAL PERFORMANCE

by Anna Weisling

A dissertation submitted to the faculty of The School of Creative Arts in partial fulfillment of the requirements for the degree of Master of Sonic Arts

Queen’s University Belfast
Sonic Arts Research Centre
September 2012


Acknowledgements

I would like to express my thanks first and foremost to my mother and sister, without whom I would never have made it to this point in my academic career (or in my life at all, if we’re honest!). Their unconditional love and support know no bounds, and for that I am forever grateful. Through them I have learned just how immutable love can be. I would also like to thank the staff and faculty of Queen’s University Belfast, in particular my supervisors Pedro Rebelo, Franziska Schroeder, and Miguel Angel Ortiz Perez. Their support throughout the entirety of this course has been unwavering, their encouragement steadfast, and their kindness unmatched. For their support from half a world away I am forever indebted to Eric Sheffield (a source of strength, love, and support at all times), my extended family, Ben Willis, Jeff Herriott, Steve Gotcher, Heike Saynisch, Buzz Kemper, Mark Snyder, Luke DuBois, Brian Lucas, and Gregory Taylor. A special thank you to Bruce McClure, Christopher Biggs, and Atau Tanaka for their support and words of wisdom. No acknowledgements section would be complete without thanking the incredible staff at SARC, including Pearl Young, Marian Hanna, Ruth Walmsley, the technical geniuses Chris Corrigan and Craig Jackson, and Professors Michael Alcorn, Stephanie Bertet, Eric Lyon, Paul Stapleton, and Paul Wilson. Express thanks go to the lovely Gascia Ouzounian, whose generosity has no limits. Lastly, I would like to thank my fellow students with whom I have enjoyed (and at times struggled through) this course: Andrew Harrison (for all the hugs, despite my reflexive recoils), Augustine Leudar, Brian Dillon, Cliodhna McCarthy, Daljit Jutla, Daniel Gomes, Dermot McBride, Edward Butt, Kenny Cremin, Koichi Richard Samuels (the original one!), Connor ó Gallachóír, Lenin Pinto, Lionel Pinto, Mark Cooley, Michael Weir, Ryan May, Seth Rozanoff, and John D’Arcy. My time in Northern Ireland would have been nothing without the celebratory and conciliatory drinks, tomfoolery, and laughter that accompanied our dedicated hard work. In particular I want to extend my deepest gratitude and love to my dear Ole Adrian Heggli (for putting up with me when I was Thundercloud and, well, everything else), Romain Dumaine (for all the ‘stills’), and Brian Tuohy (for joining me in grumpy sitting sessions all over Belfast). You will forever be a part of my family.


Contents

1. Introduction
   1.1 Live Visuals: An Overview
   1.2 A Brief History of Live Visuals in Performance
   1.3 Current Practice
   1.4 Personal Practice
   1.5 The Problem
2. Why Notate Live Visuals?
   2.1 Musical Instrument Paradigm
   2.2 Notation
      2.2.1 A Brief History of Standard Music Notation
      2.2.2 A Brief History of Graphic Scoring
      2.2.3 Scoring for Visuals
   2.3 Solving this Problem
3. A Proposed System for Notating Live Visuals
4. Applied Visual Notation
   4.1 Ariel
      4.1.1 Program Notes
      4.1.2 Notation
      4.1.3 Discussion
   4.2 Spaces
      4.2.1 Program Notes
      4.2.2 Notation
      4.2.3 Discussion
5. Conclusions
6. Bibliography
7. Appendix A - Full Interview Transcripts
8. Appendix B - Interviewee Biographies
9. Appendix C - Full Scores
10. Appendix D - Sylvia Plath’s Ariel


Abstract

Within this dissertation I put forth a call for a new scoring system for live visuals. Within this rapidly developing performance practice, visualists produce complex and intricate work that is lost after an initial performance and inaccessible to other artists and performers. I will discuss the problems inherent in this deficiency and the history of similar systems across several media, explain why current practice is in need of a system of scoring for visual instruments, and put the theory into practice with two new, scored audiovisual works. The resulting successes and shortcomings of the new system will be discussed, and future developments proposed.


1. Introduction

Live visualists are increasingly involved in new media compositions, collaborating with musicians, dancers, artists, and composers in performative situations. This has produced a new performance methodology in which visuals become playable as an instrument, with an increasingly large vocabulary. There is not, however, a codified system within which artists can record, preserve, and share their visual compositions. Within my own work I often find pieces unreproducible outside of personal practice. This is not because such a system would be overly complicated, or because there is no demand for repeat performances, but because scores for visual compositions are nonexistent.

1.1 Live Visuals: An Overview

“Projector light becomes a way to provoke or manipulate or modulate both the eyes and the ears because of the dual theaters – the one we sit in and the one that the film travels through.” -Bruce McClure

The term “live visuals” is often applied to situations in which sound and visuals happen in tandem, however loosely or tightly connected they may be. A fireworks display, for example, generally involves colorful explosions timed to music (see Figure 1), and can even include elements of the musical score it is being synchronized to. This performance tradition, at least 500 years old, is an early example of visual notation in a musical context, however limited the application may be.


Figure 1: Fireworks scripted to Tchaikovsky’s “1812 Overture” from the Washington Post

Live visuals can even include subjects as abstract as water fountains, which are often precisely choreographed. The direction, force, and timing of water streams can be notated, allowing for a live spectacle of visual patterns and shapes (see Figure 2).

Figure 2: Lawrence Halprin’s score for water flow in a fountain in Seminary South Park, from DataIsNature.com

Perhaps most commonly brought to mind in our era of digital music and video are video jockey, or “VJ,” shows, at which visual performers mix video streams to reflect what is happening musically, often employing effects and live feeds and projecting the resulting visuals in dance clubs. However, this practice, which has been around for over 40 years, places visuals in a supportive role, with clear divisions between them and the musical material. VJs respond to the sound being mixed by the disk jockey, essentially providing a rudimentary form of music video with no deeper intentionality or crafting. Often the focus is on the quantity of light rather than the quality of its use. As more equipment becomes available to consumers, content becomes oversaturated while intentionality is lacking. As Luke DuBois states, “Club dance music did just fine for years with just a disco ball: I’m not sure why we suddenly need massive glowing cubes of high definition shit all over the stage to make us feel that it was a really awesome performance.”

This is all to say that live visuals are historically thought of as an extra element that accompanies music, when they can in fact play a much larger role. The term ‘live visuals’ will hereafter be defined as “[t]he element(s) in any given performance controlled or generated by a dedicated visualist[1] who has autonomous creative control over his or her instrument.” To clarify further, the modes of control employed by the visualist can be anything from pacing, effects, and footage manipulation to OpenGL generation and live video capture; however, there must be an element of creative control throughout the entirety of the piece (i.e. pressing a space bar to begin a premade video does not qualify as live visuals, nor does running the iTunes Visualizer, as there is no consistent intervention of a creative hand). This control can be as negligible as maintaining a variable playback speed in order to pace visuals accurately. To put it more bluntly, live visuals must be in the hands of someone who could, at any given moment, ruin everything.

[1] A “visualist” being defined as a performer whose main focus is the production and manipulation of video, film, or computer-generated images (read: dancers or other performers who employ a strong visual element in their art are excluded from this categorization) and to whom visuals might be considered their ‘instrument.’

Luke DuBois equated live visualists to musicians: “...once they start performing they're a million miles closer to musicians than visual artists in terms of the capacity for immediate, responsive action to other performers. The entire performance practice is akin to, and largely derived from, the instrumental performance of music, and when you're doing live visuals, you're reacting to and performing with others in the same way as if you were sitting in on a musical instrument. I think of what I do as music, even when what's happening is a projected image.”

This concept of responsiveness and performative action is an important aspect of what makes an experience live and what draws artists into collaborations across multiple disciplines. Though the connection to music is easily drawn, visuals can function in the same way as any other notated discipline.

1.2 A Brief History of Live Visuals in Performance

The course of visuals has run alongside the course of music for hundreds of years. Though they developed independently, there is no denying that sound and video have been intertwining and intermingling since their births. Indeed, even as early as the 4th century B.C. Aristotle mused that, “Colors may mutually relate like musical concords, for their pleasantest arrangements, like those concords, mutually proportionate.” (Wilfred 1947, p247) It is not surprising, then, that some of the first tools for the live manipulation of visual elements were dual instruments, with sound playing a significant role. The Color-Organ, for example, was a traditional organ modified by Bainbridge Bishop that simply matched pitches to colors (i.e. middle C produced the color red, and the eleven remaining semitones were mapped to the rest of the spectrum of visible color), even going as far as to blur the edges of the color produced in order to more accurately ‘blend’ the visuals (Bishop, 1893). This directly reflects the additive and subtractive properties of sound waves: when you play a chord, although the pitches may begin as separate streams of sound, the resultant sound that meets your ears is blended.
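A minimal Python sketch of the kind of mapping Bishop describes: anchoring middle C to red comes from the source, while the even spacing of the remaining eleven semitones around the hue wheel is an illustrative assumption rather than Bishop’s documented palette.

    import colorsys

    def pitch_to_rgb(midi_note: int) -> tuple:
        """Map a pitch class to a color, anchoring C to red as Bishop did."""
        pitch_class = midi_note % 12              # C=0, C#=1, ..., B=11
        hue = pitch_class / 12.0                  # fraction of the hue wheel
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
        return (int(r * 255), int(g * 255), int(b * 255))

    print(pitch_to_rgb(60))   # middle C -> (255, 0, 0), i.e. red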


As quickly as visual instruments could be conceived of and made, this bond between auditory and visual perception was being drawn. The creator of the Color-Organ, upon seeing a sky full of rainbows, which could be successfully mapped to a musical chord, remarked that he was “overcome, and felt [himself] in the presence of a great revelation, for [he] thought this wonderful display had been placed before the eyes of all humanity since the times of earliest history, and the riddle had not been rightly guessed nor understood.” (Bishop, 1893)

Other developments furthered the exploration between sound and image, even as closely linked as the science of cymatics, in which modal vibrations of sound can be harnessed to produce patterns in sand or other granular particles or liquids (seen in experiments by the likes of Robert Hooke, Ernst Chladni, Hans Jenny, and even Galileo[2]). While these early observations were crucial, it wasn’t until the mid-1960s, when video equipment was becoming available to consumers (albeit at a high cost), that anything resembling the video/performance art we see today emerged. Artists such as Nam June Paik and Bill Etra quickly took advantage of the analog equipment available to them, utilizing equipment such as Atari’s Video Music (1976) (Bloom, 2002) and Sony’s video Portapak (1965) (Bensinger, 1981) to perform with visuals live. It is no coincidence that this technology developed alongside analog audio equipment, propagated by the popularity of Moog synthesizers and other signal-processing hardware amongst musicians. The advent of electronics technology allowed artists to manipulate and craft sound and light in ways that had never before been possible. It is important to recognize the role of this early analog equipment, as it allowed certain characteristics of sound and image to be indelibly connected by oscillating voltage and connecting circuits. (Not so far from the crossover we see employed today with digital zeros and ones, matrices and vectors.)

[2] http://en.wikipedia.org/wiki/Cymatics


Due in part to the cost, and in part to the esoteric nature of the tools, video art was initially confined to “higher-art” circles. Video jockeys, however, were carrying visual art into the mainstream. Popularized by groups such as The Joshua Light Show and Pink Floyd, the cross-pollination of psychedelic drugs, psychedelic music, and an increasing interest in transcendent or synaesthetic experiences laid a fertile ground for extended “live event visual amplification.”[3] No longer were musical groups limited to the standard lighting rigs found on a stage (arguably an early form of live visuals, though here in a strictly supportive role). More important, though, were the terminological lines being blurred by visual artists such as Tony Potts, who had a strong background in experimental film and who was considered a member of the musical group The Monochrome Set, though his instrument--visuals--was less than traditional (Addenda et al. 2010, p22). Crediting Potts alongside the other instrumentalists in the band rightfully recognized him as a performing member of the group.

[3] http://en.wikipedia.org/wiki/Live_event_support

These early experiments with projectors and liquid light shows[4], which any casual music listener can experience in an updated, algorithmic display by opening the iTunes Visualizer, were a precursor to one of the most important developments in music, performance, and visualization alike: music videos. In 1981 MTV aired “Video Killed the Radio Star” by The Buggles, and music was never to be the same again. Though by no means the first music video, the dedication of an entire television channel to the broadcasting of music videos solidified the connection between music and video in mainstream consciousness. If audio and vision had not been tied together before, the bond was surely symbiotic by the mid-1980s. Coupled with the increasing popularity of electronic music, MTV helped to usher in a new age of audiovisual connectivity, lush with dance-club-performing DJs and spectacle-driven pop performances.

[4] Layers of colored liquid, heated and combined to produce patterns and textures, which could then be projected onto a stage or screen.


With tools in hand, the 90s saw sound artists and musicians using the laptop as a musical instrument. As stated by Atau Tanaka, “In the late Nineties...all this became really broadly accessible and available, when laptops could actually do all this and you didn’t need a big studio anymore, you could just do it at home in your bedroom and then take it on stage.” Though commonplace today, Tanaka’s use of the computer as an instrument on stage was incredibly novel at the time.

Weisling: It’s kind of striking to me as someone who’s coming into this field now where I think of a computer, a laptop performance, I have a pretty strong opinion that the laptop is the instrument. But when I hear you talking about the Sixties and Seventies and Eighties, it seems almost like you were taking something that wasn’t a musical instrument and trying to get something musical out of it, whereas now my generation maybe just sees it as, “Oh, yeah, you can play the laptop, of course.” So I think in just the short amount of time there’s been such an amazing change in just how you see this piece of hardware.

Tanaka: Right. Right. That’s a really good point actually. Back in our day we had to--there needed to be kind of a cognitive leap--let’s think of this as an instrument. But now that we accept this, that’s just the beginning. That’s not the end; that’s the beginning. So that’s when we start to make good music.

The early experiments with the Color-Organ and cymatics took the first steps toward connecting sound and vision on a visceral, physical level. The experimental analog artists of the 60s made audiovisuals performable, editable, and manipulable. The psychedelic performances of the 70s and the following MTV era succeeded in connecting them on a cultural plane that has seeped into every corner of our public and private lives, from advertisements to pop shows to avant-garde performances. Attending a Björk concert today might involve the use of audience members’ iPods to generate sound or visuals in real time. Purchasing an album of music might automatically include music videos. The pervasiveness of this contract between sound and light is not likely to be broken; rather, it will continue to evolve. “Some time in the future this color-science will be recognized and adopted. It will be used with music for divine worship. It will also be employed in teaching music and art.” (Bishop, 1893)

1.3 Current Practice

“Live visualists are musicians. Plain and simple.” -Luke DuBois

As interest focuses and narrows on the practices of mixed-media, synaesthetic, and interactive performances, visuals are increasingly taking part in the conveyance of emotion, narrative, and content, as well as reinforcing themes and aural developments. As with any collaborative work, certain elements can work together or work against each other. In many ways visuals can be a source of distraction rather than a collaborating force, and as much time must be spent on the restraint of light as on its expression. Visuals should allow for the incorporation and collaboration of other instruments and lend themselves to construction and manipulation within the larger scope of a piece.

Any performance can be broken down into a stimulation of the senses, sent from a performer and received by an audience. Whether the signals sent are via waves of sound (changes in air pressure interpreted by the ear) or waves of light (electromagnetic energy decoded by the eye) can be set aside as irrelevant. When experiencing a performance all of a participant’s senses are manipulated, including sight, hearing, touch, even smell and taste.


Bruce McClure put this relationship in eloquent terms: “The projector uses a primary light to light up the room and a smaller one to create sound.[5] Both lamps are regulated gesturally for dramatic effect by prosthetic hand puppets.”

[5] Author’s note: Mr. McClure is referring here to the optical soundtrack on a strip of film, which is read by a small light source and optical sensor and then translated into a corresponding audio signal.

Our auditory and visual sensory systems are closely linked, and with good reason: sound waves and light waves share many of the same properties, including amplitude, frequency, and direction of propagation. Gestalt principles of Figure/Ground, Proximity/Contiguity, and Closure/Continuation can all be applied equally to aural and visual psychological phenomena (Karanika, p5). What matters in differentiating these systems is the tool used and the intentionality of the performer. The bending, shaping, and guiding of physical waves must be done gesturally and with intention, even in the case of images.

When performing with these waves of sound and light, just as separate instruments in an orchestra convey independent messages or moods, visuals present information in a way that is very different from sound. Combined in them already are several voices, an implied narrative, and an abundance of psychological power. The prevalence of film in pop culture has conditioned audiences to expect to see a story on a screen, which can, at times, work against the performance. When playing visuals live a narrative is not necessarily in place; the visualist is capable of improvising with video as a musician does with notes. In situations such as these, visuals can act as punctuation, counterpoint, form and structure, and lead voices.

One common application of visuals is to introduce spectacle, and pop culture has a long history of taking advantage of visuals to increase performative prowess. Luke DuBois expanded:

I think the trick to good visualists is that they make that kinetic and performative relationship very clear in their work...with the really good ones, their choices of footage and editing tempo and effects and whatnot


are their “look,” just as a guitarist's choice of effects pedals and playing style define her “sound.” Pop music performance is all about spectacle; it has been for decades now, and that isn't changing. Artists will implement that spectacle in different ways, depending on their style, and the visual accoutrements of the performance practice tend to grow organically out of the music (DuBois, 2012).

Jeff Herriott expressed similar thoughts:

You go to see a concert and start to expect there to be some visual component; certainly with certain kinds of music, that’s always the case. Think about pop concerts in the ’70s and, you know, Genesis was famous for Peter Gabriel’s elaborate costumes and staging, and that was part of the spectacle (Herriott, 2012).

There is no denying this truth; as discussed in section 1.2, visuals have been utilized to ‘enhance’ a viewer’s musical experience for decades. It can perhaps be argued that, in the world of popular culture at least, spectacle will always be the driving force behind the application of technology and media, whether in the form of larger speakers, more projectors, 3D holograms, or other pageantries. This is, however, an area of performance media precipitated almost entirely by monetary motivation (i.e. brighter lights and louder music equal higher ticket prices and increased sales). In the world of academia, new media is rarely the product of financial incentive (if only!) but rather an exercise in artistic expression.

1.4 Personal Practice

In the short time that I have been performing as a live visualist, I have experienced many technological and social challenges that should be addressed before continuing with this discussion. The first point is this: when I began my work with visuals, I was only barely performing them. Because music-video-style performances are generally easy, accessible, and appreciated by audiences, it can be tempting to continually layer on effects, pre-edit content, and map audio to video parameters with no regard for the greater meaning of the craft. The price of the drag-and-drop accessibility afforded by programs such as Max/MSP/Jitter, Isadora, VJAMM, VDMX, and others is the tendency to settle into comfortable routines and one-dimensional performances. Until I was able to accept that there would be serious learning (and, more troublingly, serious math) required to move forward, I was performing with the equivalent of a bank of music loops. This is not to imply that there is no artistry involved in disk jockeying--on the contrary, there most certainly is--however, the majority of beginning DJs lack the refinement and craftsmanship that separates a hobbyist from an artist. While this kind of performance might be suitable for dance clubs, music videos, or other popular venues, its place in recital halls is questionable.

The second issue that should be addressed is that of the social framework within which a visual artist functions. While there is an abundance of musicians ready and eager to have a visual component within their performances, it can be difficult to find a place in the world of dance, classical music, and more academic venues. The problem is a catch-22: pop musicians want ‘music video’ style work, which is (as stated above) an incredibly un-instrumental way of using visuals, while classical musicians, having only been exposed to ‘music video’ style performances, often believe that live visuals will cheapen the aural experience. Where visualists find themselves, then, is in limbo between truly and creatively playing their instrument and finding ensembles to work with. Fortunately, I have been able to work with amazing dancers, musicians, and composers who, like me, see the potential in live visuals.


And although it can often be difficult to accept the idea of slow-moving, dark, or even stationary visual elements, it is important to craft visual practice with thoughtful expressions of both activity and restraint, neither of which is more ‘valid’ than the other.

“I hope the new frontier for live visuals will be bands. Kurt Ralske[6] and I have a long-running joke that when his kids are in high school they'll be asking him for more powerful projectors for their garage band instead of louder amps. I’m sick of 'sound-only' laptop orchestras... I want chamber symphonies of projectionists, and I want visualists to develop collaborative performance practices with other visualists, instead of having to be attached to a musical group.” -Luke DuBois

[6] American visual artist/musician, http://retnull.com/

1.5 The Problem

The issue presented in this paper is the lack of a scoring system for this style of live visuals. The capabilities are there, the desire is there, and the performance practice is becoming more widespread. However, there is no shared language and no common notation being developed. In the next sections the histories of similar systems of notation will be explored, and a new system will be proposed.


2. Why Notate Live Visuals?

2.1 Musical Instrument Paradigm

It is my belief that visuals can, and increasingly will, be played in a musical way, with their function recognized as instrumental. The parallels are there: form, content, counterpoint, collaboration, texture, gesture, plot, aesthetic progression, theme and variation, continuation...all representable by both sonic and visual means. The open-ended nature of visual software systems is, in some ways, problematic: visualists often code their own programs or use esoteric software that is not widely recognized, making a common language and performance system difficult. However, physical objects are being developed, hacked, and repurposed in order to give more control to visualists, resulting in a practice that is increasingly “playable” but remains unnotated. The De-constructible Visual Instrument for Creative Expression (DEVICE), a project completed in the spring of 2012 at the Sonic Arts Research Centre in Belfast, is just such an object. Created from bathroom piping and enhanced through the use of several sensors, the DEVICE is a handheld instrument that drives a Max/MSP/Jitter patch and can be learned by a performer of even the most basic level (see Figure 3).

Figure 3: DEVICE schematics, piping, and example of visual result

By analyzing basic interactions, such as tilting and rotating an object and the purposeful use of gesture and manipulation, a tool for ‘playing’ visuals in a performative arena was produced, with capabilities evocative of musical expression.


Because the DEVICE is modular, its component parts can independently control a visual frame that is split into three sections. The twisting of a lid or component part, the protraction/retraction of inset parts, and the screwing of one object into another make use of natural object interaction and can be mapped to universal visual parameters of control. Tilting and shaking the object have direct effects on the visuals, and the inset tubing allows the performer to navigate through the color wheel by inserting and removing the pipe to various degrees (a sketch of such a mapping follows below). These parameters of color, timing, and spacing are universal to visuals in the same way that pitch, timing, and amplitude are global units of change available on most musical instruments. The level of control becoming available to visualists creates a need for a system of scoring that is analogous to musical instrument practice.
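A minimal Python sketch of the kind of gesture-to-parameter mapping described above; the sensor ranges, the chosen parameters, and the linear mappings are all assumptions for illustration, not the DEVICE’s actual Max/MSP/Jitter patch.

    def map_device_gestures(tilt_degrees: float, tube_insertion: float) -> dict:
        """Map two DEVICE-style interactions to universal visual parameters.

        tilt_degrees:   -90 to 90, from an assumed tilt sensor
        tube_insertion: 0.0 to 1.0, how far the inset pipe is inserted
        """
        # Tilting bends playback speed around normal (1.0x); illustrative only.
        playback_speed = 1.0 + (tilt_degrees / 90.0)   # range 0.0x to 2.0x
        # Inserting or removing the pipe walks through the color wheel.
        hue_degrees = tube_insertion * 360.0           # position on the wheel
        return {"playback_speed": playback_speed, "hue": hue_degrees}

    print(map_device_gestures(tilt_degrees=45.0, tube_insertion=0.25))
    # {'playback_speed': 1.5, 'hue': 90.0}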


2.2 Notation

“[...] musical notation is always in a state of change, constantly subjected to pressures which cause it to embrace innovations, to become more explicit, more flexible, or otherwise more suited to the prevailing musical style.” –Richard Rastall

It should be noted that this summary is extremely brief, touching only lightly on milestones that occurred over the course of several thousand years. This discussion is not meant to be a comprehensive investigation into the antecedents of current notational practice, nor an examination of the systems in use today, but rather a series of reference points which play an important role in the development of my own visual scoring system. Though music notation covers a wide range of practices, styles, and systems, for the purposes of this dissertation I will adhere to the definition put forth in Kevin Lewis’ thesis, A Historical and Analytical Examination of Graphic Systems of Notation in Twentieth-Century Music: “Any codified or organized system of scripted characters or visually perceived communication that represents or implies meaning and is capable of effectively transmitting information from one party to another.” (Lewis 2010, p1)

2.2.1 A Brief History of Standard Music Notation

Music notation can be traced as far back as 2,000-plus years ago, when the need to preserve religious hymns resulted in the relatively crude systems of neumatic notation. Perhaps not surprisingly, given the topic of this thesis, early techniques involved visual representations of pitch and form, such as the neumes seen in early scores (see Figure 4). Between the 5th and 7th centuries simple upward, downward, and arched strokes indicated vocal inflection (Haines, 2008).

Figure 4: Neumes in Rich Elliott’s “Gospel of Mark”

Diastemic neumes (as early as 1000 A.D.) displayed the rise and fall of melody, designated pitch, and even dictated rhythm. These systems were strikingly open-ended, so much so that they were referred to as campo aperto, or “in the open field” (Barksdale, 1953). While rudimentary, these systems “allowed for relatively consistent musical performances” (Lewis, 2010) and aided monks in remembering the strict traditional melodies and chants that were transmitted orally (see Figure 5).

Figure 5: Diastemic neume, manuscript from



These early representations of melodic shape and direction bear an uncanny resemblance to the graphic notation that wouldn’t come for hundreds of years, and indeed to my first attempts at a visual scoring system. However, such an open framework, with nothing explicit in terms of absolute pitch, rhythm, and dynamics, would quickly prove inadequate. The religious tradition of learning through rote memorization would not be applicable to more complicated orchestration (especially that involving multiple voices and, by the early 20th century, standardized timing). In fact, it was the ability to score and analyze musical material that allowed such complex pieces to be developed, with levels of detail that would be impossible by ear alone. The development of the musical staff was monumental, and though this progression is often credited to Guido d’Arezzo (991/992-1050 A.D.), the musical staff in fact existed more than a century before him. By placing neumes on spaced lines, pitches could be explicitly defined. In the 9th century the musical treatises Musica enchiriadis and Scolica enchiriadis laid out a style of musical notation that employed a variable number of lines on a staff and separate shapes relating to pitch (Haines, 2008) (see Figure 6).

Figure 6: Excerpts from Musica enchiriadis showing polyphonic Daseian notation

The staffed system allowed not only for polyphony but also for the consistent transmission of music across different cultures. Hundreds of pages could be (and have been) written about the developments and refinements of the Renaissance period; however, it is what came after, when composers began moving beyond the staff, that is of most importance in supporting my own visual work.


Composers of the 14th and 15th centuries such as Baude Cordier were already experimenting with manipulating the staff itself to convey extramusical messages. In his piece “Belle, Bonne, Sage” (Cordier, 1350-1400), the staves were shaped into the form of a heart, graphically reflecting the topic of the composition (Northwestern, 2012). This early manipulation of the most basic of structures is a precursor to the extended technique that is common today in graphic scoring. As Nicholas Cook remarks, “A score sets up a framework that identifies certain attributes of the music as ‘essential.’” (Cook 2000, p62) However, as time moves forward, what is “essential” changes as the needs of composers, performers, and audiences grow and evolve.

2.2.2 A Brief History of Graphic Scoring

“It is well known that notation has been a constant difficulty and frustration to composers, since it is a relatively inefficient and incomplete transcription of the infinite totality which a composer traditionally “hears,” and it should not be at all surprising that it continues to evolve.” –Earle Brown

Between 1400 and 1650 extended scores gained traction and became increasingly popular in experimental and avant-garde musical circles (Galganski, 2006). Augenmusik, as it is sometimes referred to, placed importance not just on the musical notes written, but also on how they were conveyed visually. This idea, that visual or artistic symbols could be utilized to enhance not only the piece itself but the attitude and translation of the performer, greatly influenced both performance practice and extended scoring. It placed value on an abstraction of information, not unlike literary techniques such as ergodic text or concrete poetry (see Figures 7 and 8).


Figure 7: David F. Wallace uses a system of boxes and arrows to manipulate the reader’s experience.

Although more traditionally-minded composers such as Dennis DeSantis argue that “Augenmusik is useless at best and destructive at worst. Baude Cordier, George Crumb and others were wrong to use it” (DeSantis, 2011), this argument can easily be countered with a simple nod toward earlier notation, and indeed toward ‘traditional’ notation as well, which is inherently visual, no matter how transparent we have come to see it. As stated by composer Pat Munchmore, “The [traditional] system is actually brilliantly visual; higher pitches are higher on the staff, longer notes generally take up more space than shorter, simultaneous pitches are stacked together.” (Munchmore, 2011)

By the beginning of the twentieth century musical notation “was complete: it had all the signs and directives necessary to notate the great Romantic works.” (Rastall, p237) As more and more information became standard on a traditional score (including written instructions, which are arguably indicative of serious flaws in even the most detailed systems of notation), new works became characterized by strict reproduction and a rigidity which rendered pieces masterable by only the most dedicated of professional musicians.


Notation is thus charged with striking a balance between conveying the extratechnical desires of the composer and overnotating to the point of unplayability. More and more modes of control were being notated, but the pendulum always swings two ways. As art practices developed in the Sixties and Seventies, particularly in abstract painting (with artists such as de Kooning, Rothko, and Pollock at the forefront), music developed as well. Just as the visual arts shifted focus from realistic depictions of scenes to abstracted interpretations of ideas, images, and themes, music shifted its focus from pitch and duration to abstractions of such: timbre, texture, and extended technique. Composers such as Morton Feldman and John Cage crafted elaborate graphic scores, leaning heavily on color, shape, note density, and form in their notation, focusing less on precision and more on expression. This radical style of composing allowed a piece to be interpreted and reinterpreted with each new performer or performance. Gone were the days of carbon-copy performances and excruciatingly scripted music, giving way to a new era of artistic interpretation. Though similar to Augenmusik, graphic scoring deserves a distinction: its notation did more than convey messages; it attempted to visually embody the musical content itself (see Figure 9).

Figure 9: "Multiamerica" by Ishmael Wadada Leo Smith.



2.2.3 Scoring for Visuals

“Whether in an evolving species in the natural world or a cultural project like music, new uses are found for existing structures, previous transformations are transformed again, and things gradually lose their resemblance to their ancestors.” –Scott Johnson

The roles that visuals can play are varied and plentiful; however, they are rarely used to their full potential. Rather, they tend to be pigeonholed into the category of “music video,” where they either directly reflect the narrative of the music being played (usually popular or rock music) or serve as a kind of ambient lighting scheme with no real relation to the sound whatsoever. This is an unfortunate waste of a very powerful tool. What visuals allow us to do is to buffer moments and communicate ideas to an entirely separate but connected system of perception. Just as we can play back audio samples, referencing what came before, so can we reference earlier movements, gestures, and scenes. The human sensory system is capable of processing an astounding amount of parallel information, and it is a shame not to explore the full capabilities of our perception.[7]

[7] Excerpted from documentation for a visual performance from May 2012.

It is no stretch to say that scores for live visualists are either severely lacking or entirely absent. It is confounding to see such vivid visual representations of aural experiences, complete with color, shape, and form, but no congruent exploration into scoring for visuals. Even as live visualists are billed as musicians in performance notes, bibliographies, and even on albums, scoring systems remain nonexistent. A search for such works resulted in only two scores for visuals, one of which is questionable as a means to reproduce a performance (see “Ichi-go Ichi-e” and Fran Lightbound’s scores below).


“Ichi-go Ichi-e” by Morphogenesis: “Performed by two performers, both having equal influence on both sound and image and following pre-written score – an attempt at writing notation for audio-visual piece”

Fran Lightbound’s live scores, drawn in response to live visual art: “Exploring how placing similar score limitations on the visual artists, the range of images seen here are a result of taking different approaches to responding to the live work.”

2.3 Solving this Problem

“I think about taking people on a journey.” -Mark Snyder

“Visuals help expose structural and material relationships that may be very difficult for people to recognize or have a sense of based on the audio alone.” -Christopher Biggs

The absence of a scoring system is problematic for several reasons, the first being that work is lost after a performance. Although it is common to see recordings of live performances involving visuals, a recorded version of the performance is so far removed from the live experience that all of the organic, reactive liveness is stripped away from it.


This ‘liveness’ makes the piece human and differentiates a playable instrument from a reproduction system. Many composers who work with electronic music find that pieces that include an acoustic element, a live performer, are much less sterile or calculated, creating a richer and more satisfying experience. Christopher Biggs explained:

For a recent piece (Amass for clarinet and computer[8]) I wanted the amplitude of the live clarinet to control aspects of the video. I had this all working, but between the audio processing, triggered audio files, and triggered video files, I couldn’t get it running well all at once on a single laptop. The result of this is that the computer performer has to trigger almost 200 cues in the 13-14 minute piece and I did sacrifice the integral, live relationship between the performer and the video that I wanted to be apparent at moments in the work (Biggs, 2012).

[8] http://www.youtube.com/watch?v=nSrLg17scF0

This is certainly not the most efficient way of composing and performing, nor is it reflective of organic performance. Another composer, Jeff Herriott, states of his practice and experience with visual artists: “I usually want there to be some sort of live sound; I feel like you need that human element--that organic element. And I do think that in a lot of the cases that’s what visuals are lacking. If they’re there with live musicians I think that can make up for it, because of the live musicians.” These words, “organic,” “human,” and “live,” highlight important considerations for any performance. Rather than depend entirely on the presence of an instrumentalist (in the case of Ariel the instrument was, in fact, a computer), allowing for creative decisions throughout the piece became a way of preserving that organic element.

Secondly, there is no way to transmit performances. If there is a desire to see a piece performed in a new venue and the original visualist is unavailable, there is no way for the work to be shown.


This can be solved by creating a patch that runs itself, but again this destroys the appeal of the liveness and essentially renders the piece a music video rather than a live instrument performance. If a score can be developed and practiced by new performers, the accessibility and reach of new work can be expanded beyond one-time performances and single interpretations. Furthermore, the ability to reinterpret a piece of music, dance, or visual art is an important element of performance. It is the reason we attend live events rather than simply going to the cinema. There is something imparted to every piece by the performer, a retranslation of an original idea that, though subtle, is very appealing to witness.

Scores also allow composers to share ideas. When working together, especially across multiple disciplines, the transmission of notated work allows individuals to view the scope of the piece in a way that practicing the piece together does not. It offers a bird’s-eye view not only of the content of the piece but of the form and the roles played by varying elements. Within different artistic disciplines are very distinct methods of dealing with time, space, shape, form, and movement, and collaborating with an artist outside of one’s realm of specialization is an important tool for expanding one’s artistic practice.

Visuals can play a very musical role in new pieces. Christopher Biggs put it this way: “I try to establish very direct relationships between my visual and audio materials. These are often very consistent over time, especially in a work like The Ends of Histories[9] where there is [sic] about five types of sonic and five paired types of visual materials. I do think about visual motion and sonic gesture (in terms of panning, pitch, and amplitude in particular).” Atau Tanaka expressed a similar thought: “Silence is important in music. But look at how much music there is where people forget that. So it’s the same [in visuals]. A kind of silence is important in visuals, but a lot of people are going to forget that.”

[9] (2011) http://vimeo.com/15663756


This point cannot be emphasized enough: the parallel between silence and darkness, noise and light. For artists and composers, the organization and balance of extremes is what transforms noise into music and light into art. If these parallels are similar enough, as in the case of visuals and sound, then their playability should be similar as well. More and more ensembles embody this very idea, such as the French group Metamkine (Herriott, 2012), Sensor Sonic Sights (Tanaka, 2012), and Morphogenesis.

In essence, it is in the connection of materials and the intentionality of the relationship that musicality makes itself known. By functioning with the mindset that audiovisuals are a tool like any other instrument, they can be paired or set against one another to great effect. Many composers explore this relationship, referencing gesture, form, and content as driving forces. Atau Tanaka stated of his trio, Sensor Sonic Sights: “Three musicians in a trio, just two making sound and one’s making image. In a very parallel and similar way, we all had gestural instruments, we all act as a platform for the software...We’ve composed the pieces that we performed to have the sound go with the image, and if the movement of the image went along with the evolution of sound, it was part of the composition and mostly in the performance.”

Within even the strictest of systems, what often makes an instrument or performance engaging and successful are the subtle nuances imparted by the musician or performer rather than the minutiae of the score itself. Indeed, if one were so inclined, a score could be drawn and programmed into a Max/MSP/Jitter patch that would run unaided, perhaps resulting in more predictable, flawless, and consistent performances. However, a visualist is able to adapt as any performer does, to make decisions in the moment and respond to developments that are not scripted. They can ‘feel’ changes in timing, phrase gestures, and drive a performance in new directions.


Beyond the spectacle that visuals and lighting can produce (as addressed in section 1.2) is another important consideration for any composer or performer: accessibility. This is applicable in two senses: perceptual accessibility, in the form of audience understanding, and physical accessibility, the availability of media via different modes of distribution. Especially prevalent in the field of electroacoustic music is a sense of unintelligibility for the audience. The use of music technology, coupled with the esoteric nature of new media (think of noise, algorithms, distortion, and lack of melody, all of which are common in contemporary compositional styles), can lead to a very disconnected audience experience and a steep enjoyment curve. While electronic sounds are able to enter the ears of listeners with no direct connection to any mechanical, physical action which might produce them, visuals are surrounded by a contextualization that is highly abstract and instantaneously interpreted by the eyes with no need for further dissection. We accept Jackson Pollock’s paint drippings as irreplaceable and Picasso’s fragmented depiction of the human face as invaluable commentary on how we view our world. Furthermore, we enter a gallery and appreciate visual art without feeling as though we have missed out on the experience by not having seen it being painted.

Many composers take heed of this role visuals play. Christopher Biggs explains: “Music with a video is much more interesting when seen outside of a live performance, on the internet for example. My music is often somewhat dissonant and noise-based and audiences are generally much more accepting of these features when they are accompanied by a visual correlate. I think folks often come to appreciate sounds that they might consider unmusical if they are accompanied by visuals.”

The idea that light can occupy a space and reinforce both the performance and an audience’s own physical presence during the experience touches on the tremendous amount of psychological clout inherent in visual stimuli.


Within my own work I often struggle to strike a balance of intensity and atmosphere; too much visual stimulation will distract, too little will bore. If the performers can, as Bruce McClure puts it, narrow the funnel on the corporeal nature of the moment in which the audience sits, it allows the audience to let go and be transported wherever the performers choose to take them (McClure, 2012).



3. A Proposed System for Notating Live Visuals

Music has historically involved organizing the more abstract elements of pitch and duration; only somewhat recently has the figurative (found sounds, environmental sounds) become acceptable as music. Video has perhaps taken the opposite path, with the organization and manipulation of figurative[10] imagery through captured footage or stills used and accepted before the organization of more abstract organic shapes, colors, and computer-generated images. However, both systems carry an unshakable evolutionary set of advantages and disadvantages (think of the shock felt when faced with an unexpected flash of light or piercing noise) and many converging cultural and artistic connotations (Karanika, p3).

[10] Representing forms that are recognizably derived from life.

In order for a system for scoring visuals to be developed, considerable thought must go into its readability and practicality. Ideally, the system should be capable of working in conjunction with the aural elements of a piece. With this in mind, my system makes use of transparent, overlappable scores which lay the visual notation on top of the existing musical script (see Figures 10 through 12).

Figure 10: Ariel Graphic Score

Figure 11: Ariel Visual Score

Figure 12: Ariel


For my portfolio I commissioned two pieces, each strikingly different from the other. The first was a collaboration with Belfast-based composer Andrew Harrison, inspired by a poem by Sylvia Plath. This piece was a fitting candidate for visuals not only due to the vivid imagery put forth by Plath but also because of the colorful language of the poem itself. The second piece was a collaboration with American composer Eric Sheffield. Sheffield was chosen in part because of his musical background as a percussionist and electroacoustic performer. With experience in both acoustic and electronic arenas, Sheffield was able to create a piece entitled Spaces, which coupled the organic nature of instrumental performance with extended electroacoustic techniques. While Ariel is performed exclusively with laptops, Spaces requires an instrumental performer on the glockenspiel. This was a conscious decision made in order to exercise and test the usability of a visual scoring system in contrasting settings that challenge the performer to respond to real-time performative changes while maintaining the integrity of the written piece.

When scoring for Ariel and Spaces I attempted to give clear direction without eliminating the flexibility of the piece itself. This flexibility is an important element in any instrument. Scores, in a general sense, are created in order to instruct and guide musicians, sometimes with strict rigidity. The nuances of performance, however, carry a great deal of potential for expression and individualization. Atau Tanaka speaks of this when dealing with musical performance:

You have many different styles of performance, even on the same instrument or even on the same piece. You might take a violinist who -- like Nathan Milstein who will play Bach violin suites in a very cold and calculated way. Well, then you take a bravura sort of romantic violinist like Jascha Heifetz, and it's -- his whole body is getting into it. And you have everything in between (Tanaka, 2012).


Ultimately, the goal of developing a visual scoring system was three-fold: to preserve pieces for future use/reference, to attempt to lay out a universal language that might be used by future live visualists, and to exemplify the musical capabilities of visuals in a performance setting.



4. Applied Visual Notation

While music notation has the capability of polyphony and extreme precision, it should be understood that such specificity developed over the course of thousands of years. While musicians can read pitch, duration, timbre, dynamics, rhythm, and gesture nearly simultaneously, visualists have neither the hundreds of years of history nor the common language needed to support such a system. What few variables do exist, however, have been defined and notated accordingly. The following parameters were defined as universal for artists working within the realm of live visuals:

   Hue (RGB Values)
   Brightness, Contrast, and Saturation
   Playback Speed and Direction
   Frame Division (Spacing)
   Blur
   Sharpen

Figure 13: Table of universal visual parameters
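Taken together, these parameters describe the complete state of a visualist’s instrument at any moment. A minimal Python sketch of Figure 13 as a data structure; the value ranges and defaults are assumptions chosen for illustration.

    import dataclasses
    from typing import Tuple

    @dataclasses.dataclass
    class VisualParameters:
        """The universal parameters of Figure 13 as one instrument state."""
        hue: Tuple[int, int, int] = (255, 255, 255)  # RGB values, 0-255 each
        brightness: float = 1.0
        contrast: float = 1.0
        saturation: float = 1.0
        playback_speed: float = 1.0  # negative assumed to mean reverse playback
        frame_division: int = 1      # how many sections the frame is split into
        blur: float = 0.0
        sharpen: float = 0.0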

Visuals function, by default, in the schema of time and space. Within that framework a performer is theoretically able to ‘play’ visuals as one would play a musical instrument. While traditional musical instruments function in the spheres of pitch, time, and voicing, visuals lack these universal terms within which to perform. By isolating time, frame spacing, and color as global parameters, visuals become playable in a very musical way, allowing for gesture, form, and playability. The symbols used in the scores for Ariel and Spaces went through several revisions, but ultimately fell into the following scheme:


Function              ASCII
Master Crossfade      /\ \/
Crossfade             <>
Playback Speed        |>
Blur                  =
Hue                   O
Whiteout              [/]
Sharpen               ^
Audio-Reactive        ~
Freeform              *
Null                  (/)
Time (Param/Time)     :

Figure 14: Table of notation (ASCII forms; the hand-drawn symbol column is not reproduced here)

Considerations were taken regarding the notation, as it was important to be able to quickly identify the meanings of the symbols, to allow for multiple “modes,” and to be able to represent the notation using a computer keyboard. Most symbols were chosen for their typographical counter[11], which allows for a numerical or text variable to be dictated. For example, a crossfade of 50 percent can be displayed as “<.5>”, and a playback speed of twice normal can be written as “|2>”. Notating the crossfade between video feeds was accomplished through solid lines that traverse between the upper and lower halves of the video tracks, with the middle being 50 percent between the two feeds. Slopes between values are achieved graphically and allow for the convenient depiction of frantic changes or slow crossfades. Chromakeying between Videos 1/2 and Videos 3/4 is conveyed by a hashed area between the tracks with an approximated chromakey value, i.e. “/.5/”. In addition, each symbol’s counter can be nullified through the use of “(/)” and can also be mapped to audio through the use of the “~” symbol.

[11] The ‘counter’ being the area of negative space enclosed within a letter or symbol: the empty center of the letter “O” or “D,” for example.

Many symbols make reference to already existing figures. The playback figure is an abstraction of the traditional “play” button on any DAW transport, tape deck, or CD player. The blur figure, when written, looks like two waves, a reference to the Max/MSP/Jitter object used to create the effect, jit.wake, while the tilde used to designate audio-reactivity directly parallels Max’s use of the same symbol for audio objects. The circle used for hue combines the traditional color wheel and a rainbow arc. Because of the connectivity between the aural and visual elements, it was decided that the visual score should be overlappable with the musical score (see Figure 15) so as to allow for congruent timing and form. Not only can the performer follow the music in conjunction with their own score, but they can also rely on the more concrete musical timing system.



Figure 15: Graphic score (purple) with visual score (green)

One additional issue with visual performance systems is voicing. Whereas traditional musicians are limited to their single instrument, electronic musicians and visual artists face the problem of multiple streams of data that can be mixed and amalgamated to seemingly infinite degrees. When designing a formal layout for the scoring system, several tracks needed to be available for the scripting of individual parameter levels, content, playback speed, and mixes. However, a global staff was also needed in order to delineate master fade levels, global effect parameters, and timing. For example, in the first draft of the score for Ariel, a three-staff system was attempted, with video feeds 1 and 2 on the topmost level and video feeds 3 and 4 on the bottom, separated by a chromakey track in the middle (see Figures 16 and 17).

Figures 16-17: First draft of scoring system

This proved problematic, as more global parameters presented themselves in the notation, rendering a cluttered score with multiple redundancies. A new framework emerged, with video feeds 1 through 4 on the upper staff and a larger global staff underneath it (see Figure 18).


Figure 18: Second draft of scoring system
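To make the two-staff layout of Figure 18 concrete, here is a minimal sketch of it as a data structure. The class and field names (Track, GlobalStaff, Score, events) are inventions for this example, not terms from the scoring system itself.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """One video feed's staff: per-feed parameter events (hypothetical model)."""
    feed: int
    events: list = field(default_factory=list)  # e.g. ("crossfade", 0.5, time)

@dataclass
class GlobalStaff:
    """Master fades, global effect parameters, and timing benchmarks."""
    events: list = field(default_factory=list)

@dataclass
class Score:
    tracks: list               # video feeds 1-4 share the upper staff
    global_staff: GlobalStaff  # larger global staff underneath

score = Score(tracks=[Track(feed=n) for n in (1, 2, 3, 4)],
              global_staff=GlobalStaff())
```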

This two-staff framework provided a fairly balanced division between areas of activity, in which the performer can easily separate individualized and global instructions and maintain the creative flow of a piece while still identifying benchmark moments. Beyond the content and instrumentation of the two pieces produced this semester, another important difference played a major role in my development of the visual scoring system: Ariel's musical content is notated through a Stockhausenesque graphic score, complete with visual depictions of the soundscape and component aural elements, whereas Spaces is notated in a more traditional style, with pitched notes on traditional staffs and other common musical symbols.

4.1 Ariel

4.1.1 Program Notes

"Marshall McLuhan said, 'The content of any medium is another medium.' Ariel is an audio-visual piece, a piece in two mediums, about a third medium; the content of Ariel is the poetry of Sylvia Plath."12

The words of Sylvia Plath were firmly in our minds when constructing this piece, from the color palette and video content to the pacing and recitation of the sounds themselves. Harnessing the strikingly visual language of the poem itself,13 the piece attempts to convey the several layers of Plath's work: a recounting of her daily horse-ride, yes, but also a metaphor for her life, for the tedium of existence, and for her impending death. From the tenderness of her morning routine to the chaos of her descent into mental illness, Plath maintained extraordinary beauty in her work; we hope to exemplify some of the same. Color is specifically referenced several times, in lines such as "Stasis in darkness/then the substanceless blue," "White/Godiva, I unpeel," and "Suicidal, at one with the drive/Into the red." These explicit references, coupled with the narrative nature of the piece itself (it was heavily driven by the vocal recitations of the poem), provided a solid foundation upon which to compose a visual element. In many cases of video performance a live video stream is utilized to reflect the fact that there is a live or real-time element. Although useful, I felt as though this direct, one-to-one mapping was simply too heavy-handed for the new works. I instead opted for audio-driven elements within each piece at differing levels of intensity. For Ariel I felt that the narrative nature of the text could be highlighted through the use of amplitude tracking on the vocals only, which was then mapped to the sharpness of Video 1. Footage of a sun recurs several times throughout the piece when vocals are present, expanding and becoming more vivid as the speech increases in intensity. The decision was also made to reinforce the more gentle, acoustic sections of the piece, which manifested in piano gestures, with content that reflected a more literal depiction of the poem's subject.
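The amplitude-to-sharpness mapping described above was realized in Max/MSP/Jitter; purely as an illustration of the idea, a sketch of it in Python might look like the following, with the smoothing constant and scaling chosen arbitrarily for the example.

```python
def envelope_follower(samples, smoothing=0.99):
    """Track the amplitude envelope of a vocal signal (simple peak follower)."""
    env, out = 0.0, []
    for s in samples:
        rectified = abs(s)
        # Rise immediately on louder input, decay slowly otherwise.
        env = max(rectified, env * smoothing)
        out.append(env)
    return out

def amplitude_to_sharpness(envelope, max_sharpen=1.0):
    """Map vocal amplitude (0.0-1.0) to the sharpen amount on Video 1."""
    return [min(e, 1.0) * max_sharpen for e in envelope]

# Louder recitation -> sharper, more vivid image; silence -> no sharpening.
vocals = [0.0, 0.2, 0.8, 0.5, 0.0]
sharpen_per_frame = amplitude_to_sharpness(envelope_follower(vocals))
```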

12 Taken from composer Andrew Harrison's description of Ariel.

13 See Appendix D for the full text of the poem.

4.1.2 Notation

Making use of the system discussed in Section 4, Ariel is scored with a great level of detail. Because the graphic score for the musical part of the piece very clearly conveys the aural elements, overall amplitude, and textual/vocal way-points, the visual notation is readable even at high density.


In order to differentiate between visual content on video tracks, color-coding was used, with alternating blue and red elements. Global changes to effects, pacing, or other visual parameters are given more real estate on the score for easy legibility. The global parameters are often grouped together in the general area in which the visualist should act upon them, putting more control in the hands of the performer, who is better equipped to make performative decisions when they deem it appropriate. Not unlike jazz lead sheets in this sense, the composer's wishes are clear but allow for a certain amount of interpretation and improvisation within the performance.

4.1.3 Discussion

Reflecting on the creative process and resulting score that came from notating Ariel, it is my conclusion that live visuals benefit strongly not only from a scoring system, but also from having a "hard-copy" score. The common rehearsal technique often employed by collaborators, in which they meet, show each other ideas, and run through material, benefited greatly from the ability to notate what was successful and unsuccessful in both the visual and aural components of the piece. Furthermore, Harrison and I were able to look objectively at the notated material and identify problematic areas, as well as clearly understand why they were issues. When facing the potential to have Ariel performed at festivals, concerts, or conferences, the availability of a written score that can accompany a recording of the piece is invaluable. For the first time in my professional life I feel as though other individuals have access to my work and that it can be performed successfully by anyone who might be interested. Without being overdramatic, I must express how liberating and validating this simple possibility is for me as an artist.



Lastly, it should be stated that, even if no one is ever interested in performing this piece, and even though Andrew and I will never need to sit down and flesh out ideas again, the possibility of performing Ariel in the future is made infinitely easier and more probable by the notated score that I now have. Whereas in the past I would need to re-learn Max/MSP/Jitter patches, read countless notes and comments, and watch and rewatch video recordings of performances of the piece, I now have in my hands explicit, clear, and understandable directions with which I can easily refamiliarize myself with the piece.

4.2 Spaces

4.2.1 Program Notes

Spaces was inspired by two versions of space: the literal distances that we perceive between remote physical objects and the figurative expanses that separate hearts and minds. The long pauses and held tones were utilized to invite both the performer and listener to reflect on these spaces as the resonances fill the physical area around them. All of the electronic sounds are relatively simple manipulations of glockenspiel recordings. As the title suggests, Spaces is primarily about silence, about what is between the notes. Utilizing pitch detection, feedback, and (lack of) color as fundamental visual parameters, the video accompaniment for this piece is strikingly sparse, reflecting the nuances and atmosphere created by the soundscape. This unique piece presents its own distinct set of challenges and appeals. Sheffield's goal was to create a piece that explored silence as much as sound, and the sparseness of the composition is reflected heavily in the visual element. While Ariel utilized narrative video, color, and speed as primary elements of change, Spaces was approached very differently. Because it is so sound-focused, the audio drives the majority of the visual elements. Detection of fundamental pitch occurs throughout the entirety of the composition and affects the visuals in varying ways. The detected frequency is also drawn visibly onto the image, not as a waveform, but rather as an oscillating element that traverses the width and height of the visual matrix (see Figure 19).
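As an illustration of this mapping (the piece itself runs in Max/MSP/Jitter; the function name and pitch scaling below are assumptions for the sketch), the detected fundamental can set the oscillation rate of a point sweeping across the frame.

```python
import math

def oscillator_path(pitch_hz, duration_s, fps=30, width=640, height=480):
    """Trace an oscillating element across the frame; its oscillation rate
    follows the detected fundamental pitch (illustrative mapping only)."""
    points = []
    n_frames = int(duration_s * fps)
    for frame in range(n_frames):
        t = frame / fps
        x = (t / duration_s) * width               # sweep across the width
        # Scale the pitch down so the motion is visible at frame rate.
        y = height / 2 * (1 + math.sin(2 * math.pi * (pitch_hz / 100.0) * t))
        points.append((x, min(y, height - 1)))
    return points

# A higher glockenspiel note produces a faster vertical oscillation.
path = oscillator_path(pitch_hz=880.0, duration_s=2.0)
```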

Figure 19: Spaces screen captures

This simple nod to the root of the sounds goes one step beyond one-to-one mapping while maintaining a clear connection to the sound itself. DuBois references the benefits of this technique when speaking of his poiesis: "...Usually what I'm trying to show you is what I'm 'hearing' in a visual format. I prefer, when possible, to work with live cameras because I really feel that there's something compelling to having real-time visual input manipulated with real-time sonic input" (DuBois, 2012).

4.2.2 Notation

Notationally, Spaces is very sparse, in both the musical and visual scores. No written instructions are needed for the visualist, as the only parameters notated are which direction to draw the signal input, the level of blur, and the mix between live processed video and a pre-rendered visual element. The elements of crossfade, blur, master fade, audio reactivity, and freeform were, however, applied to the score. Unlike Ariel, only two video tracks were needed in the score, which made it simple to integrate into the music staff provided to me by the composer. When viewing the overlapped scores there is a seamless connection between sonic and visual scored elements, with the visual tracks nesting directly into the third staff. This allows the visualist to easily follow the glockenspiel part as well as his or her own score, without a need to flip between pages or scan a large area of information. Also beneficial is the ability to time the onset of notes and visual changes very accurately, as the visualist and musician can reference the same sections of the piece simultaneously and anticipate upcoming changes.

Figure 20: Spaces score

4.2.3 Discussion

Although I cannot say that the score for Spaces is quite as useful as was the case for Ariel, I can nonetheless be assured that no performance of the piece by me will occur without the score in hand. I attribute the slightly less successful nature of this score to the simplicity of the piece itself: the work that goes into notation is, at some point, outweighed by the simplicity of commenting in the patch itself. That threshold is surprisingly high, however, and had any more elements of control been introduced into the piece, a score would have been immensely practical. Still, it should be noted that if any outside performer had the desire to perform Spaces, having a hard copy of the score would increase both the understandability and accessibility of the piece, rendering it a much more pleasant experience to learn and play.



5. Conclusions

Within this dissertation I have put forth a call for a new visual scoring system. As contemporary composers, performers, and artists have harnessed live visuals for gestural, formal, and compositional purposes, a need for notation has arisen. Several reasons for this were identified: the preservation and transmission of pieces, the capacity for multiple performances and reinterpretations, the creation of a tool for composition and form development, and the ability to develop and share ideas among collaborators. These arguments were examined in detail and supported by multiple contemporary audiovisual composers from different areas of expertise. I have also referenced several systems of notation that have been developed for these same purposes, including traditional and nontraditional music notation as well as more obscure, object-related scripts, and drawn clear connections between those instrumental systems and scoring visuals in a similar way. The argument put forth is that live visual performance methodology offers more than enough creative control and expression to warrant scores and notation. Also addressed in this dissertation were potential problems regarding the development of such a system, not the least of which is a lack of precedent. The unavailability of a universal language when working within this framework is also problematic, though not without a solution. Finally, I have put forth a proposed system for scoring live visuals. Through this system I have learned that hard copies of visual scores have multiple benefits for composers, performers, and visualists alike. A score increases playability, allows others to reproduce a piece, and even makes refamiliarization easier for the original composer. It is for these reasons that I hope that other live visualists will adopt, adapt, and rework this framework to fit their needs, as well as continue the development of the practice.



Though some might say there is no need, or argue that visuals are simply fancy lights that serve no further purpose, I am optimistic that in the near future I will find myself in a venue performing a new piece by a visual composer I may never have met, reading from a well-rehearsed visual score.



6. Bibliography

375 Wikipedians (2010). VJing. Geneva: Greyscale Publishing. p22.

Barksdale, A. (1953). Medieval and Renaissance Music Manuscripts: The Toledo Museum of Art; January and February, 1953. Toledo.

Baur, D., Seiffert, F., Sedlmair, M., Boring, S. (2010). The Streams of Our Lives: Listening Histories in Context. IEEE Transactions on Visualization and Computer Graphics, 16(6), 1119-1128.

Bensinger, C. (1981). The Video Portapak. In: Sponsel, K. and Sundstrom, J. The Video Guide. 2nd ed. Santa Barbara, California: Video-Info Publications. 155-186.

Biggs, C. (2012). Interview on Contemporary Audiovisual Performance Practice. Interviewed by Anna Weisling [via email], Belfast, UK/Michigan, USA, July 31st 2012.

Bishop, B. (1893). A Souvenir of the Color Organ, With Some Suggestions in Regard to the Soul of the Rainbow and the Harmony of Light. New York: The Devinne Press.

Bloom, S. (2002). The Incredible, Incredible Story of Atari — From a $500 Lark to a $2 Billion Business in 10 Short Years. Available: http://www.landley.net/history/mirror/atari/museum/cut2pin.html. Last accessed 16th Jul 2012.

Campbell, D., Herskovits, M., Segall, M. (1968). The Influence of Culture on Visual Perception. In: Toch, H., Smith, C. Social Perception. New York: Van Nostrand Reinhold. 1-5.

Clauss, J., Cousot, S., Duque, A., Joy, J., Laforet, A., Lauvin, G., Roquigny, A., Sinclair, P. (2012). Sonification (what, where, how, why). Available: http://locusonus.org/w/?page=Symposium+sonification. Last accessed 16th April 2012.

Cook, N. (2000). Music: A Very Short Introduction. Oxford: Oxford University Press.

Cordier, B. (c. 1350-1400). Belle, Bonne, Sage. Public Domain (Werner Icking Music Archive). Music score.

CRiSAP. (2012). Creative Research into Sound Arts Practice. Available: http://www.crisap.org/index.php?improvisation. Last accessed 20th April 2012.



Cullen, B. (2010). A Portfolio of Audiovisual Compositions for the 'new media everyday.' Unpublished doctoral dissertation, School of Music and Sonic Arts, Queen's University Belfast, Northern Ireland.

DeSantis, D. (2011). Notation: A Manifesto. Available: http://www.dennisdesantis.com/2011/03/01/notation-a-manifesto. Last accessed 15th Aug 2012.

DuBois, R. L. (2011). A More Perfect Union. Available: http://perfect.lukedubois.com/. Last accessed 15th May 2012.

DuBois, R. L. (2012). Interview on Contemporary Audiovisual Performance Practice. Interviewed by Anna Weisling [via email], Wisconsin, USA/New York, USA, July 5th 2012.

Fry, B. and Reas, C. (2007). Processing: A Programming Handbook for Visual Designers and Artists. Massachusetts: MIT Press. p293.

Galganski, M. "Towards a Solvent Definition of 'Augenmusik.'" Seminar Research in Music, Temple University, Philadelphia, PA.

Haines, J. (2008). Musical Quarterly, 91(3-4), 327-378. doi:10.1093/musqtl/gdp002. First published online: May 4, 2009.

Herriott, J. (2012). Interview on Contemporary Audiovisual Performance Practice. Interviewed by Anna Weisling [in person], Whitewater, Wisconsin, USA, July 9th 2012.

Karanika, M. Visual Perception: A Cognitive Process. Goldsmiths, University of London.

Kallmann, M. (2001). Object Interaction in Real-Time Virtual Environments. Unpublished doctoral dissertation, Swiss Federal Institute of Technology, Lausanne, Switzerland.

Lewis, K. (2010). "A Historical and Analytical Examination of Graphic Systems of Notation in Twentieth-Century Music." MA thesis, University of Akron.

MediaArtTube (Producer). (2008, April 7). Videoplace '88 [video]. Retrieved May 10, 2012 from http://www.youtube.com/watch?v=dmmxVA5xhuo.

McClure, B. (2012). Interview on Contemporary Audiovisual Performance Practice. Interviewed by Anna Weisling [via email], Wisconsin, USA/New York, USA, July 11th 2012.



Munchmore, P. (2011). Scoring Outside the Lines. Available: http://opinionator.blogs.nytimes.com/2011/08/03/scoring-outside-the-lines/. Last accessed 19th Aug 2012.

Nesa. (2011). Ichi-go ichi-e. Available: http://morphogenesis.eu/?p=9. Last accessed 5th Aug 2012.

Northwestern Block Museum. (2012). History: Timeline. Available: http://www.blockmuseum.northwestern.edu/picturesofmusic/pages/history.html. Last accessed 14th Aug 2012.

Pritchard, T. (2011). Fran's Playdate Remnants. Available: http://onthestageofthepresent.posterous.com/49334169. Last accessed 10th Aug 2012.

Psimikakis-Chalkokondylis, L. (2010). An Investigation into the Extent to Which Developments in Notation in the 1950s and 1960s Have Informed Current Compositional Practices. Unpublished bachelor's dissertation, Guildhall School of Music and Drama, London.

Rastall, R. (1982). The Notation of Western Music. New York: St. Martin's Press.

Martani, S. (2003). The theory and practice of ekphonetic notation: the manuscript Sinait. gr. 213. Plainsong and Medieval Music, 12, 15-42.

Snyder, M. (2012). Interview on Contemporary Audiovisual Performance Practice. Interviewed by Anna Weisling [via Skype], Wisconsin, USA/Virginia, USA, July 30th 2012.

Kramer, G., Walker, B., Bonebright, T., Cook, P., Flowers, J., Miner, N., and Neuhoff, J. (1999). "Sonification report: Status of the field and research agenda." Tech. Rep., International Community for Auditory Display. Available: http://www.icad.org/websiteV2.0/References/nsf.html.

Sontag, S. (1979). On Photography. Harmondsworth: Penguin.

Tanaka, A. (2012). Interview on Contemporary Audiovisual Performance Practice. Interviewed by Anna Weisling [via Skype], Belfast, UK/London, UK, August 1st 2012.

Tanaka, A. (2000). Musical Performance Practice on Sensor-Based Instruments. Trends in Gestural Control of Music. p389-406.

Tanaka, A. (2010). Mapping Out Instruments, Affordances, and Mobiles. Proceedings of the 2010 Conference on New Interfaces for Musical Expression (NIME 2010), Sydney, Australia. p88-93.



Thomas, W. (2006). Clavilux. Alabama: Borgo Press. p17.

Wilfred, T. (1947). Light and the Artist. The Journal of Aesthetics & Art Criticism, 5(4), 247-255.



7. Appendix A - Full Interview Transcripts

In July of 2012 several interviews were conducted with contemporary audiovisual composers from around the United States and Europe. Each composer/performer was questioned regarding several elements of new media and performance, including their own personal practices and methodologies, their opinions of the future of audiovisual performance, and how visuals are incorporated into their works. The opinions of these well-established artists are intended to provide support for my argument that visuals can be played musically. In the case of email interviews, minimal edits have been made to the author's writing. Biographies for each interviewee can be found in Appendix B.

Christopher Biggs - 7/31/2012

Anna Weisling: Can you tell me a little bit about your compositional process (including software and/or hardware you use)?

Christopher Biggs: My process typically begins with information about what I am writing for in terms of instrumentation and media, in addition to what type of performance situation the piece will be presented in. This information informs duration and content. After that I typically pick a project idea I have been thinking about that fits with the media and performance situation of the work. Then I play with a variety of materials and determine what materials I might employ. After that I do pre-planning involving structure/form, concept, presentation, and technology. If I need to develop or learn any software in order to realize the ideas, I work on making it or learning it. This often generates more material and ideas. As I develop more materials, some are pushed aside and the structure, concept, presentation, and technology change accordingly. This process basically spirals until I really feel that the structure, concept, presentation, and technology are working together how I want (sometimes I just have to move forward anyway). I use a lot of software in this process; here are the primary software applications and hardware I use and what I use them for (some of these are just because of what I get cheapest or free from my employer): Video: Panasonic HD video camera, After Effects (add effects), Final Cut Pro X (compile video), Max/MSP (live processing of video and triggered pre-recorded files), Processing (generating shapes and gestures, live generation of content). Audio: I am fine with various DAWs for arranging/mixing/adding effects (Logic, Pro Tools, Digital Performer), Max/MSP (live processing and audio triggering), Bidule (creating effects), Peak (sound file editing), Native Instruments Komplete (synthesis, sampling, effects), Ozone (mastering), and Altiverb (reverb). Engraving: Sibelius for instrumental parts.

AW: When composing a piece for mixed media do you start on the audio or visual side?

CB: This is currently changing. I feel that whenever I add a significant component to the work I do, I deal with that component last. So, I definitely used to start with audio and add the video. This allowed me to learn the technique and not worry how everything would work together during each stage of the creative process. For my mixed media pieces now, any part can come first and any part can change as the other parts develop.

AW: Many of your pieces are for audio alone. What prompts you to add visuals to a piece?
CB: I started doing video for a variety of reasons:

* Visuals help expose structural and material relationships that may be very difficult for people to recognize or have a sense of based on the audio alone, especially on a first hearing.



* Visuals have the ability to operate semiotically independent of a common practice (music alone can't do this well in my view with a common practice, and without one it seems basically impossible), i.e. they establish and reinforce extra-musical concepts.

* Visuals attract a different (often less artistically conservative) audience than music typically does.

* Music with a video is much more interesting when seen outside of a live performance, on the internet for example.

* My music is often somewhat dissonant and noise-based and audiences are generally much more accepting of these features when they are accompanied by a visual correlate. I think folks often come to appreciate sounds that they might consider unmusical if they are accompanied by visuals.

* Most importantly, as I have been composing I have increasingly imagined and wanted visuals with some of my work, so I have tried to develop the technique to realize these visuals.

AW: In my opinion, you use light as a tool for gesture and manipulation (I'm thinking of 'Bioluminescence' and 'The Ends of Histories' in particular). Do you see direct parallels to sound in the way you use light?

CB: I don't know if I can answer this question well. I do a lot of Mickey-Mousing. I try to establish very direct relationships between my visual and audio materials. These are often very consistent over time, especially in a work like "The Ends of Histories" where there are about five types of sonic and five paired types of visual materials. I do think about visual motion and sonic gesture (in terms of panning, pitch, and amplitude in particular). The reasons for particular pairings tend to have more to do with the extramusical ideas. For example, in "The Ends…" there are various musical and visual materials that abstractly represent four different beliefs about how history will end: "big-guy-in-the-sky" religious rapture, the end of socio-economic progression, some kind of environmental catastrophe, and some type of mystical ending, such as the "end" of the Mayan calendar.

AW: Visuals are increasingly becoming standard in performances, both academic and nonacademic. Why do you think this is? How do you feel about that trend?

CB: I think many folks have similar feelings about the positive role visuals can play for audience reception and clarification of artistic attention. Also, popular culture in the U.S. is much more tolerant of visually experimental art than sonically experimental art in my view. I like this trend. I think sometimes this trend provides the impetus for forced collaboration (by which I mean composers may feel like they should collaborate with a visual artist or vice versa even if they don't have a good reason, timeframe for successful process, or shared vision). In terms of pedagogy I am all for such collaborations because formulating good reasons for collaboration, learning collaborative processes, and developing a shared vision are learned best by doing, I think.

AW: As far as I can tell, the visual elements in your works are always fixed, though very much connected to what is happening aurally. Can you shed some light on why you create visuals in a 'tape' fashion, instead of programming them for live performance? (Please correct me if I'm wrong on this.)

CB: (You are not wrong.) I have a few pieces that involve triggered video files (such as The Ends of Histories). In the work I have done in the past I haven't found a strong reason to make the video processing live instead of just rendering the video first.
I can keep the video resolution higher, do less programming, and have a more stable performance situation with fixed media triggering. That said, for a recent piece (Amass for clarinet and computer) I wanted the amplitude of the live clarinet to control aspects of the video. I had this all working, but between the audio processing, triggered audio files, and triggered video files I couldn't get it running well all at once on a single laptop. The result of this is that the computer performer has to trigger almost 200 cues in the 13-14 minute piece and I did sacrifice the integral, live relationship between the performer and the video that I wanted to be apparent at moments in the work. I just finished a collaborative project called "The Red Project" (with Eric Souther) in which the video is almost all processed in real time and the audio is almost all generated in real time and the two data streams interact. This piece is meant to be different every time and that variability is part of the concept, so I think there was a strong reason to have everything be live. This work will also be made into an installation piece that processes variably in real time. I am working on an 11-movement multimedia piece called "Biodiversity." One of the movements will have the video controlled by the amplitude of audio signals from live performers. The first 13 minutes of the video are all rendered in real time using an application created in Processing. So, I guess what these examples show is that I want to do more live stuff with video, but I need the concept/performance situation of the work to suggest/require that the video be generated live to motivate me, and I need to keep learning more creative/interesting/technological ways to do the things I want to do live.

AW: As software such as Jitter and Isadora make visual performance more accessible, do you see parallels emerging in the way music and video are improvised or played?

CB: Definitely. I like the example of DJing/VJing. I also think that more and more people are going to be comfortable and excited about experiencing audiovisual relationships since technology/programming is going to continue to improve/spread and these relationships will increasingly be part of daily life. This ubiquity will help audiovisual artists in that folks will be more perceptually acute when it comes to appreciating audiovisual phenomena. Lastly, computers suggest certain performance practices in my view, and since these audio and visual performances are centered around computers, certain parallels are almost definitely going to emerge.

Luke DuBois - 7/5/2012

Anna Weisling: Your work often takes data and abstracts it from its original source. In some ways your pieces seem to impart a deeper meaning on numbers and plot points -- can you speak a bit on whether or not this is intentional?

Luke DuBois: Most of my pieces are portraits, but in a broader sense of the word than most people would use. So instead of looking at one person, I'm looking at something in our culture, or language, or history, or mass media, to get a better sense of who we are. So some of that stuff is more data-driven than others, but the point is that it has to go beyond data visualization to really make a point. The trick is coming up with a metaphor... something to grab onto that makes the information seem to speak. So presidential speeches become eyecharts, and dating profiles become maps, and war casualties become a string quartet. It's intentional that I try to go for a deeper meaning, though I don't really try to shove my interpretation of what that meaning is down anyone's throat.

AW: You often perform visuals live, but also have a strong musical background. What parallels do you see between performing the two, if any?

LD: Live visualists are musicians. Plain and simple.
Before they get onstage they might be doing something we'd recognize as filmmaking or theatrical design, in the same way that a rock singer might be doing something we'd recognize as poetry, but once they start performing they're a million miles closer to musicians than visual artists in terms of the capacity for immediate, responsive action to other performers. The entire performance practice is akin to, and largely derived from, the instrumental performance of music, and when you're doing live visuals, you're reacting to and performing with others in the same way as if you were sitting in on a musical instrument. I think the trick to good visualists is that they make that kinetic and performative relationship very clear in their work... with the really good ones, their choices of footage and editing tempo and effects and whatnot are their 'look', just as a guitarist's choice of effects pedals and playing style define her 'sound'.


AW: Live visuals are becoming more commonplace in both academia and popular culture. What crossover do you see between pieces such as yours as compared to someone like Amon Tobin or similar pop artists utilizing visuals in performance?

LD: Pop music performance is all about spectacle; it has been for decades now, and that isn't changing. Artists will implement that spectacle in different ways, depending on their style, and the visual accoutrements of the performance practice tend to grow organically out of the music. The Rolling Stones, for example, define spectacle with their bodies... they strut around the stage and try to create an outsized presence with their actions as performers; this is a pretty traditional approach, but it totally works. Psychedelic rock acts hired the Joshua Light Show in the '60s and '70s as a way to raise the notch on that spectacle, possibly as a counterweight to the fact that their music was more introspective and down-tempo... it wouldn't make sense for Jerry Garcia to throw his body around on stage the same way that Keith Richards does, but the lights and colors projected onto the players create the same kind of emotional effect. Mainstream pop vocal acts achieve spectacle with production design and choreography... if you see a Britney Spears show, there's an incredible amount of activity going on non-stop around her, amplifying every single line of the lyrics. Stadium-scale rock started experimenting with representational projections and set design in the '70s, and it's a pretty short line from Pink Floyd's The Wall to the mid-'90s U2 productions with Emergency Broadcast Network... the point in both shows is to culture jam, remixing in the messages referenced in the music within the context of a live act as multimedia... it's maybe a 'smarter' solution to spectacle than 3-D extrusions of Katy Perry's décolletage, but it's basically there to serve the same purpose. People who are, essentially, performing with tools that don't have a visual presence (e.g. laptops and samplers) use live digital visuals to achieve what they can't, or don't, do with their bodies; this picks up the slack on what makes for a good 'performance'. Sometimes I think it can be overused, especially given the fact that club dance music did just fine for years with just a disco ball: I'm not sure why we suddenly need massive glowing cubes of high definition shit all over the stage to make us feel that it was a really awesome performance. But a lot of that has to do with the changing social space of live music performance, especially with electronica/IDM, which as recently as fifteen years ago was a reasonably spectacle-free genre. Here's where I start to sound really old and curmudgeonly, but when I used to go to see bands play, or go to clubs to dance, the focus was really on my friends, or my date, or the other people experiencing the show and how they related to the music. Tonight I went to see Reagan Youth, which is a recently reunited political punk band from the early 1980s... they were fucking incredible, and all they had was an American flag backdrop on stage. What bummed me out though was that a huge part of the crowd was focused on taking iPhone photos and texting during the show, rather than getting engaged with the music as it was unfolding *now*. I started doing live visuals originally as a way to amplify music without needing to make it sonically louder. It was also a way to integrate the audience, by using camera feeds to make electronics feel more human and 'live'.
I think that, like most things in pop music, there will be a pendulum effect where all the overbearing sculptural production design built into large electronica shows will create a reaction towards more spartan, audience-focused presentations, where there's a really smart VJ who knows how to work the crowd in dialog with the music instead of 50 screens of stuff that looks like a '90s WinAmp plug-in. After all, you don't go to a club to watch the DJ, you go to a club because the DJ makes you dance.

AW: You work for Cycling '74 and have been at the forefront of a lot of cross-disciplinary programming (I'm thinking of objects like jit.peek~ or jit.poke~ for example). Can you share your thoughts regarding mapping audio to a visual domain and vice versa?

LD: So Jitter was, and remains, a very special system. A lot of people think of it as a VJ toolkit, but under the hood, Jitter is a very data-neutral environment... the objects are passing around function pointers to matrices, not 'video', per se, so you can stash any numeric information in there and manipulate it. Being able to do media transcoding is a huge part of what I wanted out of the project, along with being able to work with symbolic datasets ([jit.linden]). If it were a closed system that only understood data that was in two dimensions and had ARGB locked in as the representative information, it would be a much lesser product. I think being able to directly transcode sound to and from image is a tremendously powerful tool, but the *really* cool part of all this is that it's just the tip of the iceberg. I remember one of the first Jitter examples I worked hard on was the 3-D extruded phase vocoder... that patch really shows off what's possible, as you can add video 'effects' to a frequency-domain representation of sound. This is a classic case of something that you 'shouldn't' do, according to the rules of commercial production software, but Jitter could care less. This is true of Max in general... the program will never tell you that you can't 'map' a thing from one domain to another.

AW: In my opinion, you use light as a tool for gesture and manipulation. Do you consider visuals to be your instrument?

LD: I have synesthesia, so I have a particular way of thinking about visuals and music... usually what I'm trying to show you is what I'm 'hearing' in a visual format. I prefer, when possible, to work with live cameras because I really feel that there's something compelling to having real-time visual input manipulated with real-time sonic input. So light is the relevant medium, yes, but there are other things to it as well... I'm interested in how specific musicians' 'sound' influences the same visualization schemes in different ways... I'll use the same patches with Todd Reynolds and Bora Yoon, and the output on the screen is completely different; that's because of them, not me. I'm also interested in how performers move, and capturing and playing with those gestures on a projection as a kind of short-term memory and magnifying glass. I think of what I do as music, even when what's happening is a projected image. So I guess to close the loop with my earlier point, I think the laptop computer is probably my instrument, but my 'sound' isn't really a sound at all... it's a way of presenting information in a kinetic way that you're just as likely to take in with your eyes as your ears.

AW: As software such as Jitter and Isadora make visual performance more accessible, do you see parallels emerging in the way music and video are improvised or played?

LD: Absolutely. I think from the tools side, Jitter and Isadora and VDMX and all those programs are having the same effect on lowering the bar of access as Ableton Live is for electronic musicians. What required thousands of dollars of whacky re-purposed video broadcast equipment in the '90s can now be done with a computer. This is huge, though we don't have the historical perspective yet to see whether we're looking at a new frontier or the artistic equivalent of a tech bubble. I think musicians have a harder time than visualists crafting a unique authorial voice using a lot of these tools, because the software has a lot of cultural bias embedded in it that VJ software doesn't... try writing a track that alternates compound meters in Ableton Live and you'll see what I'm talking about. The place the whole A/V scene converges tends to be around controllers... visualists and laptop musicians use the same repertoire of monomes and APC-40s and custom fader boxes and touchOSC configurations and whatnot, and so they can have these really excellent conversations about scalability and responsiveness and mapping without even worrying about the fact that the output is in totally different mediums.
Ultimately, because they're using the same hardware, they have a lot of common ground in terms of their performance practice, which is often the starting point for collaborations that are pretty interesting.

AW: What do you see in the future of live visual performance?

LD: I hope the new frontier for live visuals will be bands. Kurt Ralske and I have a long-running joke that when his kids are in high school they'll be asking him for more powerful projectors for their garage band instead of louder amps. I'm sick of 'sound-only' laptop orchestras... I want chamber symphonies of projectionists, and I want visualists to develop collaborative performance practices with other visualists, instead of having to be attached to a musical group.

Jeff Herriott - 7/9/2012


Anna Weisling: The majority of your pieces are for sound alone. What prompts you to commission or add visuals to a piece?

Jeff Herriott: I guess it's every piece is...you know, I think about who I'm writing for, what the situation is. Most of the music I write is for sound alone because a performer asks me to write something for them, and so I--and I feel like I can create sound and not visuals as much, I feel like I'm a sound person, so when I add visuals it's in collaboration, whereas when I do sound it's just me for the most part, or, that could be in collaboration too. So usually each piece is its own thing; a performer asks me for a piece and then I'll do--I'll write something for them. I started doing visuals because I just started collaborating with visual people, and once I started doing it I was interested in finding ways that we intersected. But it always is who I'm working with and most of the time I'm not working with visual people. But I've enjoyed it and I feel like it's a good...you know, there's lots of things to be explored there, and there are things that an audience seems to find interesting and people seem to find interesting, so I enjoy exploring that stuff, too.

AW: Do you think that...do you feel that you are trying to give the audience a better experience? Or a different experience?

JH: It's not that I would say it's a "better" experience, it's just another--I mean, when you're going to see or listen to a concert, you know, I'm trying to make it a good experience. Trying to manipulate sound, or shape, or form, or whatever it is over time, and visuals you can manipulate, you know, the same things over time. It's just that you're manipulating visual objects and things that you can see as opposed to things that you are hearing. So what I found interesting about collaborating with people on visuals is...they, you know, I've found that a lot of people, their training in visuals is not the same as my training in audio. And so their experience or how they work, whether it's the, because of the demands of technology, or the way, you know, visual media has developed out of painting and photography or whatever--I don't know--but they don't think about things the way I think about them in terms of time, and I feel like it's interesting to try and find ways to connect because we don't think about time the same way. And so that's what's interesting for me, to try to find relationships between, or similar ways, or ways to come together about stuff. So I don't think it's a better or richer experience, it's just an experience. On the contrary, I think that sometimes it's better to be, um, to just do audio, then you're only focusing on that. I do feel that sometimes the visuals have a tendency to take over the audio, you know? I don't know what the percentage of stuff we do, that we experience, the seeing or hearing is...you know, in film music we talk about how you can get away with bad visuals but not bad sound, which would lead you to think that sound is more important, but you can also tell the same story with different, like, lots of different kinds of sounds and, so it can be done in lots of different ways. I don't know that one stands out over another, but visuals can take over, and I don't always want that to happen, so sometimes I don't want to work with visuals.

AW: It seems to me that visuals are becoming this kind of trend now, that everyone's kind of adopting, both in pop culture and in academic circles.
Do you think that people are growing to expect that as part of their audio experience? That there is going to be a visual element?

JH: That's a good question, and I...I think that could be the case, I think that that would be a shame if it is the case. And I hope that's not true, but I wouldn't be surprised if that starts to happen more. You know, you go to see a concert and start to expect there to be some visual component, um, certainly certain kinds of music, that's always the case. You know, you go to see--think about pop concerts in the '70s and, you know, Genesis was famous for Peter Gabriel's elaborate costumes and staging, you know, and that was part of what people...

AW: The spectacle.

JH: Yeah, it was a spectacle, and I guess in some ways people--you know Pink Floyd had their giant animal balloon, or pig balloon on that tour, and you go see light shows and these giant events, and that's what they do, and it's more possible at all kinds of levels now. I don't think it necessarily makes it better unless you do it well, but I think--I wouldn't be surprised if you're right, but I hope that that's not true.

AW: Well, I find it strange that...no one would (that I know) go and sit for 45 minutes and just watch visuals with no audio whatsoever. But you, you're kind of hanging on to that idea of, you know, you go to a concert and you sit and you listen, for 45 minutes. And of course, there's a visual element of, you're watching a performer, but I see it happening more and more that people just assume that there's going to be a visual element to it.

JH: Well, I guess, think of, like, going to electroacoustic concerts, where you're just listening to tape pieces. I'll go do that, but I don't think a lot of people want to do that. People will sit around and watch TV for hours, obviously--now that's narrative, narrative's different--but, um, but I'll sit around and listen to that stuff. But I prefer to go see a performer. Performing. You know, in almost all cases if I'm going to go to a concert and someone's performing, then I--I don't think it's weird to just sit and watch someone perform. I do think that going and listening to a recording, as a unit, is a little strange. In this whole electroacoustic thing with diffusion and 20-speaker systems, that's an experience that I can't have at home--I mean, I guess you can at SARC, but--basically we can't have those experiences, and so that is--plus it's people getting together to do that, and I don't think that general audiences are going to get particularly into that, and if we had amazing, like could we go to the movie theater but instead of watching a film just hear, you know, have a 20-speaker setup and sound flying around? No, people wouldn't go to that. But I think performers make the difference, I think people will continue to go see concerts without visuals if there's performers. But I think you're right, too, that people are getting more interested in going and whether "expecting both" is the right term or not, I, as I said I hope that's not the case, I don't think it's just because I'm an old dude, I think it's because there's room for everything. I told you about my friend Craig a lot, because he and I analyze things to death, that's kind of what we do when we get together, we just, you know, you criticize me for overanalyzing--comment on my ability to overanalyze--

AW: [Laughs] It's an impressive ability!

JH: It's what I kind of, I try to think, "What is the best environment?" Or how things should be perceived, or how things should be experienced. And...some things, lots of things can be experienced different ways, and I try to do it the best way possible. You know, that's why you get things on record and listen to them on record or listen to them in this environment or while you're driving around, some things sound better in the car, some things sound better here, and you try to save it so that you don't spoil it.

AW: I want to talk about Barn Work, because I feel like the audio in that piece was clearly kind of designed with the idea that there were going to be visuals involved. Am I wrong in assuming that the project was audiovisual?

JH: No, it was absolutely--that one was the first one that was, well, we did two pieces, but that one...I felt the integration was the best, probably, of all the things I've worked on, in that environment.
AW: Did you find yourself approaching the audio differently, knowing that there was going to be a visual component?

JH: Yeah, I left space.

AW: Left space?

JH: Yeah, left more space.



AW: Why?

JH: So that there'd be time for the visuals to do its own thing. It reminds me, Cort Lippe, my mentor at Buffalo, when he would compose for instruments and electronics he talked about writing instrumental pieces and leaving space, because he would actually write the instrumental part and then put the electronics to the instrumental part. I don't do it that way, I've always done them simultaneously, but he composed these, uh, the instrumental parts--I mean it's not completely true, but he talked about leaving space in his instrumental parts so that when he would add the electronics the electronics could come through and then there wouldn't be too much stuff. I mean his pieces have a ton of stuff anyway, but, partly because he can just make the computer make any sound he wants, but, so I was thinking about that when I was working on that, was how do I leave space so that sometimes you just, all you're really doing is watching the visuals, you know? And then other times you're listening, the sound is kind of punctuation. In other cases the sound is more intense, but I left more space in that piece. I was also, just before that I had written these miniatures that were very much about space, so it wasn't hard for me to do that...

AW: What were--what was this?

JH: The miniatures were really pretty sparse--I was about to say "really sparse" but I changed, I added "pretty" to modify--it's--they're sparse. They're basically 50-second to two-minute things that work together that have, that are mostly just about pace and they'll each have one or two ideas. One idea usually. It's for three instruments and electronics, and, each of the instruments will usually do one thing in each miniature, sometimes two...

AW: When you say "thing," do you...?

JH: Like, play a note. Or, play a gesture. [Sings three tones] Something like that. So, they each have one thing or two little things that they do, gestures that they play, and it's about their relationship between them, some harmonic connections, there's a lot of microtonality, and then it's about pace. So, the 8 miniatures were designed to be slightly different speeds, so as a group you would kind of have this back and forth between a little faster, a little slower, very open. So it's more just about listening to sound and pace and for--and they're also designed so that you could play them in any order, or play fewer than all eight, you can leave silence in between them as long as you wanted. That was the original conception...most people don't leave super-long silences because they're not interested in doing that in performance, but, it was played on the radio, actually, wunder[unintelligible] radio, which is the German group that's very into this kind of stuff. And they played it for, like, a day, where it streamed, and he left like, 15-minute pauses between each one. So you hear one 1-minute movement and then you hear a fifteen-minute pause. And then you hear another one and then you hear a 15-minute pause. And then after all of them had been played through, he'd play through all of them [back to back], and then they'd start over. Um, which I thought was a pretty cool way to do it. I was, this was probably just prior to that or pretty close to just prior to that, but I was working very much with the idea of faster, slower, and a few shapes that sort of overlapped one another. So, Barn Work, I was thinking of the visuals as being one more of those layers in a way, that would kind of cycle around so that there was space for everything to find itself, and times when it would work together and times when they would kind of be sitting and one would be moving and the other one wouldn't be moving, almost, because then it sort of plays with the speed and tempo of experience.

AW: I like this idea of kind of a shared parameter within which you can work with audio and visuals and it looks like you kind of worked with time as the shared, you know, pacing and time and space...

JH: That's the main thing that I work with. Form and time and space. I don't know if they're completely interchangeable, but, space I guess makes, I mean there's like aural space, but,



certainly visual space--I mean, I didn’t do the video, but I had a lot to do with the editing of it. And deciding sort of which things would work together. AW: I like the overlap, you know, that there are sort of shared units of measurement, or units of change that you can apply to different, you know, stimuli, like sound and image. JH: I think that’s important to make some connection between the things. To find some, you have to find some connection between the stuff, or else they’re just happening simultaneously. I mean, you can do that, and some people do, they’ll have just things going on at the same time and that’s interesting. It can be. I think sometimes it’s also just a way to do something and not work as hard at it. You have to work to find connections between things, you don’t have to work as hard to just throw two things up and see what--but sometimes you throw things up at the wall. That’s how I generate material, is I’ll just throw things up simultaneously and see what happens and then once you find it you work within it. But some of the other stuff that I’ve done, like the stuff I’ve done with Matt [Sintchak] where we still have video artists working with us but we were going to do more improvisation within as opposed to re-compose. There we’re trying to figure out what kinds of things are happening, visually, and then make connections to them, but that’s harder, also because it’s not completely under my control, you know, whereas I felt like in barn work I was much more in control of what was going on. I’ve not had a--I’d love to have a partner, like a visual artist partner, that I could work with in the same kind of context, that, where we could say, “These are the kinds of things we’re working on, this is the kind of thing we’re working on, let’s find the connections.” Go back and forth, go back and forth, go back and forth, because I don’t think I’m ever going to be doing that myself. It’s just, I don’t have the training and I don’t have the time and the inclination, and I’m not a particularly good visual artist, so I don’t...I wouldn’t be surprised if I’m not able to actually do that very well. Because I don’t have an ability to, like, see well. I mean I can see well, but I can’t draw, I can’t imagine, I can’t even really recreate spaces really well. So I don’t think that that’s a strength of mine. So I think I’m better off working with people that can do that anyway. AW: Sometimes I think that people decide that they’re just going to do everything, and then they end up doing everything kind of...shit. JH: Right! AW: And so I appreciate when people cut themselves off. I like that, you know, you’ve got this kind of intentionality in barn work that works really well, and then you touch a bit on the idea of, like, live visuals, and how that intentionality can be lost and, while you can just throw things up and some things work, some things don’t. How do you feel working in live settings with visual artists, when they’re just trying to respond to you or lead where things are going? Do you think it’s successful, or do you think it’s kind of hit and miss? JH: Well, I think it’s hit and miss. I enjoy it, personally. But I’m kind of doing the same thing aurally, and...I think that improvisation is, you know I remember when I was in grad school I would do these improvisation workshops and exercises, and if you actually just listen to the stuff people would create after the fact, most of the time it would suck. 
It would be okay with some interesting moments, and then there would be lots of suck. Now, obviously there could be great improvisations, and I think that there’s something about--improvisations I think tend to be more vital and worthwhile when you’re there. You talked about going to a performance. I can’t quite explain what it is, and I sort of have a theory that I haven’t completely figured out (I don’t think I ever will), but it has something to do with the fact that when you’re in a space you’re listening to the room, you’re affected by the number of people in the room, you’re affected by the way the light looks in the room, you’re affected by the eye of the person you happen to catch across the way, you’re affected by the lunch you had, you know, and there’s some sort of collective in the space that improvisation in those environments often captures, and it will feel vital while you’re there, when it’s rich and full and
people are in that space, and as soon as you leave that space you listen to the recording even and go, “Well that was pretty boring,” or, “Those guys were really wankers, it was a bunch of wankerdom,” “They’re sort of wasting everyone’s time,” and “They didn’t do anything interesting.” But at the time it feels more vital. I think improvisation sometimes is that, I think sometimes it’s interesting. By itself, I think people have the tendency to do too much when they’re improvising, they tend to play too often, especially people that don’t get together. You know, if you rehearse and you get good at what you’re doing it’s more likely to be interesting over time. The thing that’s difficult in improvising with visuals is that...you can’t...it’s almost like you can’t know what exactly is going on. It’s hard to--I can evaluate sound because I can hear everything, but this other component I can’t evaluate as easily at the same time, because it’s, maybe I can’t see it, maybe I have to look at my machine, because my interface isn’t perfect, so I have to look at it. It’s hard to tell if it’s completely successful. It would probably be a good idea to listen to this stuff more after the fact, to evaluate whether or not it’s working. I’ve tried to do that some with Matt in our working relationship, and that’s been helpful, but I think the reality is probably we just need to practice more with people. Although that might go against the spirit of the concept, I don’t know. AW: Well, I think that kind of--if you get any improvisers together that are musicians there’s going to be a common language. I mean, skill level will vary, I’m sure, but there’s going to be recognition of, “Oh, this person’s doing that, I’m going to continue this pattern,” or “Oh, I see what they’re doing...” and there’s this kind of shared history. And then if you throw in another discipline like visuals (unless the visual artist is also a musician) there just kind of...they’re just responding to what they think is happening. JH: Well, I think the most effective stuff in that context would often be things where the visuals have some sort of live triggers. Where it’s at least responding, and then our job as electronic improvisers compared to instrumental improvisers is that we have to set up algorithms, we have to set up parameters that can do their own thing. I mean, Matt, when he’s playing his saxophone he’s making every single choice, he’s never letting something be determined that he’s not thinking about. I mean, he’s playing faster than he can think, but he’s at least doing it by feel. But it’s completely trained and he knows exactly what he’s doing, whereas what I’m doing electronically involves algorithms and processes that I’ve defined based on the kinds of things I want to generate. So even if I have some control over the kinds of things that I’m doing, which I do have some, I’m still relying on its response a bit. And randomness. Because that’s how you build these--you know, you build in an element of randomness, as you know. In that case, if you get a bunch of people doing electronics together it can be a whole bunch of randomness and then it’s probably not going to be that cohesive. So I think some triggers in an audio environment, if I have a live player and I’m taking their feed then I’ve got that live trigger that makes some connection between the kinds of things we’re doing. I think visuals (if they’re completely live) would benefit very much from a similar integration.
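A minimal sketch of the kind of “live trigger” linkage described above, assuming a hypothetical setup in which an amplitude envelope followed from a live player’s audio feed drives a single visual parameter (brightness); the function and parameter names are illustrative, not drawn from any particular system mentioned in the interviews:

import numpy as np

def envelope(block, prev, attack=0.5, release=0.05):
    # One-pole envelope follower over a block of audio samples.
    peak = float(np.max(np.abs(block)))
    coeff = attack if peak > prev else release
    return prev + coeff * (peak - prev)

def amplitude_to_brightness(env, floor=0.1, ceil=1.0):
    # Map the follower's output onto a clamped brightness range.
    return floor + (ceil - floor) * min(max(env, 0.0), 1.0)

# Example: a decaying burst of noise pushes brightness up, then it relaxes.
prev = 0.0
for i in range(10):
    block = np.random.randn(512) * (0.8 ** i)  # stand-in for a live feed
    prev = envelope(block, prev)
    print(round(amplitude_to_brightness(prev), 3))

The same follower output could just as well drive hue or playback speed; the point is that the live feed supplies the “connection between the kinds of things we’re doing.”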
That’s why most of the stuff that I’ve done with Matt has not been live. Then we’re just really playing along with a video that was maybe improvised in its creation or that has some element of freedom, and I mean we did some things a couple years ago where I was controlling some of the visuals but mostly I was just controlling color tone and it was because I wasn’t doing anything electronically or musically. I was just free to do a few other things, and I was just kind of making processes happen. I think that’s probably the biggest issue with people, when everyone’s in their own space electronics need to be more coordinated, networked together, so that they have some shared response to something or a shared idea of, “This timbre is going to generate this color,” or some linkage that allows you to work with it. AW: I was reading an article by Atau Tanaka, and he was talking about how, when you’re performing live a lot of times you’re actually thinking in this, he called it, “negative time,” where you’re just trying to see what’s coming next and make choices then in that, you know, “negative time” to respond to that. And I think that can determine what makes, you know, good
improvisations work really well, is that people are predicting and kind of understanding where things are going and making choices in real time based on where they’re going, and I think visuals can work well when you take that into account. If you know where something is going or you can predict it, but that takes a lot of practice and familiarity. JH: And I think what his, that concept of negative time, then you’re not thinking of, “what can I add to what I’m hearing,” because that’s usually people’s thought: “What can I add to what I’m hearing that will make it better?” And then, so people add stuff all the time. And then there's too much adding, because somebody else is, like, everybody’s doing that. So we’re all trying to add to this pot, instead of that negative time concept, like, where is it going to be a minute from now, then you may add to that or you may just help it get there or you may not. I think it, you know, I’ve tended to do more of the, “Do I have something interesting to say? If not, don’t.” Depends on what I’m doing, if I’m the only person adding electronics in an environment where there’s two instrumental performers and me, then usually what I’ll do is--I’m more likely to keep a layer going, but not a lot, just because they sort of expect it, and then I’ll add more foreground layers or something at times, if it makes sense. Depends on how many improvisers. You get too many improvisers and it goes to shit real fast. Have you done any improvisation with any other visual artists at the same time? Have you tried to do--AW: Just Eric Allin. And...yeah, I mean. It’s hard. It’s really hard. JH: Yeah. I would think. Because you can sort of work against one another. AW: And, you know, there’s just no...there’s no language, you know? There’s no--like the idea of a gesture that one person could start and another person could finish becomes so much more difficult when you’re, you know, first of all you’re using two separate systems. Unless you’re sharing a patch on a computer, and you know, visually how do you, I mean, how do you translate the idea of a gesture? JH: Yeah. If you were using a single patch and you each had, like, two controls or three controls and the other person had three controls, then there could be...that could be interesting, I could see that working at some point. AW: Yeah, in the interaction design thing that I made for class, that was kind of the intent, was “Can you make a visual instrument, map it to some parameters, and then make, you know, an orchestra of sorts, you know?” Get five of these things and give them to people and see what they make. I don’t know if that is, uh, a successful idea or not. JH: Well, if you get five people that work together, that know what they’re doing, that know what their role is, it could be. I mean, it’s sort of like five good musicians aren’t necessarily going to make a good group, but the right five musicians could make a great group. I was, you know, I told you before about this Metamkine thing that I’ve seen, French group, two visual artists and one sound guy. I saw them two years ago, and the two visual--the thing that was really fascinating about the visual guys, you talk about like common language or whatever, they each had the same instruments, essentially. They each had a film projector, they each had a mirror, they each had some, um, like things that they could hang in front of their stuff, and then their visuals were reflected and combined in the center. So they each essentially had their visual instrument.
And the strips were playing, and you know they would play with the speed of the strips, and they would burn them a little bit, and they would hold stuff up and just create...I mean, it was extraordinarily abstract. Pure light and color and balance and whatever else. And sometimes one person is the only one and sometimes it’s both. The guy who had sound also had like flash bangs every once in a while, so that would change the space up, and the sound guy also was just using, like, you know, CD players and tape decks and crap like that. It was pretty funny. And some little sound generation things. But, that was, it was great, one of the best things I’ve ever seen, and the thing that was so exciting about it was, they were really limited to what they could do, they knew how to work their instruments, and they absolutely blended together and made these things that were more than the sum of their parts. But they’re also working with projections; the situation with you guys, you and Eric, is that you’re working in like a single stream to a thing. I suppose if you--if you each
had your own projectors and you were balancing or blending, which is essentially what they were doing, your stuff probably would look like just big garbage, because it was just crap on top of one another that’s not integrated. You have to start with the right material, I mean and this was all in their development of what the material was that they were using, and how they would approach it. How to do that in a simultaneous like, visual one goes through visual 2 goes through visual three...it’s tough, at least with sound there’s space to put things in. You know, visually it’s not the same space. Because you’re thinking in frames of visuals as opposed to tones that happen in time. AW: Yeah, and...I mean, the capabilities of visuals being kind of “playable,” are all sort of software-driven, you know? That’s what’s kind of opened this whole world of live visual performance. I mean, Eric Allin and I can’t afford to buy a bunch of projectors and take them apart and burn film... JH: But this is the thing with these dudes. You talk about how it’s not playable. Their stuff was playable because they held stuff in front of projection. By actually holding things in front of projection and moving mirrors around, visuals were playable. They did play, and it wasn’t software. And I had to do electronic improvisations but I almost never do electronic improvs by themselves. I almost always want to have somebody there. I’ve enjoyed hearing Greg Taylor’s stuff, I enjoy hearing people do purely electronic stuff. But I don’t, I don’t tend to do it myself. I usually want there to be some sort of live sound, this is probably like, I feel like you need that human element... AW: Organic... JH: That organic element. And I do think that in a lot of the cases that’s what visuals are lacking. If they’re there with live musicians I think that can make up for it, and because it’s the live musician, but, um, that was what was so amazing about seeing this Metamkine group, is that it was absolutely performative. And in fact, the audio wasn’t that performative, because it was just a dude, you couldn’t see, he was standing at the--I mean, you could see him, but he wasn’t doing anything, he was just standing at the front, like, a guy, laptop cafe-style. Whereas the guys on stage were like, you were watching them swing their projectors and do stuff. It was great, like, they were really, they were performing. You could see it, you could feel it, and that made it vital. There was some organic element. I don’t think there has to be, “Everything has to be performable.” I think you can have things that are running in the background, processes, but I think there has to be something, it doesn’t have to be, but I try to make sure there is some sort of human, organic element in all the things that I bring to the table. Whether it’s the visual part, the live electronic part, the performer. The easiest, which is why I tend to do it, is just get a player. Because you can see a player. And everyone knows what a player does. If I were more technologically astute I probably could do it as a, you know, electronic musician, but I can’t. I can’t get this stuff [gesturing with his arms] to be as interesting. When I wave my hands around it’s not, I haven’t found a reason that that’s better than using a slider. So I haven’t done it. 
AW: Yeah, a lot of these interaction design instruments were this kind of...I can make sound do this by doing this gesture, and it made it very performable and interesting to watch, but it was also kind of like, that’s a lot of work to do something that could be done with a fader or a knob. And you know, you’ve got to go through that process of overdesigning and doing all this crazy spectacle stuff in order to get somewhere that, you know, maybe in the future this will be really usable. JH: Well, and part of what we don’t, you know, like, why is a saxophone cool to listen to? Well, what it does really well is it plays pitch material really well, and our musical environment still very much relies on pitched material, you know, computers can do more with pulling apart sound and recombining in all of these different ways, but that isn’t yet on a grand scale, what we respond to as listeners. Whereas the things a saxophone can do, or a violin or whatever, are still very closely connected--I mean these are amazing instruments that were built up over years to do something, a single thing, really well. And, whereas our electronic things, we haven't, I still don’t think we’ve done that. I still feel like I need to make, I mean as you know my patch has gotten, in some ways, simpler and simpler in the sense that it’s just got a couple of things that it does and I try to just do
those things, and then I try to build into those things randomization. But I try to, because I can only control a couple things at a time anyway, or one thing at a time. So, um, so I shouldn’t try to do too many things. And I think even in visuals, like, if we just, you know I thought that one of the most effective things I’ve seen of yours was the thing that had, was the exit crafting. And one of the reasons it was effective was just because, you could see the form and structure. That was clear, you know? And, you made a simple nod to, you know, with one, two, three, in your form, in your structure. And then made it, like that alone gives you license to fuck around because at least we have the context for where it’s going. I don’t know if I’ve mentioned this to you, but I’ve always, um, do you know this guy Yehuda Yannay? He teaches, or he used to teach at Milwaukee, UW-Milwaukee? AW: No. JH: Before Chris Burns, he was teaching, he was the main comm. guy. He retired, he’s still around, and I’ve gotten to know him over the years and he comes to concerts, and one of the things that I always appreciate about him, every time he hears a piece of mine he always compliments me on the fact that he can tell what’s going on, and that he has an idea of what’s happening because he can, the structure, the form are coherent, you know? So this is something that I try to do, I try to have two, three things, four things, whatever those things are, and then I try to manipulate them in some sort of time so that something has happened. Doesn’t tell a story, doesn’t, you know, comment on politics, but it just has some sort of relationship that is, uh, comprehensible. And I feel like a lot of things just aren’t comprehensible. And electronics, in particular, because we can do anything it’s easy to become incomprehensible. And visuals is probably even more so. You know, you can do anything. AW: And you have to tread the line between, you know, making something live, as simple as using a live feed, but that’s just like, I don’t know, it’s just the lazy, “Oh, want to make something look live? Introduce live feed.” So, you know, do you want to be so abstract that no one knows what’s going on and doesn’t see any connection, or do you want to map things one-to-one so that every time there’s a loud sound something happens. JH: Right. AW: Either extreme is stupid. JH: How do you decide what those relationships are. And what, um, what I try--you know, aurally what I usually try to do is have several different things that can connect to a sound, and then pick which ones I want to use at which times or sometimes I’ll turn it on, sometimes I won’t. That’s it, that’s the game, it’s like what connections do you want to make so that they are discernible, and then how often do you do it? I loved Luke’s [DuBois] thing, when those guys performed here with the...he just had, you know, a joystick. There you go. Simple. Bruce McClure - 7/11/2012 Anna Weisling: In my opinion, you use light as a tool for gesture and manipulation, and have described your visual techniques in musical terms (“Glissando of light along the register of optical sound shading...” for example). Do you see direct parallels to sound in the way you use light? Bruce McClure: Yes, projector light becomes a way to provoke or manipulate or modulate both the eyes and the ears because of the dual theaters – the one we sit in and the one that the film travels through. The projector uses a primary light to light up the room and a smaller one to create sound. 
Both lamps are regulated gesturally for dramatic effect by prosthetic hand puppets. When writing or talking about my work I try to avoid suggestions of synesthesia (even though it is interesting to learn from the Wikipedia entry: ‘Yet another recently identified type, visual motion → sound synesthesia, involves hearing sounds in response to visual motion and flicker.’)
preferring the concrete attitude and separation of eyes that see from the ears that hear. The projector’s layout has a film path where picture and sound are also separated. In this case it is typically measured by 26 frames. I don’t hear color or see melodies. What happens in the brain must be reported as ideas. In the example you gave where I used the word ‘glissando’ I was referring to the optical sound head’s reading of the ‘sound track,’ the registry of sound on film as a limbic structure that is converted by the optical sound system into sound energy. I used ‘glissando’ to describe the sliding of the film, in a continuous reading, caused by the steady focus of a light beam modified by filmic fluctuations of light and dark on a photoelectric cell. The sliding of light provides waves that are received by the ear - a mechanical assembly of bones. This is distinct from the light from the main lens that is shuttered by a rotary blade providing distinct instances of light which together with the dark plays games with the chemical functioning of the eye. AW: When speaking of your pieces you often speak of light as a very tangible, malleable substance, and your work is arguably more about the light than the picture it is producing. Can you talk a bit about your philosophy regarding visuals? BM: I wouldn’t make any claims to a coherent philosophy although my kind of projection gets you into and out of principles pretty quickly. I have tried to reduce the role of the film strip by taping it to itself in the form of a loop. The loops I use are about 200 frames in circumference, which in projector time is about 8 seconds. The projector, meanwhile, has been customized to liberate it from a role of fidelity and has been made active in its handling of the film. The influence of the camera has been almost removed. I do not use them but I have adopted the photography of others like foundlings. Because I am pushing the projection apparatus upstage the camera’s gestures on the film plane are somewhat superfluous. In ‘Pie Pellicane Jesu Domine,’ however, I decided to pay tribute to the other side of the film plane, to what was beyond the aperture, by using bird footage photographed for a documentary in the 1930s. This series of loop piece constructions scrambles, teases and frustrates the audience’s attempt to look through the projector back onto a remote island in the North Atlantic. It does not pretend to fool them. The light in my projections enters to occupy the room rather than being [an] invitation to some other place. I am happy when the light vibrates with the force of the sound, sometimes sympathetic with it sometimes pulling away creating a kind of chop. The picture rectangle always remains or is implied by the idea of cinema. AW: Your work often seems to explore the possibilities held in relatively limited materials or techniques. Do you consciously limit yourself in this way? BM: Do I limit materials? ‘Ventriloquent Agitators’ is a work for four projectors each fitted with metal inserts in their film shoe assemblies, as many as 16 guitar effects pedals, each with two, three or more knob controls, a ring modulator and mixer. All of the sound equipment has a continuum of adjustment as does the focus knob on the projector. Any control knob is limitless in its turning. No matter how much I appreciate the idea of limiting the materials I have never been able to go to one of my screenings with only a DVD in my pocket. I have to carry my own equipment. Right now it’s two projectors and the sound equipment.
I understand the idea of holding something down with your knees in order to free your hands to hog tie livestock. I’m thinking of my use of loops for the past 5 years that have plus or minus 200 frames. Given two to four projectors loaded with loops with wave forms of varying lengths and a common origin it can take quite a while for them to return to their original configuration. Combine with that the ways in which I alter the pathways of each loop, sound, light intensity and focus for example, and then there is no end to it. When I put my glasses on and I look at work using a camera I think to myself, “Oh another ‘x’ minutes of view finder photography with supportive dialogue and music.” Isn’t that a limitation? Take away the sound track and you have the presence of a chorus of intestinal maneuvers in the dark.
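A rough worked example (my arithmetic, not McClure’s) of why loops of “plus or minus 200 frames” take so long to realign: the combined pattern repeats only after the least common multiple of the loop lengths, so three hypothetical loops of 199, 200, and 201 frames realign only after millions of frames.

from math import lcm  # Python 3.9+

FPS = 24                    # standard projection speed, which he does not alter
print(200 / FPS)            # one ~200-frame loop lasts about 8.3 seconds, as he says

loops = [199, 200, 201]     # hypothetical lengths, "plus or minus 200 frames"
frames = lcm(*loops)        # the lengths share no factors: 7,999,800 frames
print(frames / FPS / 3600)  # roughly 92.6 hours before all three realign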

I’m doing a documentary of how many different kinds of rectangles there are and the sounds that accompany each. Despite the fragility of film loops I think my work is endless. AW: You say that “wreckage is often more interesting than structure.” How do you feel that applies to the form or progression of a piece? BM: That is something I culled from Gregoire Muller’s interview with Robert Smithson, in The Collected Writings (ed. by Jack Flam). The interview is called “…The Earth, Subject to Cataclysms, is a Cruel Master.” (1971). Smithson was talking about some insight Levi-Strauss had and his suggestion that we should change the study of anthropology into “entropology.” A study that devotes itself to the process of disintegration in highly developed structures. My performances take place in a structure or a re-structuring of the cinematographic apparatus that favors the audience’s side of the picture plane (Which seat should I choose? All of them are good!) in a perspective that opens onto incidents that defy recording. My commitment is to break the reigning legacy of medium specificity and engage any subjectivity willing to redefine the object creating a perfect version of it in the thought process. The wreckage could be described as all that happens after I turn on the machinery and plow the furrowed planes. AW: Would you consider (1/24) frames to be the smallest unit of measurement within which you work? As, say, a musician might consider a pitched note the foundation for a piece? BM: Peter Kubelka is said to have described the limit of his measure as a frame. Are you suggesting here speed, 24 frames per second, the speed at which the projector moves, as the unit? Or time rather than real estate? I do not tamper with the speed of the motor. Using film as something of a redundant element I shutter the existing shutter between the lamp and the film gate. Meanwhile I have two to four projectors landing on the same rectangle on the screen, also redundant; each with the same patterned loop or a strident one. All these overlays result in apparent changes to speed. I compare it to the wagon wheel in a western that appears to be turning in the wrong direction with respect to the rider. I use a synchronizer (a tool for counting frames) and measure out my patterns of base and emulsion in frames. Frames are an analogue to photography and unite the machinery of the camera and the projector. I do not use a camera to make frames of base and emulsion, instead I tape and bleach. Even when I try as hard as I can to lay a piece of tape at the midpoint of a sprocket hole across the film – one limit of a film frame – I can never hit it exactly right, so that when I project the results there are always aberrations at the top and bottom of the frame. My alignment of the tape is guided by reflexes not machinery. I would say that my unit of measure does not respect film frames but approximates them. Meanwhile the optical sound track has no respect for frame lines, giving the film a continuous reading. That means that not only do we hear the boundaries between one interval of base and emulsion but also the splice and any incidentals that befall the surface of the film. If you are referring to what I call the “common factors of 24,” that family of loops I have been using almost exclusively for maybe four years, then I would quickly say that 24 was the number I mistakenly measured between the optical axis and the sound head. What I wanted next was to break up that interval into patterns of base laid out in the interval.
I thought of the factors of 24: 1, 2, 3, 4, 6, 8, 12 and made loops based on these numbers. For example – (1) 1 base (‘clear’ loop), (2) one base to one emulsion, (3) one base to two emulsion . . . I cannot go on with this with respect to your question but in the end the idea of placing the tape is fundamental. The film surface is really open and I try to land the tape at the frame lines. It’s a turkey shoot.
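A hedged reconstruction, in code, of the “common factors of 24” family as described above, reading “(n) one base to (n−1) emulsion” as a cycle of n frames tiled across the 24-frame interval; the B/E encoding is purely illustrative, not McClure’s notation.

FACTORS = [1, 2, 3, 4, 6, 8, 12]  # the factors he lists
INTERVAL = 24                     # the interval measured between axis and sound head

def pattern(n):
    # 'B' = clear base, 'E' = emulsion; one cycle is n frames long.
    cycle = "B" + "E" * (n - 1)
    return cycle * (INTERVAL // n)

for n in FACTORS:
    print(n, pattern(n))
# 1: BBBB... (the 'clear' loop); 2: BEBE...; 3: BEEBEE...; and so on.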

AW: Your pieces vary considerably in length. Does there come a point at which you feel a piece is ‘finished?’ If so, how do you know? BM: I like to work with a fairly wide panorama of time. Loops on separate machines take time to resolve themselves into recognizable identities moving against one another. Combining all the variables we need time to make a picture of it. If you put your foot in it only briefly it’s a shock but if you stay in for a while some things become more familiar and in turn set off other extraordinary events. Duration is based on how much time I have for a performance. Usually it is between 45 minutes and an hour so that’s how long I work it. I think of them as samples from a much larger entity. Personally I grow tired after about an hour and need a break. Without a pressurized cabin your time above a certain altitude is limited before you start getting sick. AW: It has been said that you “beat [people] down with light and sound.” Can you speak a bit about the physicality of your work? BM: Sensory projectiles from a miniaturized theater escape to occupy the room around us. These projectiles come directly from the performance context compounded by chance and possess dramaticity and a psychology inherent to them. The frequency of the light and sound may be intended to frustrate passive consumption of the expected. During a performance the funnel narrows on ourselves and then we are set free. Mark Snyder - 7/30/2012 Anna Weisling: You come from a mostly musical background, and yet you create all of your own visuals to accompany your pieces. How did you start working with video? Mark Snyder: I started with, it was the same year I met Jeff, a festival in Memphis, and this lady, Beth Wyman, that’ll be coming from Maine...in this old cotton warehouse, it was just her and a bass clarinet and some shit she had done in iMovie. And I was like, “Shit this is so good, I need to do this.” So I did some stuff with iMovie and it sucked. I showed John Bauer, who is one of my composition professors there, and he was like, “Man, I wouldn’t show this to anybody, Mark, it’s pretty bad.” So I haven’t. [Laughs] It was really bad. But then, the next thing I did was the Opium, which was sort of like, for me, sort of this comical story about being on opium. It works, for that it just became a part of the orchestration of the piece, and that’s how I got into it, and it was mainly because of, just, nobody really takes the time to sit back and listen to music anymore. And I can go ahead and say, “Listen, the way it’s supposed to be is you’re supposed to sit back in a chair and concentrate, enjoy my music the way I have thought you should enjoy it.” But, you know, nobody gives a shit what I want. And that’s to help engage people a little bit more to get my point across. You’ll listen to my music, but I want everybody to listen to music. AW: So do you think you’re kind of appeasing the audience? MS: I want people to like it! I mean, I want to like it too, and my hope is that people will like what I want to do, but...yeah, I mean, these people that are like, “It’s my art, who cares if I listen or watch or whatever...” don’t make us sit through it, you son of a bitch! Don’t put it on as a sort of presentation if you don’t care. If you don’t care, then whack off to your stuff in your own room, but don’t force me to enjoy it and then tell me I’m stupid because I don’t understand. But that’s just my opinion, they’re more than welcome to do that, but it doesn’t seem to make any sense.
So, yeah, I...no, I think about taking people on a journey. So audience interaction is very important to me. Do I want to...Am I a sellout so that I’ll play in a cover rock band here in Fredericksburg to make some money? No. But, take the Messy record. There are times when I wanted to be really active, and I didn’t. I pulled back because I knew if I was I could get refused on ambient music sites. Yeah, I did curtail it for that. And it worked, I was able to get the reviews, but...then do some of your composer colleagues go, “Oh, it just doesn’t do anything.” Well, yeah, they say that. But from that perspective I don’t really care. I just want to get out there and hope that it helps people or people enjoy it and it does something for them.
AW: What do you think the visuals add to your pieces? Do they convey something that the audio doesn’t? MS: Well, to me what I have to work with is sound artistry...I have sound, but I also have sort of vibrations. I mean, I have touch as well. Feel, if you will. This also gives me something for sight. Another way to manipulate the listener. Like with sound design, I mean, what you can do to people is sort of trick them and put them in a different space. It helps sometimes to put them in a different space. At moments, at times, too, it focuses their attention in one direction, so if I want to localize something in a different area I can’t because of the main focus being on the video. But I think it adds another dimension to the sound. If I do my job right. AW: I get a lot of people, it’s usually a kind of split opinion that visuals can either enhance the experience and add to it, but a lot of people just get frustrated and say, “I can’t divide my attention between the visuals and the sound, and it’s drawing me out of the experience in fact.” Do you find it difficult to navigate that sometimes? MS: Not really. I mean, to me, and I’m not going to say this about anyone else, but I’m very sensitive to what’s going on visually and how it ties in musically. You have to talk to me to know what the hell’s going on, but when you think about the layers, there are parts of my family that are in there, there’s pictures of what I’m doing, how I’m doing it, and it’s all incorporated in there. Almost like Jackson Pollock’s cigarettes ended up in his paintings, it’s like there are parts of my being in there. To me it makes it a lot more organic. So, I mean it goes back to that whole thing, “You’ve got to enjoy my shit the way I want you to,” which is, in a perfect world, I have you in a room with a huge screen with me there playing live and sort of surrounding you with all the sound. But for practicality’s sake I put on a CD and just listen to it. You can devour my music any way you want to. Some people say it stands on its own, other people have said, “Oh my God, if you do not see it with the visuals you’ve missed a lot of the piece.” I, of course, wrote it to be its complete form, but if people just like the video, they can like just the video. If they like just the music they can like just the music. AW: When writing pieces, do you begin with audio, the visuals, or both? MS: I always start with the music. AW: Really? MS: Really. AW: Because your pieces seem so connected, audio and visual. They seem so connected to each other. MS: But, see, it’s always thought out. As the process is going along it’s always--there’s some sort of inspiration, but if you look at what’s been the most popular piece, like Harvey, I just came home and I played the thing, and then I orchestrated around it. But the next part was the video; a lot of these pieces that I’ve just done, just finished up, was just me going back to, “I’m just going to play an instrument, and I’m going to play it into effects, and I’m going to try to make something beautiful.” And then after I make that thing with just a melody, then I orchestrate it. And that’s with the sound and the visuals. So I knew what it was about, because I came home from my talk with Shawn about it, and it was that whole emotional experience...The thing that got me the most was the kids. And so my kids drew the pictures. And with that I orchestrated, if you want to say, their pictures, so that it’s all weaved in. So that’s how I do it.
It’s no different than, say, “Oh, well you added the cellos here, you added the horns here, you added that...” To me, the visuals are part of the orchestration process. AW: Do you see parallels between, like, almost using visuals as an instrument? An additional layer in your compositional process?

MS: Yeah! Well that’s how I see it. That’s how I’ve always seen it. To me it’s, it’s all devised as a live show. Jay Baxner wrote a review at some point, it was either of my show or the CD, but you never lose sight of the fact that that’s the singular instrumentalist that’s up on stage, that’s where the attention is. It’s still about the human that’s up there producing all of this. Your goal is that the human that’s making it...I mean, the videos, the sound that seems to come out of the speaker, is a direct result of what’s coming out of this human. AW: As far as I can tell, your visuals are always fixed, right? MS: Yes, ma’am. AW: Why do you create the visuals in this kind of “tape” fashion? Instead of making them a part of the live--responding to what’s happening aurally? MS: I like beer. [Laughs] And that means I like to play in bars, so I can share my art with people that also like beer. Sometimes, beer is served next to liquor. And when people get liquor, people get fucked up, and they shout things and all sorts of nasty-ass stuff. So, therefore...Malmo actually started, the first piece I did in this way, where amplitude did brightness and pitch did color filters. It worked, but then I was in the University of Memphis and I was in charge of teaching theory. There was this girl from Michigan, tough, tattoos, all sorts of lip rings and eye rings, and I was going through the process of everything I needed to do to make this piece...and she was like, “You know, Windows Media Player has been doing that shit for a long fucking time.” And I was like, you know? She’s right. I guess, who cares? And I had played it where there was so much noise going on, I mean it looked like the video was a fricking fireworks display. So, if you get the opportunity to play it in a concert hall where everybody’s really quiet it works out well. It’s the same reason that everybody’s like, “Well, you use Max/MSP, why don’t you use, like, Fiddle or Bonk as a following option?” Hell no! Because if something happens...that place that had the trolleys that would go by, somebody was playing Lizamander by Russell Pinkston. Well a trolley goes by, steel on steel, that’s throwing every frequency out. So that’s why I don’t do it. Now, like, what you and Eric do, he’s plugged in. You plug that guitar in direct you’re not going to have those issues. If video is triggered off of other things you’re not going to have those issues. But if I’ve got a lot of my stuff up, I don’t know what’s going to happen. AW: So it’s practicality? MS: But, here, the thing of it is, is, I’ve got to find a way how to do it. Part of it is, I’m a strict classical musician. I do not like to improv. As you know. I’ve forced myself to improv by the time I was 40. I still don’t like to do it, but it’s one of those things where, if I’m going to be a teacher I have to put myself in every--Like, I had to do a Kickstarter, for fuck’s sake. I did not like it, but if I’m going to tell students, “This is what you need to do, this is what everybody’s doing to make money as an artist now” I need to do it, so I had the experience so that I could share that with them. This piece that you’ll see this Fall, the harp and the accordion will actually control what’s going on with the video. AW: Really? MS: I know. Big steps for Marky.
I think now, with a dump of a house that I need to fix and 5 classes to teach in the fall, it’s insane, so it might end up just being fixed, but the goal is for it to be completely, for the video to react in some way to what we’re doing. AW: Your pieces, I think, are so emotionally driven, at least for me. They just tell such an emotional story, and really take their time and are so lovely that I feel introducing a live element to the visuals maybe would sometimes work against your aesthetic. I mean, introducing something live always, for me, makes it feel more frenetic and chaotic...do you think you’re going to have trouble with that? If fixed video works so well, why mess with it?
MS: Wanting to do something different. I’m working with other people now. Which has been fucking terrible in some respects, I mean, you may have met Becky, she was in the electronic music class in the Fall, but we’re in the rock band together, I mean, she’s an amazing musician, computer science and music, and I’m hopefully going to get this whole record just with the two of us doing these things. I’m going to play tuba and drum set and clarinet and all this other stuff, but, I want to grow. I don’t want to stay--not that I don’t like the stuff I’ve done before, but it’s time for me to do something else. I don’t know what it’s going to be, but it’s going to be whatever I start doing. But I think it’ll sound...I think I did my ambient thing, and I’ll still keep ambient elements, but I’ll probably try to incorporate more of my weird shit, too. And having a really good musician there that can really play their ass off helps in that endeavor. AW: I would imagine so! MS: I don’t think that element will change it that much. I think it will help to extend...if you’re in the audience, if I make certain elements of the film interact with what we’re doing on stage, it should hopefully connect the video even more to the performer and to what’s going on. I mean, I could totally fail! It could be like that thing John Bauer said, “Oh my God, don’t show that to anybody!” And maybe that’s what happens. But I’d like to think that it’s going to work and it’s going to work well. Now, I’m really happy because we’re going to premiere it at Third Practice at the University of Richmond, I mean, I think their speakers cost more than the University of Mary Washington’s budget. It’s just that nice, and they’re littered with like 20 of them around, plus the screen is to die for. AW: I’m exploring the idea of using visuals musically, so I’m going to attempt to score a couple pieces, the visual element of a couple pieces. So it’s half interviews with people, gathering support for my argument about visuals being used musically, and kind of enhancing the aural experience, and then getting a couple new pieces that I’ll do visuals for and they’ll actually be scored. And we’ll see if that works, I’m going to guess that it won’t. But it’s a good exercise in futility! MS: Why won’t it work? AW: I just don’t know if it’s really...worth it? I mean...first of all, there aren’t very many people that perform visuals live with musicians. So, there’s no way to really get a universal scoring system. So what I’m doing is only going to be usable for me. So does it really make sense to be doing it? Probably not. MS: I don’t know...I mean, I think people will take it and use it. AW: I hope so. That’s my hope. Because I find myself with these pieces that no one else could ever do. Because there’s no score for it, there’s no way for anyone else to know what I’m doing when, so it’s kind of like if I’m not there these things will never get played again. Which is kind of a shame. MS: Well, there’s that too, but the bigger issue is people will then...it will give them the reaction of, “Oh my God, I want to do this.” AW: What I’m kind of working out now is, audio has these universal parameters of pitch and time and tempo and rhythm, and so what would the equivalent of visuals be? You know, if you’re going to write a scoring system, you’ve always got brightness, you’ve always got contrast, you’ve always got saturation, you’ve got frame rate and playback speed...so how can I take those elements and make them musical and write them down?
And we’ll see... MS: Well, yeah, I mean, at least to me it’s all the same. You’ve got accents, right? And an accent can be...I mean, if there’s nothing on the screen it’s the same as silence. AW: Right!
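One speculative way to make the parameter list above concrete: a scored visual event that fixes the dimensions Weisling names (brightness, contrast, saturation, frame rate, playback speed) over a span of time. The structure and field names are my own invention, not an established notation.

from dataclasses import dataclass

@dataclass
class VisualEvent:
    start: float           # seconds from the top of the piece
    duration: float        # an empty screen for a span would be visual silence
    brightness: float      # 0.0 (black) to 1.0 (full)
    contrast: float        # 0.0 to 1.0
    saturation: float      # 0.0 (grey) to 1.0
    frame_rate: float      # frames per second of the source material
    playback_speed: float  # 1.0 = real time, 2.0 = double speed

# A two-event fragment: a dim, desaturated opening that snaps to full colour.
score = [
    VisualEvent(0.0, 8.0, 0.2, 0.5, 0.1, 24.0, 1.0),
    VisualEvent(8.0, 4.0, 0.9, 0.8, 1.0, 24.0, 2.0),
]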

MS: All music is just organizing sounds around silence. And the same is true for visuals, organizing light around dark. At the most global sort of thing. There’s pitch, amplitude, and they’re all related. And they’re durational art forms. You can’t just look at it and walk away. You have to be there, you have to be engaged, and you have to invest in it. Of course, nobody wants to invest in anything that’s not Facebook at this point...I’m hoping that your research shows that if you develop a scoring system it can be important. You just have to make the instrument playable for other people, and then just make a score. Atau Tanaka - 8/1/2012 Anna Weisling: You were one of the first artists to bring performance to computers, interacting with digital systems. How do you think performance has changed now that laptops are so ubiquitous? Atau Tanaka: Right. Well, first I need to correct that. I'm not one of the first. But I was one of the earlier ones. There was -- In fact, this history stretches back quite a long way, all the way to the Sixties. So I like to use the Experiments in Art and Technology in New York as a good example where composers like John Cage, well, they weren't working directly with computers, but the idea to bring electronics and perform with them live on stage, you know, certainly goes back at least that far. AW: Sure. AT: And so -- And if you look at a setup like what Cage had in the 9 Evenings, the setup is a table full of wires and equipment and stuff, and it sort of looks like a present-day laptop electronic performance. So in some ways some of those, you know, the Sixties were an important time already for that. Now, then when I started coming up in the late Eighties and early Nineties, the computing power wasn't the same as it was today. You needed to get on a mainframe computer if you wanted to do any powerful signal processing, and personal computers weren't powerful enough to do sound in real time yet. And certainly, you know, laptop computers, yeah, weren't anywhere near that sort of capacity. But we did have the possibility to perform with computer systems in the form of MIDI synthesizers. So the Yamaha DX7 was an important, you know, computer music performance instrument at the time because it was a synthesizer unlike any commercial synthesizer at the time. And so that was an important instrument. And then when we got to being able to do -- Well, so then the idea to do real-time signal processing on computers that could go on stage before it went to the laptop came through having workstation-type computers -- AW: Right. AT: -- that we hauled on stage with us. And that might include the NeXT machine. I never actually performed live with a NeXT machine, but friends at IRCAM did. And then eventually, slowly, we were able to get desktop computers doing signal processing in real time. And then by the late Nineties, you know, around '98, '99 or so, with the introduction of the G3 PowerBooks by Apple we were able to start to run MSP, you know, live on a laptop. So now that's the technical side. The musical side is that a kind of culture of performance had to be translated from existing instruments, from acoustical instruments through electrical instruments, like electric guitars. And even, you know, the synthesizers we were using certainly electronic music, musicians performed already with live synthesizers on stage, so the idea to use a DX7, a Yamaha DX7 on stage was nothing revolutionary. But I think it had to do with what you did with them or the
controls or the way you articulated sound with them because the Yamaha DX7 was a keyboard instrument and you could just perform on a keyboard with it. But what we were interested in were what we called alternate controllers and different ways -- different interfaces to perform with sound. Now, this would go on to become a field that we know today as NIME, you know, New Interfaces for Musical Expression. But NIME started as a conference in 2001, but that's not when the idea was invented, you know. So there was a certain number of us, and, you know, I certainly wasn't the only one, who were performing with sensor systems with new musical instruments live on stage, yeah, in the Nineties already. A huge part of this though is STEIM. So I don't know if in your dissertation you're covering the work of Michel Waisvisz, Nic Collins, and the artists who went through STEIM in Amsterdam, but this was the studio really where they were doing it already at that time, since the Eighties, and making software like LiSa, control interfaces like the Sensor Lab, and programming languages like Spyder to run on those things. So that's why I'm certainly not the first one. There was already, you know, the work of Michel Waisvisz, who is really a pioneer in this field and the studio like STEIM where young artists like myself could go to get into a community of people performing computer music live. And so then this, you know, it's in the late Nineties when all this became, well, really broadly accessible and available, when laptops could actually do all this and you didn't need a big studio anymore, you could just do it at home in your bedroom and then take it on stage. And so that whole period of early laptop music I think was really interesting from a performance point of view, but it was really different because it was about laptop performance and not necessarily about sensors and new instruments. It was about thinking of the laptop itself as a performance tool. And in many ways the laptop scene was a little bit the opposite. It was almost like anti-performance in that you could perform on a laptop, but it wasn't about gesture, it was about just being on stage and doing it live, but what you experienced live was the fact that you could have a very immobile performer. You didn't know whether he was checking his e-mail or changing a frequency on his EQ or whatever. And some people criticized laptop music for that, but that's I think precisely what laptop music wanted to do to challenge the audience. In some ways it was like going back to Musique Concrete and the idea of studio based electro-acoustic music that was performed in concert with orchestras of loudspeakers and things. I'm thinking of the GRM. That was a way to challenge traditional performance where people had to actually physically operate musical instruments and so forth. So in some ways, you know, my approach to performance even though it involves really kind of innovative and cutting-edge interfaces, musically ultimately it's sort of traditional because it places value on the performer, the performer's physical gestures and that being the kind of conduit and vehicle for musical performance. Now, all this then became kind of democratized with the arrival of the Nintendo Wii controller and all the accelerometers and stuff on mobile phones and stuff. And so anyone can do this now. And they're sort of consumer, you know, there are games and stuff that allow you to do this. And this is really fantastic.
So it's opened up the consciousness of people of the audience -- AW: Yeah. AT: -- you know, in what they're seeing when they go to a concert. And you have to imagine back in the Nineties we were doing this, it was really exotic for an audience to go to a concert and see people using sensors and waving their arms around and controlling computer-generated sound. These days it's a no brainer. But at the same time, the fact that anyone can do it doesn't mean that everyone does it. And the fact that there are loads of programs and demos out there that will make sound out of, you know,
hook up Nintendo Wii or a Bluetooth as an OSC touch controller to make sound, it doesn't automatically mean that a musical movement has started, and it doesn't automatically mean that good music is being done. So now that we have the possibility to perform there, there still needs to be -- we still need to transmit a culture of what visceral gestural performance on these interfaces is and how to do it. AW: Yeah. You -- I mean your performances in particular are very physical and there's a very, like, visual element and a visual appeal to watching someone like you perform, even when you're using, you know, two iPods or whatever in your hands, you're using your whole body really. So even if it's a sound performance, there's a very visual element to that. Do you take that into account when you're performing? AT: I think it's important to me. But I'm not taking-- I'm not choreographing a performance just to be visually interesting. AW: Okay. AT: So in that sense I do move my whole body. But I'm certainly not a dancer. It's nice to know that my performances are visually compelling, but I don't plan it that way. Right? So in that sense I'm not a theater director. I am a performing musician. And if I move around and that is--it can be seen, that's one of the ways that the music is communicated. But it is about the music first. And what you're seeing is a very natural intuitive and organic process, not a plan to be visually spectacular. And I think, once again, if we describe it in this way, all music can be seen this way. You have many different styles of performance, even on the same instrument or even on the same piece. You might take a violinist who -- like Nathan Milstein who will play Bach violin suites in a very cold and calculated way. Well, then you take a bravura sort of romantic violinist like Jascha Heifetz, and it's -- his whole body is getting into it. And you have everything in between. And that's where, yeah, I mean, a piano is a good example. You don't need to do anything but to strike the piano note in order for it to produce sound. But then when pianists dig in with their body and lean in and also lift off with their wrist and their arms, that part of the gesture isn't mechanically essential to producing sound on the piano, but it's really important to produce musical phrasing. AW: But now with something like a lot of the new instruments that maybe you use or more contemporary performers use, there isn't that, you know, history of how to play it, there's not that kind of pedagogy, and, you know, it's not a violin that's been basically the same for hundreds of years. So how do you find yourself making or figuring out the gestures to play these new instruments? AT: Well -- AW: They just kind of come to you? AT: I think this lack of history is, indeed, one of the problems for a performer and for the audience. The audience without having seen such a thing sort of have to figure it out on the spot. And then for the musician, yeah, there isn't a sort of history and a gestural vocabulary to build on. We're starting to get that. I mean, the field exists now. And so maybe that's the phase, the era that we're coming into now. And I think part of the interest for me has been, well, actually we never have enough time with technology because it changes all the time, but at the same time I've stuck with the BioMuse over all these years in the same way as to say well, you know, it takes 20 years to learn a violin; well,
I've played the BioMuse for 20 years now, so now I'm just getting to the point where I can reflect on that history, and some of this experience then can be transmitted to young performers. AW: Yeah. It's kind of striking to me as someone who's kind of coming into this field now where I think of a computer, a laptop performance, you know, I have a pretty strong opinion that the laptop is the instrument. But when I hear you talking about the Sixties and Seventies and Eighties, and it seems almost like you were taking something that wasn't a musical instrument and trying to get something musical out of it, whereas now my generation maybe just sees it as oh, yeah, you can play the laptop, of course. So I think in just the short amount of time there's been such an amazing change in just how you see this piece of hardware. AT: Right. Right. That's a really good point actually. So, I mean, back in our day we had to -- there needed to be kind of a cognitive leap, let's think of this as an instrument. AW: Yeah. AT: You know. But other instruments have gone through this as well. Turntables, for example. AW: Sure. AT: But now that we accept this, that's just the beginning. You know, that's not the end; that's the beginning. So that's when we start to make good music. And the fact that people have accepted the idea, okay, so we haven't achieved our goal, we've only gotten to the entry point of saying okay, now that we all can accept that these things are instruments, how do we make good music. AW: Yeah. Well, I guess moving a little bit back towards the visuals, as I always do, I think, and maybe I'm wrong, that visuals are kind of becoming more standard in performances in the academic world and just, you know, in the pop culture, where any time you go to see a musical event, the audience is kind of expecting to have some visual element. Do you agree with that? AT: Yeah. I think these things come and go. AW: Really? AT: It's fads, you know. AW: Um-hum. AT: And, again, it's probably because there's a critical mass that's built up of people doing it, and that critical mass is built up because it's become possible, you know. You know, back in the days while we were using programs like, well, there was an early program called Videodelic that Eric Wenger did, the guy who wrote MetaSynth. And there was also a program, software from STEIM called Imagine that they developed there. And yeah, so these were pretty advanced systems for visual performance back then. But that was just the beginning of it even being possible. AW: Sure. AT: You didn't get it all on one computer. You had a separate computer for visuals that was then connected by MIDI cable or by OSC from the computer that was doing the sound. You know. And then there's a phase when people -- that becomes exciting and more and more people do it, and always -- there's always the, sort of the people who are half a step ahead of the curve and saying, oh, now it's so trendy, you know, the anti-anti-people. AW: “I was doing this before it was cool.”
AT: Yeah. So, for example, a big festival like the Festival Presence Electronique in Paris organized by GRM, well, you know, Christian Zanesi, who was the curator of that, was very interested over the years in saying okay, well, let's not do visuals with electronics. AW: Interesting. AT: So there's both, I think. AW: Well, I, at least in the festivals I've gone to, it seems every year -- and I've only gone in the past five years -- but it seems every year there's more and more on the program that either has live visuals or a visual element or it's done with video. And I'm kind of wondering is this a reflection on people don't want to sit and listen to electronic music type pieces in the dark, are people bored, or is this just a trend, do you think? AT: Well, I think it's important to think about it. So there's no one answer. And sometimes people will just throw a video camera with something-- There's several different cases. Like, for example, instead of listening to tape music in the dark, okay, there's some visuals that go on, that could be a visual piece you also generated somehow so that's linked. It could be just eye candy. Or sometimes if there's a performance on a laptop or something or live coding, for example, in live coding you display the code. So that actually has a real function, to show what code is being programmed. Or in some laptop or smaller interface stuff where the gesture is hidden behind the laptop screen or it's just not -- with respect to the stage, people will very quickly throw a video camera up. AW: A live feed. AT: Yeah, a live feed. Now, you see that in other kinds of music as well. That's been happening in stadium rock for years now, right? AW: Right. AT: Performers are so far away in a rock stadium that, you know, they have a live feed so people farther away can see. But I think in each case, again, it's an artistic decision to think about why you're doing that. AW: Yeah. AT: And so, for example, I was a curator this year at the BEAM Festival at Brunel, the Brunel Electronic and Analogue Music Festival, at a performance of the Reactable. And so Carlos, the Reactable performer, came, you know, and he's very, very gestural and very dynamic. And then the question came okay, well, we can project the Reactable screen, you know, big, camera on it, so we can see what objects he's moving around because the Reactable's horizontal, and it's sort of limited size. And that question came up, and I actually as curator said well, hold on, let's say no, let's not do it. Because the hall wasn't really big, but medium size, and actually, the Reactable is not a small instrument. You know, it's a whole table top. So I said no, let's not do it, that will focus people on the performer, on the place where it's being performed, and they will be able to see what they see. And Sergi Jorda, the inventor of the Reactable, was of a similar opinion. So we both decided in that case, you know, to say oh well, so many people just automatically do a big screen version, let's think about it. And in this case we decided no, we don't need it. AW: Yeah. Working with classically trained musicians who are doing more electro-acoustic or experimental stuff I often run into the problem of discussing with them what's the thrust of this, what are we doing, and often I'll just get the response well, just do whatever you want and just project it wherever you want or project it on us or who cares, whatever. Which in some ways is all right, let's throw something up against a wall and see what works.
But in another way, it's kind of like if you're going to take this seriously and take me seriously, what I'm doing seriously, I'd like



But in another way, it's kind of like: if you're going to take this seriously, and take me and what I'm doing seriously, I'd like there to be some meaning behind this. And it seems sometimes people just want the iTunes Visualizer to go along with their music, which is not something I'm really interested in doing.

AT: Well, this problem is always there in art, you know -- this sort of more-is-better problem.

AW: Right. Yeah.

AT: We're particularly victimized by this in technological works because with technology, yeah, bigger is better, faster is better, more is better. But hold on -- no, that's not necessarily the case. And people just aren't accustomed to questioning that, and I think it's important to do it. And there's an added thing, in that in music we're really not equipped to think conceptually in the way the visual artists are. Musicians are not as often asked to think in that way, analytically or conceptually, about their work. So it's just a question of being accustomed to that or not.

AW: Well, and, you know, maybe I'm wrong, but it seems to me like if you go see a good jazz group, a lot of times it really is the silence or the restraint that makes something better than just good -- something transcendent in the silence and in people holding back and leaving space. And with visuals I feel like people just assume you're being lazy or something's gone wrong: oh, you're not showing a whole display of fireworks, something must be wrong. I don't know if that can ever be reconciled, this idea of restraint in visuals. Maybe in the academic world.

AT: Well, definitely. I mean, silence is important in music. But look at how much music there is where people forget that. So it's the same: a kind of silence is important in visuals, but a lot of people are going to forget that. But restraint is a hugely important thing, I think, in visuals. And if you think about Derek Jarman's films, that's what it's all about, you know. So yes, I think it's important. You're right to think about this.

AW: I'm glad to hear you say that. Well, I guess my last question is just about the software side of things. I feel like Jitter, Isadora, and all these programs for live performance are making visuals so much more accessible. Do you think it's realistic to say that there are parallels emerging in the way music and video are improvised together in performance?

AT: Parallels? That would be interesting. Yeah.

AW: I mean, we talked a little bit about the silence, like, restraint, and I think that can be applied certainly to both systems. Do you think there are more parallels?

AT: Yeah. Well, I think the fact that you can take a single programming environment like Max/MSP and generate sound and generate visuals from it means that there's a convergence, or potential convergence, that you can either use or not use. This means a couple of things. Since it's the same environment, you can treat the same data in a visual way and in a sonic way, and so there's an immediate coherence there. Or, since the programming paradigm is the same, even if the data is not common, the way the two are treated can be similar, and in that way you create parallels. That's where the trio I had, called Sensors_Sonics_Sights, had a visual component. And there were parallels. And I think we did leave them as parallels rather than making them intersect. You know, it was a trio, with myself and Laurent Dailleau on theremin and Cécile Babiole on visuals -- and, you know, Laurent and I played sound, and Cécile played visuals. But we were three musicians in a trio, just two making sound and one making images. And, again, in a very parallel and similar way, we all had gestural instruments; we all acted as a platform for this software.



But there wasn't a network cable connecting us. That's what people would ask us: oh, so it's a sound and image performance -- is there a network cable connecting the three of you? Well, no. Oh, are the visuals generated by the sound spectrum and stuff? No. But we composed the pieces that we performed so that the sound would go with the image, and if the movement of the image went along with the evolution of the sound, that was part of the composition and, mostly, of the performance and the communication in the ensemble. So, again, it was ensemble communication. So yeah, there were parallels, and we kept them parallel, and they didn't intersect through any algorithm or network connection.

AW: Yeah. That's interesting. The playability -- I think one of the most difficult things about visuals is that there is no physical instrument, you know, unless you make one, so those parallels are really up to the performer, performing and making it playable, which can be very difficult, I'm finding.

AT: Do you have -- Have you seen some of the videos that --

AW: No. I'm going to have to look this up, because I really like this. I mean, it sounds really great.

AT: The high-res ones I haven't even put online.

AW: Okay.

AT: So, yeah, I'll just send them to you directly.

AW: That would be great. All right. I'll e-mail you about that. Well, I don't want to take up too much more of your time. I really appreciate you doing this for me.

AT: No, it's great that you're interested. And you're asking the right questions. So it's a real pleasure.

AW: Great talking to you. Thank you again.



8. Appendix B - Interviewee Biographies

Jeff Herriott (JH) - University of Wisconsin-Whitewater, Wisconsin, USA
Jeff Herriott studied composition with Cort Lippe at the University at Buffalo, from which he received his Ph.D. in 2003. Jeff previously completed an M.M. in composition at Florida International University in Miami, where he studied with Orlando Jacinto Garcia and Fredrick Kaufman. Jeff's works have been performed and commissioned by ensembles and players including Michael Lowenstern, Guido Arbonelli, Arraymusic, The Syracuse Society for New Music, The Glass Orchestra, and Champ d'Action, and have been heard at a number of different festivals and venues. [14]

Atau Tanaka (AT) - Goldsmiths, University of London, United Kingdom
Atau Tanaka holds the Chair of Digital Media at Newcastle University, and is Director of Culture Lab. He has conducted research at IRCAM (Centre Pompidou), has been Artistic Ambassador for Apple France, and was the first artist to become researcher at Sony Computer Science Laboratory (CSL) Paris. His research covers biosignal interfaces, networked performance, and mobile locative media. With the democratization of these technologies, he looks at the societal impact of creative practice with digital media in major projects for the Research Councils UK (RCUK) Digital Economy Hub, Social Inclusion through the Digital Economy (SiDE). [15]

Bruce McClure (BM) - Brooklyn, New York, USA
Bruce McClure is a licensed architect living in Brooklyn, NY. In 1994 he began working with stroboscopic discs as an entry to cinematic pursuits. Since 1995 his film and live projector performances have been exhibited at numerous venues and festivals around the world, including the Rotterdam International Film Festival, the Toronto International Film Festival, the New York Film Festival’s “Views of the Avant-Garde,” the Whitney Biennial, the Walker Art Center, and the Wexner Center for the Arts, as well as in the UK, Italy, Australia, and elsewhere. Locally, he has performed at Chicago Filmmakers and at the Film Studies Center at the University of Chicago. [16]

Christopher Biggs - Western Michigan University, Michigan, USA
Chris Biggs is a composer and multimedia artist residing in Kalamazoo, MI, where he serves as Assistant Professor of Digital Composition at Western Michigan University. Chris' recent work focuses on the integration of live instruments with digital audio and video. His work has been presented across the United States and Europe, as well as in Latin America and Asia, and his music is regularly performed at conferences, festivals, and recitals. Chris is a co-founder and board member of the Kansas City Electronic Music and Arts Alliance (KcEMA). He has received degrees from the University of Missouri-Kansas City (D.M.A.), the University of Arizona (M.M.), and American University (B.A.), and has studied music composition with James Mobberley, Paul Rudy, Chen Yi, Zhou Long, João Pedro Oliveira, Dan Asia, and Craig Walsh. [17]

Mark Snyder - University of Mary Washington, Virginia, USA
Mark Snyder is a composer, performer, producer, songwriter, video artist, and teacher living in Fredericksburg, Virginia. Mark's multimedia compositions have been described as “expansive, expressive, extremely human.” Dr. Snyder is Assistant Professor of Music at the University of Mary Washington, teaching courses in electronic music, composition, and theory. He earned his D.M.A. from the University of Memphis, an M.M. from Ohio University, and a B.A. from Mary Washington College. He is a member of the American Society of Composers, Authors and Publishers (ASCAP), the Audio Engineering Society (AES), and The National Academy of Recording Arts and Sciences (NARAS). [18]

Luke DuBois - Polytechnic Institute of New York University, New York, USA
R. Luke DuBois is a composer, artist, and performer who explores the temporal, verbal, and visual structures of cultural and personal ephemera. He holds a doctorate in music composition from Columbia University, and has lectured and taught worldwide on interactive sound and video performance. He has collaborated on interactive performance, installation, and music production work with many artists and organizations, including Toni Dove, Matthew Ritchie, Todd Reynolds, Michael Joaquin Grey, Elliott Sharp, Michael Gordon, Maya Lin, Bang on a Can, Engine27, Harvestworks, and LEMUR, and was the director of the Princeton Laptop Orchestra for its 2007 season. He teaches at the Brooklyn Experimental Media Center, and is on the Board of Directors of the ISSUE Project Room. His records are available on Caipirinha/Sire, Liquid Sky, C74, and Cantaloupe Music. His artwork is represented by bitforms gallery in New York City. [19]

[14] http://www.uww.edu/cac/music/faculty/indpages/jeff.html
[15] http://www.ataut.net/site/biography
[16] http://www.whitelightcinema.com/McClure.html
[17] http://www.christopherbiggsmusic.com/cb/info/about.html
[18] http://marklsnyder.com/Mark/Mark_Snyder.html
[19] http://www.poly.edu/user/rdubois


9. Appendix C - Scores

spaces
for extended range glockenspiel, electronic sounds, and video performer

Performance notes:

Each page represents approximately one minute of time. The glockenspiel performer should hold a bow in one hand and a mallet in the other. The hardness of the mallet is at the performer’s discretion, though it is recommended not to use one that produces an overly bright tone.

Solid note heads followed by lines indicate bowing, with the length of the lines approximating duration. X’s indicate mallet strikes. An X directly in front of a solid note head should be performed as a mallet strike that seamlessly flows into a bowed tone. Erratic lines indicate a smooth bowing of the felt under the bars, which should be accomplished within a single bow length. If the felt is not easily accessible to the performer, then another part of the instrument that produces a sort of hushed white noise when bowed will suffice.

The glockenspiel performer should play the notes throughout as though they were written an octave lower (i.e. the first pitch is the lowest A on the instrument); however, the pitches will sound an octave higher than they appear on the page (accounting for the fact that glockenspiel notation is usually written two octaves lower than it sounds). The pitches in the tape part are for reference and do not correspond to the correct octave in which they sound.



9. Appendix C - Full Scores

Ariel (See Included Scores for Hard Copy)

Graphic Score

Visual Score

Overlaid Scores



Spaces (See Included Scores for Hard Copy)



10. Appendix D - Sylvia Plath’s Ariel

Stasis in darkness.
Then the substanceless blue
Pour of tor and distances.

God's lioness,
How one we grow,
Pivot of heels and knees! - The furrow

Splits and passes, sister to
The brown arc
Of the neck I cannot catch,

Nigger-eye
Berries cast dark
Hooks

Black sweet blood mouthfuls,
Shadows.
Something else

Hauls me through air
Thighs, hair;
Flakes from my heels.

White
Godiva, I unpeel
Dead hands, dead stringencies.

And now I
Foam to wheat, a glitter of seas.
The child's cry

Melts in the wall.
And I
Am the arrow,

The dew that flies
Suicidal, at one with the drive
Into the red

Eye, the cauldron of morning.


