Pierre Boulez
Anthèmes 2
pour violon et dispositif électronique (1997)
Technical Manual
UE 31160b ISMN M-008-07840-8 UPC 8-03452-00000-0 ISBN 978-3-7024-3244-7
© Copyright 1997 by Universal Edition A.G., Wien
Work realized in the studios of IRCAM
Musical assistant: Andrew Gerzso
1. Introduction

Overview
Anthèmes 2 is a composition for violin and live electronics. The violin is equipped with a microphone used both for amplification and sound pickup for processing by the computer. The amplified violin sound is sent to two speakers to the left and right of the violinist and is also projected in the concert hall – together with the processed sounds – using a sound spatialization system which serves to create a virtual sound space surrounding the audience. The computer processing involves the transformation of the live sound of the violin – using the various techniques described below – as well as sample sequencing. The processed sound is always sent to the spatialization system. The three elements – amplification, processing and spatialization – constitute the electronic part of the piece.
[Figure 1 - Overview: the violin is sent to amplification (L and R speakers), transformation/sampling and spatialization; the spatialized sound is projected to the perceived positions FL, F, FR, MR, BR, B, BL and ML surrounding the audience]

In the performance hall, the violinist may be placed either on a stage facing the audience or in the middle of the audience.

[Figure 2 - Frontal Disposition: violinist on stage facing the audience, with L and R speakers]

[Figure 3 - Central Disposition: violinist in the middle of the audience]
Philosophy of This Manual

This manual contains no reference to any specific technology for the electro-acoustic realization of Anthèmes 2. A distinction is made between the principles and processes necessary for the electro-acoustic realization of the piece and the specific means (i.e. the hardware and software technology available) used for the implementation of the piece. Any manual making reference to a specific technology would soon be outdated. (For those interested, however, this manual contains at the end an appendix with notes on the implementation of the first versions of Anthèmes 2 using technology developed at IRCAM.)

How to Use This Manual

The score with both the violin and electronics parts contains a series of cues, each of which is associated with one or more directives concerning the electronic part. At each cue this score indicates what type of processing is to be used and what the appropriate parameters are. This manual indicates how the processing modules are to be connected together and how the resulting sound is to be sent to the spatialization system. The manual also provides additional data and describes processes of an algorithmic nature.

The manual is organized in three main sections:

Section I – Technologies Needed: a description of the amplification, sound processing and spatialization technologies needed.

Section II – Processes and Data: for each section of the piece, the 'patch' of processing modules and possibly additional parameter data or the description of algorithmic processes.

Section III – Performance Guidelines.

In addition, the manual contains two appendices with historical and implementation notes.
2. Technologies Needed

Amplification

The violin is amplified in order to create a dynamic and timbral balance between the violin and the processed sounds. A contact microphone of the best quality placed on the instrument should be used, not a microphone on a stand in front of the violinist. Placing the microphone on the violin itself guarantees a pickup position which remains fixed and gives the violinist the greatest freedom of movement during performance.

Violin Sound Transformations and Samples

The violin sound is processed in real time using digital signal processing (DSP) modules. The piece uses standard (unless described otherwise) DSP modules, which are listed below. The main control parameters are described, as well as the total number of modules needed for the piece.

• Frequency Shifter + Delay: This combined module takes the input signal and sends it to a frequency shifter whose output is then sent to a delay module. Main parameters: frequency shift in Hz, denoted as 'Shift Freq.' in the score (positive values shift up and negative values shift down), and delay time in milliseconds (msec), denoted as 'Delay' in the score. (If the delay is zero this module becomes a frequency shifter only.) Each frequency-shifter/delay may have a level, denoted as 'Level' in the score, in dB where 0dB is maximum level. Number needed: 6.

[Figure 4 - Frequency Shifter / Delay: input to frequency shifter (freq), then delay (msec), then level control, to output]

• Ring Modulation + Comb Filter: This combined module takes the input signal and sends it to two different ring modulators. The outputs of the two modulators are mixed and sent to a comb filter. Main parameters: the two ring modulation frequencies, denoted as 'RM-freq1' and 'RM-freq2' in the score; the comb filter notch spacing frequency, denoted as 'Notch Freq' in the score; and the comb filter notch width, denoted as 'Notch Width' in the score, on a scale of 0 to 1 (usually set as narrow as possible while avoiding excessive ringing). Number needed: 1 combined module.

[Figure 5 - Ring Modulator / Comb: input to two ring modulators (freq), mixed and sent to a comb filter (notch freq, notch width), then level control, to output]
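The 2 RM/C signal flow can be sketched in Python. This is an illustrative, non-realtime sketch: the sample rate, the feedback-comb realization, and the mapping of 'Notch Width' onto a feedback amount are assumptions of this sketch, not specifications from the manual.

```python
import math

def ring_mod(signal, freq, sr=44100.0):
    """Multiply the input by a sine oscillator (classic ring modulation)."""
    return [x * math.sin(2.0 * math.pi * freq * n / sr)
            for n, x in enumerate(signal)]

def comb_filter(signal, notch_freq, feedback=0.9, sr=44100.0):
    """Feedback comb whose peaks/notches are spaced notch_freq apart.

    The delay in samples sets the spacing: D = sr / notch_freq.
    A narrower 'Notch Width' corresponds here to feedback closer to 1
    (an assumed mapping).
    """
    d = max(1, int(round(sr / notch_freq)))
    buf = [0.0] * d          # circular buffer holding the last d outputs
    out = []
    for n, x in enumerate(signal):
        y = x + feedback * buf[n % d]   # buf[n % d] is y[n - d]
        buf[n % d] = y
        out.append(y)
    return out

def rm_comb(signal, rm_freq1, rm_freq2, notch_freq, sr=44100.0):
    """Two ring modulators mixed, then sent to one comb filter (2 RM/C)."""
    mixed = [a + b for a, b in zip(ring_mod(signal, rm_freq1, sr),
                                   ring_mod(signal, rm_freq2, sr))]
    return comb_filter(mixed, notch_freq, sr=sr)
```

With an impulse input, the comb stage produces copies of the input every D samples, which is what creates the evenly spaced notches/peaks in the spectrum.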
• Infinite Reverberation: This module reverberates a sound with a very long decay time giving the impression of a sustained (infinite) sound. There should be no ringing or modulation in the sustained reverberated sound. Main parameter: reverberation decay time (denoted as ‘Reverb. Time’ in the score) in seconds (typically between 3 and 60). Number needed: 2
[Figure 6 - Infinite Reverberation: input to infinite reverb (decay in seconds), then level control, to output]
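One common way to realize such a module is a recirculating delay loop whose per-pass gain is derived from the desired decay time; this is an assumed implementation detail, not something the manual prescribes. As the reverberation time grows, the per-pass gain approaches 1, which is what produces the 'infinite' sustain. A minimal Python sketch:

```python
def feedback_gain(loop_delay_s, reverb_time_s):
    """Per-pass gain g so the loop decays by 60 dB in reverb_time_s.

    A 60 dB decay is a factor of 10**-3, so after reverb_time_s /
    loop_delay_s passes the accumulated gain must equal 10**-3:
        g = 10 ** (-3 * loop_delay_s / reverb_time_s)
    """
    return 10.0 ** (-3.0 * loop_delay_s / reverb_time_s)
```

For a 50 ms loop, a 3-second 'Reverb. Time' gives a gain of about 0.89, while a 60-second time gives a gain just below 1.0, i.e. a near-frozen sustain.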
• Harmonizer + Delay: This combined module takes the input signal and sends it to up to four harmonizers whose output is then sent to a delay module. Note that two output schemes are needed: 1) all four harmonizer/delay units are summed (mixed) and then sent to one output, 2) the first two harmonizer/delay units are sent to individual outputs.
Main parameters: transposition interval in half steps, denoted as 'Transp.' in the score (positive values transpose up and negative values transpose down), and delay time in milliseconds, denoted as 'Delay' in the score. If the delay is zero this module becomes a harmonizer only. Each harmonizer/delay module has a level, denoted as 'Level' in the score, in dB where 0dB is maximum level. Number needed: 4.
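The score parameters map onto DSP settings in a standard way: a transposition in half steps becomes a playback-rate ratio of 2^(semitones/12), and a level in dB becomes a linear gain. A small Python sketch of this bookkeeping for a four-voice unit (the dictionary keys are illustrative, not from the manual):

```python
def transposition_ratio(semitones):
    """Playback-rate ratio for a transposition in half steps
    (positive values transpose up, negative values down)."""
    return 2.0 ** (semitones / 12.0)

def db_to_linear(level_db):
    """'Level' in dB, where 0 dB is maximum level."""
    return 10.0 ** (level_db / 20.0)

def harmonizer_bank(voices):
    """voices: list of (transp_semitones, delay_msec, level_db) tuples
    as read from the score for a 4 HRD unit.  Returns derived settings."""
    return [{"ratio": transposition_ratio(t),
             "delay_msec": d,
             "gain": db_to_linear(l)} for t, d, l in voices]
```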
[Figure 7 - Harmonizer: input to four harmoniser/delay units (each with transposition, delay in msec and level controls); outputs OUT 1 and OUT 2 are individual, and all four units are also summed to OUT 1-4]

Sometimes several modules of the same kind are used together as one functional unit. In that case their outputs are mixed together to produce one output (for which there is a global level control) that will be sent either to another module or to the spatialization system.

[Figure 7a - Mixing Modules: several DSP modules mixed together, with a global level control, to one output]

• Sampler: This module is used for playing sequences of pre-recorded sound samples. The sampler should contain the following collections of violin samples:
- Pizzicati with hard attack played forte (called 'pizz').
- Pizzicati played forte with the attack softened in the attack portion of the sampler's envelope (called 'pizz doux').
- Long notes played mezzo-forte (called 'long').
- Long notes played piano with lead mute (called 'long lead mute').
- Short notes played arco fortissimo (called 'arco').
- Long notes made of single sine waves (called 'sinus').

In order to designate the DSP and sampling units described above, the following symbols are used both in the DSP patch for each section and in the score:
• Frequency shifter with or without delay:
- FS: one frequency shifter without delay.
- FSD: one frequency shifter with delay.
- 6 FS: six frequency shifters without delay.
- 6 FSD: six frequency shifters with delay.
• Ring modulation with comb filter:
- 2 RM/C: two ring modulators mixed to one comb filter.
• Infinite reverberation:
- IR: one infinite reverberation.
• Harmonizer with or without delay:
- HR: one harmonizer without delay.
- 2 HR: two harmonizers without delay.
- 4 HR: four harmonizers without delay.
- HRD: one harmonizer with delay.
- 4 HRD: four harmonizers with delay.
• Sampler:
- S: one sampler voice.

When DSP units are combined, they are separated with a hyphen. For example, S-IR signifies a sampler whose output is sent to an infinite reverberation module.
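The symbol conventions above are regular enough to parse mechanically, which can be useful when building the patches for each section. A hypothetical Python helper (the function names and dictionary are our own, not part of the manual):

```python
import re

MODULES = {
    "FS": "frequency shifter",
    "FSD": "frequency shifter with delay",
    "RM/C": "ring modulators mixed to one comb filter",
    "IR": "infinite reverberation",
    "HR": "harmonizer",
    "HRD": "harmonizer with delay",
    "S": "sampler voice",
}

def parse_unit(token):
    """'6 FSD' -> (6, 'FSD'); a bare symbol such as 'IR' means one module."""
    m = re.fullmatch(r"(?:(\d+)\s*)?([A-Z/]+)", token.strip())
    if not m or m.group(2) not in MODULES:
        raise ValueError("unknown unit: %r" % token)
    return (int(m.group(1) or 1), m.group(2))

def parse_chain(symbol):
    """Hyphen-combined units: 'S-IR' is a sampler whose output is sent
    to an infinite reverberation module."""
    return [parse_unit(t) for t in symbol.split("-")]
```

For example, `parse_chain("S-IR")` yields a two-stage chain, and `parse_unit("6 FSD")` reports six frequency-shifter/delay units.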
Spatialization

This manual is written at a time (2005) when it seems clear that loudspeakers as we know them will probably be replaced by sound diffusion systems (such as Wave Field Synthesis) equipped with panels capable of simulating loudspeaker positions. This is why this manual makes reference to perceptual positions in space and not to the physical location of individual loudspeakers.

The amplified violin sound together with the processed sound is projected in the space surrounding the audience using a sound spatialization system. The amplified violin, transformed violin and sequences of violin samples are the possible inputs, called sources hereafter, to the spatialization system, which should be capable of processing up to six independent sound sources. For each source three parameters are specified: 1) the direction of the source (i.e. where the source's sound is perceived to be coming from), 2) the presence of the source (whether it is perceived to be 'near' or 'far'), 3) the presence of the virtual room in which the source is projected. It is important to be able to change any of these three parameters (especially direction and source presence) dynamically and accurately in time.

Direction

The system should be capable of placing any given source anywhere on the 360 degrees of an imaginary circle surrounding the audience. Six main directions are used: front left (FL), front right (FR), middle right (MR), back right (BR), back left (BL) and middle left (ML). Two other positions are used as well: front (F) and back (B).
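For implementation it can help to pin the eight named positions to nominal angles on the circle. The manual gives no numeric angles, so the clockwise mapping below (0 degrees at the front) is purely an assumed convention for illustration:

```python
# Assumed clockwise angles in degrees, 0 = front; illustrative only.
POSITIONS = {
    "F": 0, "FR": 45, "MR": 90, "BR": 135,
    "B": 180, "BL": 225, "ML": 270, "FL": 315,
}

def rotate(position, steps=1):
    """Next perceived position moving clockwise around the circle."""
    order = ["F", "FR", "MR", "BR", "B", "BL", "ML", "FL"]
    return order[(order.index(position) + steps) % len(order)]
```

Such a table makes clockwise rotations and sweeps (described below in the score's directive notation) easy to step through position by position.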
[Figure 8 - Spatialization: up to six sources (src1 ... src6) are sent to the spatialization system, which projects them to the perceived listening positions F, FL, FR, ML, MR, BL, BR and B around the audience]

Sometimes the direction of the spatialization is to be chosen at random. Here is a summary of the cases encountered in the score:
• R: indicates that one of the six main positions (FL, FR, MR, BR, BL, ML) should be chosen at random.
• R [ ]: indicates that one of the positions enclosed in the brackets should be chosen at random. For example, R [ML, FL, FR, MR] indicates that one of the four positions ML, FL, FR, MR should be chosen at random.
• R every 525msec: indicates that one of the six main positions should be chosen at random every 525msec until further indication.

The notation R: followed by the two paths B -> BL -> ML -> FL -> F and B -> BR -> MR -> FR -> F (drawn in the score as two arcs around the circle) indicates that one of the two spatialization paths should be chosen at random.

Source and Virtual Room Presence

Intuitively, the source presence parameter controls whether the source sounds 'near' or 'far'. The perceptual presence of the room implies a reverberation level and decay time. In practice, both the source and room presence can be described completely with four parameters: direct sound level, early reflections level, reverberation level and reverberation time. It should be noted that the reverberation time is different for the low, middle and high frequencies. These can be obtained using the following three scalers, which are constant throughout the piece: low 1.0, middle 1.0, high 0.5.

Notation

A spatialization directive in the score has the following structure: direction (which may be static or dynamic), direct sound level, early reflections level, reverberation level and reverberation time. (The levels given describe the energy associated with the time sections of the room impulse response. The time sections are 1) direct: 0–20msec, 2) early: 20–40msec, and 3) reverb: 40msec to the end of the response.) For example, the spatialization directive:

F/–11/–18/–17/2.0

means the sound is heard coming from the front with a direct sound level of –11dB, an early reflections level of –18dB, a reverberation level of –17dB and a reverberation time of 2.0 seconds. Again, the reverberation times of the low, middle and high frequencies can be obtained using the scalers given above. In this example the low, middle and high frequency times become respectively 2.0, 2.0 and 1.0sec. Since the scalers never change and the values can easily be deduced, the spatialization directive omits this information.

The direct sound level may also change dynamically in time. The directive:

F/–11 -> –3 in 5sec/–18/–17/2.0

indicates that the direct sound level goes from –11 to –3dB in 5 seconds with all other parameters remaining constant. A dynamic direction directive may be combined with a dynamic direct sound level directive. For example, the directive:

R every 525msec/–11 -> –3 in 5sec/–18/–17/2.0

indicates that one of the six main positions should be chosen at random every 525msec and that the direct sound level goes from –11 to –3dB in 5 seconds with the last three parameters remaining constant.

The direction may also consist of a rotation through 360 degrees. A rotation directive drawn in the score as a circular arrow marked in 18" indicates that the sound source makes a complete clockwise rotation of 360 degrees in 18 seconds and continues to rotate at that speed until further indication. The source may start the rotation from any point on the circle. The directive F in 18" means the same as above except that the rotation starts at position F. A directive showing a clockwise and a counter-clockwise arrow marked 500msec indicates that either a clockwise or a counter-clockwise rotation should be chosen at random and that in either case the full rotation should take 500msec.

Another kind of continuous spatialization consists of back and forth sweeps between two positions. For example, a sweep directive drawn in the score as an arc from BL through F to BR and marked 16" indicates that the sweep should start at position BL, sweep clockwise through all the points until position BR, then sweep counter-clockwise back to position BL. The total duration of the BL -> F -> BR -> F -> BL sweep should be 16 seconds. The sweeping movement should continue until further indication.

Synchronization of the Violin and Electronics

The score contains cues serving to indicate when one or more electronic sound processes (transformation, sampling, spatialization etc.) are to be started and stopped, and when parameters and data for the modules should be set. In certain sections of the piece (I, II and VI.3, for example) the cues can be triggered manually (with a button or mouse click, for example), but in others (such as cues 3 to 9 in the Introduction – 'Libre') the triggering of the cues should be automated via a system capable of following the performer in real time. The sections which need an automated score following system are: Introduction ('Libre'), I and VI.1. The automation is mainly needed to guarantee synchronization with the rubatos of the violinist.

3. Processes and Data

For each section of the piece the DSP 'patch' is given, which indicates which processes (transformation, sampling etc.) are used as well as the spatialization source the output is directed to (indicated with src1, src2 etc. for 'source 1', 'source 2' etc.). Recall that the values for the parameters of the DSP modules, sampling and spatialization are given in the score. Where sampling sequences or spatialization patterns are generated in an algorithmic fashion in real time, the algorithm is given here in pseudo code or plain English.

Introduction – Libre

See Figure 9 for the DSP patch. This section requires a score following system for cues 3–9.

[Figure 9 Libre: DSP patch routing the violin through sampler (S), infinite reverberation (IR) and frequency shifter (FS) modules to src1–src4]

Sections /I, I/II, II/III, III/IV, IV/V and V/VI.1 – Libre

See Figure 10 for the DSP patch, which is the same for all these sections.

[Figure 10 Sections /I, I/II, II/III, III/IV, IV/V, V/VI.1 – Libre: DSP patch routing the violin through harmonizer (HR), 2 RM/C and infinite reverberation (IR) modules to src1–src5]
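A spatialization directive string such as F/-11/-18/-17/2.0 can be parsed mechanically. The Python sketch below is an illustration only (the score prints the minus signs as dashes, which are normalized here); it also derives the per-band reverberation times from the constant scalers:

```python
def parse_directive(text):
    """Parse 'direction/direct/early/reverb/time', e.g. 'F/-11/-18/-17/2.0'.

    A dynamic direct-level field like '-11 -> -3 in 5sec' becomes a
    (start_dB, end_dB, seconds) ramp.  Levels are dB; time is seconds.
    """
    text = text.replace("–", "-")          # normalize score dashes
    direction, direct, early, reverb, rt = text.split("/")
    if "->" in direct:
        start, rest = direct.split("->")
        end, secs = rest.split("in")
        direct_level = (float(start), float(end),
                        float(secs.replace("sec", "")))
    else:
        direct_level = float(direct)
    return {
        "direction": direction.strip(),
        "direct": direct_level,
        "early": float(early),
        "reverb": float(reverb),
        "reverb_time": float(rt),
    }

# The low/middle/high reverberation times follow from the constant scalers.
SCALERS = {"low": 1.0, "middle": 1.0, "high": 0.5}

def band_reverb_times(reverb_time):
    return {band: reverb_time * s for band, s in SCALERS.items()}
```

For the example directive above, `band_reverb_times(2.0)` reproduces the 2.0, 2.0 and 1.0 second values given in the text.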
I – Très Lent

See Figure 11 for the DSP patch.

[Figure 11 I - Très Lent: DSP patch routing the violin to src1–src4]

II – Rapide, dynamique

See Figure 12 for the DSP patch.

[Figure 12 II - Rapide, dynamique: DSP patch routing the violin through 4HRD (src1), 6FSD (src2), a sampler S (src3) and a sampler with infinite reverberation S-IR (src4)]

The spatialization of the frequency-shifter/delay (6FSD) and the sampler (S) are interdependent. The position of the 6FSD source changes randomly every 525msec. The sampler's source uses the last position used by the 6FSD: when the position of the 6FSD changes, its previous position becomes the new position for S, and so forth. The dynamic curve of the six frequency-shifter/delays is constant: 0, –5, –10, –15, –20, –25dB.

III – Lent

See Figure 13 for the DSP patch.

[Figure 13 III - Lent: DSP patch routing the violin to src1–src5]

This section makes use of two different kinds of processes which generate musical material in real time. These will be called the 'chaotic' and 'cloud' processes.

'Chaotic' Process

This process (like the 'cloud' process below) is used in bars 5–33 and again in bars 43–58 and consists of a series of cycles. One cycle is made up of a number of note events followed by a number of rest events. The process uses the following data:
• a set of pitches
• the number of note events in one cycle
• the number of rest events in one cycle
• the event duration (which is the same for note events and rest events)
• a constant set of dynamics (0, –6, –9, –12) in dB

Each cycle begins with a number of note events. Each note event is generated as follows:
• a random process chooses 0 or 1, weighted 3:1 in favor of 0
• if the random choice is 0, a grace note followed by a note is generated in the following manner:
- for the grace note:
› choose at random a note from the set of pitches
› choose at random a dynamic from the set of dynamics, then subtract 9dB
› set the duration to 40% of the event duration
› choose at random a 'pizz' or 'arco' sample
- for the note:
› choose at random a note from the set of pitches
› choose at random a dynamic from the set of dynamics, then subtract 18dB
› set the duration to 60% of the event duration
› choose at random a 'pizz' or 'long' sample
- play the grace note and note
• if the random choice is 1, a note is generated in the following manner:
› choose at random a note from the set of pitches
› choose at random a dynamic from the set of dynamics, then subtract 24dB
› set the duration to the event duration
› play the note with the 'pizz doux' sample

The cycle ends with a number of rest events during which the process does nothing for a time equal to the event duration multiplied by the number of rest events. Then the cycle begins again with the note events, and so on. The process is stopped on cue.

This section uses TWO superposed processes of this kind in parallel. The first process uses the following parameters:
• the set of pitches given at the appropriate cue in the score
• number of note events in one cycle: 9
• number of rest events in one cycle: 3
• event duration: 200msec
The second process uses the following parameters:
• the set of pitches given at the appropriate cue in the score
• number of note events in one cycle: 11
• number of rest events in one cycle: 4
• event duration: 175msec

'Cloud' Process

This process (like the 'chaotic' process above) is used in bars 5–33 and again in bars 43–58. It uses the following data:
• a set of pitches
• the number of pitches in the set
Each time the process is triggered at the appropriate cue, it executes the following steps a number of times equal to twice the number of pitches in the pitch set:
• choose at random a note from the set of pitches
• play the note with both the 'pizz doux' (with a duration of 200msec) and the 'long' (with a duration of 1000msec) samples
• wait for 20msec
The process then stops.

IV – Agité, instable

See Figure 14 for the DSP patch.

[Figure 14 IV - Agité, instable: DSP patch routing the violin to src1–src4]

Spatialization

The spatialization of the violin sound is done using position sequences. The amount of time the violin remains at any given position is determined by the basic unit duration of 250msec, which is multiplied by a number of units. Using the first set of data in the score, for example, the sequence of units 7 2 1 4 etc. becomes 1750, 500, 250, 1000msec, and the sequence of positions and durations becomes: BR 1750, FR 500, FL 250, FR 1000 ... etc. Six sequences are triggered during this section at cues 1, 2, 3 and cues 10, 11, 12.

'Cloud' Process

This process is similar to the one used in section III. Here it is used in bars 12–24. It uses the following data: a set of pitches. Each time the process is triggered at the appropriate cue, it executes the following steps every 100msec for 2 seconds:
• choose at random a note from the set of pitches
• play the note with both the 'pizz' (with a duration of 200msec) and the 'long' (with a duration of 1000msec) samples
At the end of 2 seconds the process stops.

V – Très lent

See Figure 15 for the DSP patch. This is the same as the patch for III – 'Lent' except for the ring-comb module, which is absent. This section makes use of the two processes used in III; see the notes on that section. Only the pitch sets (given in the score) differ from that section.

[Figure 15 V - Très lent: the same patch as Figure 13 without the ring-modulator/comb module]
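The 'chaotic' process described under III translates directly into code. The Python sketch below is illustrative only: the returned tuple layout, the function names and the use of a seedable random generator are our own choices, not part of the manual.

```python
import random

def chaotic_note_event(pitches, dynamics=(0, -6, -9, -12),
                       event_dur_ms=200, rng=random):
    """Generate one note event of the 'chaotic' process.

    Returns a list of (pitch, level_dB, duration_ms, sample_name)
    notes to play: either a grace note plus a note, or a single note.
    """
    # choose 0 or 1, weighted 3:1 in favor of 0
    if rng.choices([0, 1], weights=[3, 1])[0] == 0:
        grace = (rng.choice(pitches), rng.choice(dynamics) - 9,
                 0.4 * event_dur_ms, rng.choice(["pizz", "arco"]))
        note = (rng.choice(pitches), rng.choice(dynamics) - 18,
                0.6 * event_dur_ms, rng.choice(["pizz", "long"]))
        return [grace, note]
    return [(rng.choice(pitches), rng.choice(dynamics) - 24,
             event_dur_ms, "pizz doux")]

def chaotic_cycle(pitches, note_events, rest_events, event_dur_ms,
                  rng=random):
    """One cycle: note_events note events, then a silence lasting
    rest_events * event_dur_ms.  Returns (events, rest_ms)."""
    events = [chaotic_note_event(pitches, event_dur_ms=event_dur_ms, rng=rng)
              for _ in range(note_events)]
    return events, rest_events * event_dur_ms
```

The two superposed processes of section III would then be two instances of `chaotic_cycle` running in parallel, with parameters (9, 3, 200) and (11, 4, 175) and the pitch sets given at the cues in the score.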
VI.1 – Allant

See Figure 16 for the DSP patch. This section requires a score following system. The spatialization process at all the places marked 'moitié-crins/moitié bois' (bars 6, 12, 17 etc.) is always the same.

[Figure 16 VI.1 - Allant: DSP patch routing sampler (S) and infinite reverberation (IR) modules to src1–src3]

VI.2

See Figure 17 for the DSP patch.

[Figure 17 VI.2 - Calme, régulier/Agité/Brusque/Calme, retenu: DSP patch routing the violin through FS (src1), 4HRD (src2), S-IR (src3), S (src4), 2RM/C (src5) and IR (src6)]

VI.3 – Calme

See Figure 18 for the DSP patch.

[Figure 18 VI.3 - Calme: DSP patch routing two sampler/infinite-reverberation (S-IR) chains to src1 and src2]

'Cluster' Process

This section uses two sample cluster processes (similar to the ones used in sections III, IV and V) played simultaneously. The musical material is based on a main pitch and a set of pitches constructed around the main pitch.

The first process creates a cluster from the set of pitches. Each time the process is triggered at the appropriate cue, it executes the following steps every 100msec for 2 seconds:
• choose at random a note from the set of pitches
• play the note with both the 'pizz' and the 'long' samples, with durations of 200 and 1000msec respectively
At the end of 2 seconds the process stops.

The second process creates a burst of the main pitch referred to above. Each time the process is triggered at the appropriate cue, it executes the following steps every 100msec for 2 seconds:
• take the main pitch
• play the note with both the 'pizz' and the 'long' samples, with durations of 200 and 1000msec respectively
At the end of 2 seconds the process stops.

4. Performance Guidelines
The following comments are based on the experience gathered through approximately fifty performances of Anthèmes 2 to date (2005) in a great variety of halls. As a general rule, the amplified violin and the electronics should be equally balanced. This is essential to the spirit of the piece and especially to the sections where it is desired that the listener should not be able to distinguish the live violin from the electronic sounds. However, the natural violin sound doesn’t blend well with the electronic sounds. By amplifying the violin just enough to change its timbre (without making it louder) one can achieve a satisfactory mix.
The level of amplification can be set by having the violinist play, without electronics, the excerpts of section I for the sustained passages and of section II for the pizzicatos. The same excerpts can then be played with electronics. It may be necessary to add some reverberation to give the violin sound more body.

• Introduction – Libre: The level of the infinite reverberation (of the violin and samplers) at the end of the decrescendo should remain loud enough so that the cutoff at cue 11 by the violin can be heard clearly.

• Sections /I, I/II, II/III, III/IV, IV/V and V/VI.1 – Libre: The infinite reverberation should be clearly audible for approximately 5 seconds after the violinist finishes the harmonic at the end, before the decrescendo starts.

• I – Très lent: Particular attention should be given to balancing the sampler dyads with the live violin sound as of bar 2.

• II – Rapide, dynamique: In this section the balance should be such that the listener is not able to distinguish the live violin sound from the electronic sounds. The confusion between the two is deliberate.

• III – Lent: During the 'chaotic' processes the balance should be such that the listener is not able to distinguish the live violin sound from the electronic sounds. The 'cloud' processes should be distinctly heard but remain in the background. Particular attention should be given to balancing the sampler dyads with the live violin sound as of bar 36.

• IV – Agité, instable: The spatialization of the violin should be forceful. The sampler sequences starting at bars 1 and 25, as well as the 'cloud' processes starting at bar 12 (cue 5), should be distinctly heard but remain in the background.

• V – Très lent: The same remarks apply as for III regarding the two processes. Particular attention should be given to balancing the sampler dyads with the live violin sound in bar 1, then as of the end of bar 20.
• VI.1 – Allant: The level of the spatialization at all the places marked 'moitié-crins/moitié bois' (bars 6, 12, 17 etc.) should be such that the listener is not able to distinguish the live violin from the electronic part, which should be forceful.

• VI.2: Four different kinds of musical material are present in this section. In order of appearance they are:
- Calme, régulier: The sound of the frequency shifter should be light and not too aggressive.
- Agité: The harmonizer's level should be such that the listener is not able to distinguish the live violin sound from the harmonizer. The sampler chords should be distinctly heard but remain in the background.
- Brusque: Both the solo violin and the sampler sequences should be forceful and aggressive.
- Calme, retenu: The violin arpeggios should be clear but light. The sinus chords, 2RM/C and IR should be well blended. The 2RM/C's resonance should be clearly heard but not overbearing.

• VI.3 – Calme: The two sampler clusters with infinite reverberation should be distinctly heard but remain in the background. At the end of the decrescendo starting in the second half of bar 209, the clusters should remain loud enough so that the cutoff at cue 31 by the violin is clearly heard.
5. Historical Notes

The original version of Anthèmes for unaccompanied violin was performed for the first time on the 19th of November, 1991 during a concert in honor of Alfred Schlee, former director of Universal Edition and long-time friend of Pierre Boulez. The score of this version of the piece was published by Universal Edition and corresponds to a version slightly modified in May 1992.

The musical origin of Anthèmes is to be found in an unused part of one of the earliest versions of ...explosante-fixe... . If one compares portions of the score of Anthèmes (the 'rapide' pizzicato section towards the beginning of the piece, for example) with the violin part of the Originel movement in ...explosante-fixe..., one can find traces of Anthèmes. In Originel the musical texture of the writing is somewhat uniform and therefore unsuitable as material for a solo piece. Therefore, one of the aims in modifying this initial material was to make a score where the writing was more 'differentiated' and the 'figures more characteristic', according to Boulez. If one examines the score of Anthèmes in its totality, one finds little resemblance to the Originel violin score.

The habit of taking a small fragment of an existing score and developing it, frequently beyond recognition, can be found elsewhere in Boulez's work: Dérive I is derived from Répons, which in turn is related to Messagesquisse; parts of an early version of Notations found their way into Pli selon pli. This practice is in keeping with Boulez's more general approach to musical composition, which involves taking a small musical idea and making it 'proliferate'. Typical also in Anthèmes is Boulez's habit of creating a small number of families of musical writing from which the piece is created in a sort of braided fashion. A musical family will typically be based on a type of writing (based on rules, a method of proliferation, or a principle of generation) which guarantees the family's musical identity and cohesion.
Strands of the material corresponding to a given family can then be found woven throughout the composition. This approach is very clear in ...explosante-fixe..., for example. All of the material in the first movement of the piece (Transitoire VII) is based on seven families, each of whose identities can be easily heard. In Anthèmes this practice is less obvious, due mainly to the brevity of the piece, but present nonetheless.

In 1995 Pierre Boulez decided to compose an electro-acoustic version of the piece called Anthèmes 2. The realization of this version was entrusted to Andrew Gerzso, who had already done the electro-acoustic realizations for Répons (1981), Dialogue de l'ombre double (1986) and ...explosante-fixe... (1991). In keeping with the spirit of these three compositions, Anthèmes 2 also takes a 'live' approach, that is, one in which all the electronic material is generated in 'real time' during the performance. (In other words, there is no pre-recorded material which is simply played back during the concert.) The point of departure for this new project was the May 1992 version of the score.

The first question that needed to be dealt with was how to coordinate the performance of the soloist with that of the computer. In Répons this coordination is done manually, with the computer operator following the score and conductor and starting the appropriate program at the right time. In ...explosante-fixe... the coordination is completely automatic, using what is called a 'score follower'. With this approach the computer 'listens' to the soloist and compares what the soloist is playing with the score (which has been previously stored in its memory) in order to establish the precise moment for triggering modifications of the sound, using modules which affect the pitch, timbre, timing and spatial location of what is played by the soloist. Therefore, in the preparatory work for Anthèmes 2, a number of experiments were made to establish the different musical parameters of the violin (pitch, dynamics, time, etc.)
which could be detected for use in the score following. Then followed a large number of sketches aimed at choosing the types of interaction that could exist between the violin and the computer. A natural consequence of this was that as the work advanced section by section, the piece was progressively re-written to varying degrees in order to take advantage of the new musical possibilities offered by the inclusion of electronics. It soon became clear that the electronics would fulfill three roles: 1) to modify and extend the structure of the sound of the violin, 2) to modify and extend the structure of the families of musical writing mentioned above, 3) to create a spatial element which enables the musical material to be projected in space. One example of the first role can be found in the treatment of the harmonics played by the violin. Viewed in its simplest form, the principle of the harmonic relies on the specific resonance pattern of a string in order to produce the desired harmonic. In the electronic treatment, the harmonic sound of the violin is first transposed, then sent through a module to enrich the spectrum which, in turn, is then sent through a resonant structure whose main resonance pitch is the same as the desired harmonic. In this way the electronics is used to enrich the spectrum of the instrument while respecting the
basic principle of the harmonic on the violin.

An example of the extension of the musical families can be found in the pizzicato section near the beginning of the piece. This section, written in the form of a canon, is based on the idea of shifting a musical structure over time in a very precise way. The electronic part extends this principle by using transposition modules combined with time-delay modules which together multiply the number of musical lines, each of which is transposed and shifted in time (just as in a canon). Furthermore, the transposition and delay patterns are composed in such a way as to clarify or blur the original musical line.

The use of space in Anthèmes 2 goes beyond Boulez’s use of space in Répons, Dialogue de l’ombre double or ...explosante-fixe... In those pieces spatialization is used, for example, to articulate the structure of a musical phrase (as in Dialogue), a chord (as in Répons) or a musical process (as in ...explosante-fixe...). In all cases the role is that of articulation, that is, outlining, describing and clarifying the structure of a musical idea. In these pieces there is also a very literal correspondence between the spatial location of the sound one hears and the position of the speaker itself. Anthèmes 2, on the other hand, uses a system based on a perceptual approach to spatial hearing, which enables the listener to hear sounds clearly in this or that position in space, independently of the position and number of speakers used. The system can also be used to create foreground/background effects. This latter feature is particularly useful for clarifying or blurring the musical material by projecting the sound to the foreground or the background of the musical listening space.

The first performance of Anthèmes 2 took place at the Donaueschingen Festival in October 1997. In 1999 and 2000 the composition was recorded and mixed by Deutsche Grammophon in the presence of the composer.
Both the premiere and the recording were performed by the violinist Hae Sun Kang of the Ensemble Intercontemporain.
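The canon-like multiplication of lines by combined delay and transposition modules can be sketched as a simple event-list operation. This is a minimal illustration of the principle only, not the actual IRCAM modules; the note values and tap settings below are invented:

```python
# Hypothetical sketch (not the actual patch): multiplying one musical line
# into delayed, transposed copies, as in the pizzicato canon section.

def canon_lines(events, taps):
    """events: list of (onset_seconds, midi_pitch) for the original line.
    taps: list of (delay_seconds, transposition_semitones), one per copy.
    Returns one shifted, transposed voice per tap."""
    lines = []
    for delay, transpose in taps:
        lines.append([(onset + delay, pitch + transpose)
                      for onset, pitch in events])
    return lines

# Illustrative pizzicato figure (invented values).
figure = [(0.0, 67), (0.4, 70), (0.9, 65)]

# Each (delay, transposition) tap yields one canon voice.
voices = canon_lines(figure, taps=[(0.25, 3), (0.5, -2), (0.75, 7)])
```

Each tap produces one additional voice, shifted in time and pitch just as in a canon; composing taps that align with or cut across the original rhythm is what allows the patterns to clarify or blur the original line.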
6. Implementation Notes

These notes cover the period from the premiere of Anthèmes 2 (1997) to the time at which this manual was written (2005).

The first version of the piece was implemented on a NeXT computer equipped with three ISPW real-time processing boards (designed and built at IRCAM), each of which contained two Intel i860 DSP processors running at a sampling rate of 32 kHz. The software used was a variant of IRCAM’s Max programming language, Max 0.26. The computer was also connected to an AKAI S-2000 sampler. The samples used were played by Hae Sun Kang and Jeanne-Marie Conquer of the Ensemble Intercontemporain. Anthèmes 2 was the first of Boulez’s compositions to use IRCAM’s Spatialisateur, a sound spatialization library running in Max. The output of the Spatialisateur went to six loudspeakers, with the F and B positions being produced virtually by the Spatialisateur.
The second version of the piece was implemented using another variant of the Max language called jMax, which featured an architecture in which the DSP part ran in a kernel written in C and the graphical interface part ran in a Java environment. The program ran at a sampling rate of 44.1 kHz on two Silicon Graphics Octane bi-processor computers. One computer ran the sound transformation and sampling programs and the other ran the spatialization programs. The control synchronization was achieved using a MIDI connection between the two machines. This version, which was not simply ported from the NeXT version but rather completely rewritten, was used for the Deutsche Grammophon recording as well as numerous performances thereafter, until 2004.

The most recent version (2005) maintains the two-computer architecture, with the program (written in IRCAM and Cycling ’74’s Max/MSP 4.5) running on two 1.5 GHz Apple G4 laptops connected via Ethernet for control synchronization. The Spatialisateur library is used for the spatialization. An AKAI Z8 is used for the sampling. This version also uses IRCAM’s audio-based score following system in the critical sections mentioned above.
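The two-machine control synchronization can be sketched as one computer sending cue messages to the other so that both follow the score together. The sketch below assumes a simple UDP message carrying a cue number; the actual versions used MIDI and later an Ethernet link whose protocol is not documented here, so the port, message format and function names are all illustrative:

```python
# Hypothetical sketch of the control link between the transformation
# machine and the spatialization machine. UDP, JSON, the port number and
# all names are assumptions for illustration only.
import json
import socket

CUE_PORT = 9000  # assumed port


def send_cue(sock, host, cue_number, params=None):
    """Send one score cue (and optional parameters) to the other machine."""
    msg = json.dumps({"cue": cue_number, "params": params or {}}).encode()
    sock.sendto(msg, (host, CUE_PORT))


if __name__ == "__main__":
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_cue(sender, "127.0.0.1", 12, {"section": "III"})
    sender.close()
```

Because each cue is a small self-contained message, the spatialization machine only needs to listen for cue numbers and look up its own settings for each one, which keeps the two programs loosely coupled.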
Figure 19 - Frontal Disposition with Loudspeakers (six loudspeakers, numbered 1-6 — FL, FR, MR, BR, BL, ML — surrounding the audience, with the violin at the front; the F and B positions are virtual)
Andrew Gerzso
Paris, 2005