resonancia de diabolus
HARMONICS OF HORROR
- definition of resonancia: the phenomenon of increased amplitude that occurs when the frequency of a periodically applied force is equal to or close to a natural frequency of the system on which it acts.
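The definition above corresponds to the standard amplitude formula for a driven, damped oscillator. A minimal Python sketch (the natural frequency, damping value and force are illustrative, not taken from the text):

```python
import math

def response_amplitude(omega, omega0=2.0 * math.pi, gamma=0.2, f0=1.0):
    """Steady-state amplitude of a driven, damped harmonic oscillator.

    omega  : frequency of the periodically applied force (rad/s)
    omega0 : natural frequency of the system (rad/s)
    gamma  : damping coefficient (assumed small)
    f0     : driving force per unit mass
    """
    return f0 / math.sqrt((omega0**2 - omega**2) ** 2 + (gamma * omega) ** 2)

# Driving at the natural frequency produces a far larger amplitude
# than driving at half that frequency.
at_resonance = response_amplitude(2.0 * math.pi)
off_resonance = response_amplitude(math.pi)
```

With light damping, the amplitude at the natural frequency dwarfs the amplitude well away from it, which is the "increased amplitude" the definition describes.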
CONTENTS
Musical Harmony
Non Linear Analogues
Types of Harmony
Missing Fundamental
Binaural Sound
Spectrograms
Soundscapes
Diabolus in Musica
Emotional Influence
“The main goal of the modular synthesizer solstice and equinox series is to simply provide a periodic space for the exploration and awareness of modular synthesizer-produced activity and sound.”
In its literal definition, “to modulate” means to change or adapt, and “modulated” refers to something lowered, lessened, reduced, or regulated.
Decisively true to its title, the essence of this type of music is to simplify sound into its most basic elements or timbres through experimentation with, or invention of, modified electronic equipment and machines. These reductions are then layered, mixed, or modified to create their own symphonies of electronically induced sound. In no way a new genre (modern modular synthesizers date back to the early 1960s), today’s electronic/synth music is a cross-section of different technologies and methods for sound creation, always finding new ways to blow your mind and broaden the range of non-instrument-based music.
HARMONIC HORROR
Rather than recording music that is then played through a listening device, or shooting a film to be viewed, these forms of performance embrace the very concept of the mediums presented. The medium becomes the tool in which to create the performance; rather than being a static presentation, the musicians and artists begin with the raw ingredients to then present something new, something dynamic and ever-changing.
MODULAR SYNTHS
This modulation of the medium forces us to pause and reflect on the components rather than the finished product. In a culture saturated by images and sound bites, by Pandora Radio and YouTube, it is refreshing to hear and see something so exposed, unedited, and purposefully imperfect.
Musical Harmony
Harmony is the composite product when individual musical voices group together to form a cohesive whole. Think of an orchestra: the flute player may be playing one note, the violinist a different note, and the trombonist yet another. But when their individual parts are heard together, harmony is created. Harmony is typically analyzed as a series of chords. In that hypothetical orchestra, let’s say that the flutist was playing a high A, the violinist bowed a C#, and the trombonist sustained an F#. Together, those three notes comprise an F# minor triad. Therefore, even though each instrumentalist was only playing a single note, together they played an F# minor chord.
Harmony can be fully scripted by a composer, or it can be outlined by a composer and fully expressed by the players performing the music. The orchestral scenario described is an example of harmony that’s tightly scripted by a composer—he or she has assigned specific notes to many single-note instruments, and those notes combine to form chords. This is common practice in the European tradition of classical music. A unison is considered a harmonic interval, just like a fifth or a third, but is unique in that it is two identical notes produced together. The unison, as a component of harmony, is important, especially in orchestration. [7] In pop music, unison singing is usually called doubling, a technique The Beatles used in many of their earlier recordings. As a type of harmony, singing in unison or playing the same notes, often using different musical instruments, at the same time is commonly called monophonic harmonization.
“Music creates order out of chaos: for rhythm imposes unanimity upon the divergent, melody imposes continuity upon the disjointed, and harmony imposes compatibility upon the incongruous.” - Yehudi Menuhin
TYPES OF MUSICAL HARMONY
Diatonic harmony
This is music where the notes and chords all trace back to a master scale. So if you’re in the key of Ab major, all the notes and chords you play will be drawn from the seven notes comprising the Ab major scale. And if you’re not sure what key you’re in, check the “key signature”: the list of sharps and flats that appears at the beginning of each system of musical notation. Diatonic harmony can be found in everything from ancient Greek instrumentals to Renaissance chorales to contemporary pop hits.
Non-diatonic harmony
Non-diatonic harmony introduces notes that aren’t all part of the same master scale. This form of harmony is especially idiomatic to jazz, but it appears in all forms of music. Let’s say you’re in the key of Ab major and you play a Bb7 chord. That chord contains the note D, which is definitely not in the Ab major scale. It sounds a bit edgy, but it also tends to be quite memorable. “Somebody to Love” by Queen is a good example of this. When Freddie Mercury sings “I’ve just gotta get out of this prison cell” the word “out” falls on a Bb chord in the key of Ab. But non-diatonic harmony is not a new concept. The preludes and fugues of Johann Sebastian Bach are roughly 300 years old, but they remain a master tutorial in the melding of non-diatonic notes with traditional key signatures.
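The Bb7-in-Ab example can be checked mechanically by comparing pitch-class sets. A small Python sketch (the note spellings follow the text):

```python
# The seven-note master scale of Ab major, and the tones of a Bb7 chord
# (root, major third, fifth, flat seventh), spelled as in the text.
AB_MAJOR = {"Ab", "Bb", "C", "Db", "Eb", "F", "G"}
BB7 = {"Bb", "D", "F", "Ab"}

# The chord tone that falls outside the key is exactly the D noted above.
outside_the_key = BB7 - AB_MAJOR
```

Set difference isolates the single non-diatonic note, which is why the Bb7 sounds "edgy" against an Ab major context.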
Atonal harmony
This form of harmony doesn’t have a tonal center: it isn’t built on a scale that’s major or minor, or that has an identifiable root. In classical music, atonal music was largely the brainchild of composer Arnold Schoenberg. Schoenberg personally disliked the term “atonal” and described his technique as “twelve-tone music”, in which all twelve of the pitches used in Western music are equal in the harmonic language. Atonal harmony also became popular in the free jazz movement spurred by players like Ornette Coleman and Don Cherry.
Binaural Sound
Soundscapes
The feeling of horror within movies or games relies on the audience’s perception of a tense atmosphere, often achieved through sound accompanied by the on-screen drama guiding its emotional experience throughout the scene or gameplay sequence. These progressions are often crafted through a priori knowledge of how a scene or gameplay sequence will play out, and the emotional patterns a game director intends to transmit. The appropriate design of sound becomes even more challenging once the scenery and general context are autonomously generated by an algorithm. Toward realizing sound-based affective interaction in games, researchers have explored computational models capable of ranking short audio pieces based on crowd-sourced annotations of tension, arousal and valence. Affect models are trained via preference learning on over a thousand annotations using support vector machines, whose inputs are low-level features extracted from the audio assets of a comprehensive sound library. Audio is often associated with classical or contemporary musical pieces. The reality, however, is that audio can be more than just “music”: a meticulously crafted sonority that complements visual and interactive experiences, often described as audiovisual metaphors. Sound design is an important part of both film and digital games, where sound designers fine-tune the intended emotional experience, through expert knowledge, to the exact imagery on-screen.
Digital games, especially those in the horror genre, rely on game audio as it enhances the player experience; the soundscapes created by game audio are capable of immersing players in the virtual world. Although it can be argued that digital games already apply some form of procedural audio, such as the sounds of player actions in the background of multiplayer games (Garner and Grimshaw 2014), much more could be accomplished by coordinating virtual levels and the sounds played within them. Several professional tools, such as the sound middleware of UDK (Epic Games, 2004), provide procedural sound components, albeit very simple ones (i.e. variations of notes in a specific scale). This shows an increasing commercial interest in sound as a procedurally generated game facet. Fear and tension are the primary emotions elicited by the horror genre, a peculiar characteristic for media whose sole purpose is to entertain. The audience is often led into tense and fearful situations, meticulously crafted by the authors using narrative progression and a combination of visual and auditory stimuli. Designers can also guide the level generation process by defining an intended progression of tension, which the level generator and sonification will adhere to.
GENERATING SOUNDSCAPES
BUILDING ATMOSPHERE
Emotional influence
A system can be described as nonlinear when the output from it is not proportional to the input into it. Many acoustic systems are linear within certain parameters, and nonlinear beyond them. For instance, if you turn up your stereo volume too much, at some point you will experience a loss in fidelity. A variety of animals, including humans, produce what the bioacoustic literature refers to as vocalizations with nonlinear attributes. Such nonlinearities include noise and deterministic chaos, sidebands and subharmonics, and abrupt amplitude and frequency transitions. Nonlinearities are commonly produced when animals are under duress, such as the fear screams produced when animals are attacked by predators. If nonlinearities are used by humans and other vertebrates to capture a receiver’s attention, we might expect them also to be used by film score composers and audio engineers to manipulate the emotions of those watching a film. Previous work has focused on the relationship between emotion and the temporal and frequency characteristics of music and film soundtracks, and we know that the dramatic sad music that makes us cry in a film soundtrack sounds very different from the music in an action/adventure film, with its throbbing low-frequency beat that keeps us on the edge of our seats.
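The stereo example above is easy to demonstrate: an overdriven amplifier behaves like a hard limiter, linear for small signals and nonlinear beyond its limit. A minimal sketch (the limiter and signal values are illustrative):

```python
import math

def clip(x, limit=1.0):
    """Hard limiter: passes small signals unchanged, flattens anything
    beyond +/- limit. This is the nonlinearity behind lost fidelity."""
    return max(-limit, min(limit, x))

def peak_out(amplitude):
    """Peak of one cycle of a sine of the given amplitude after clipping."""
    return max(clip(amplitude * math.sin(2 * math.pi * n / 100))
               for n in range(100))

# In the linear region, doubling the input doubles the output.
# Once overdriven, the output stops growing no matter how loud the input.
quiet, louder = peak_out(0.2), peak_out(0.4)
overdriven, way_overdriven = peak_out(2.0), peak_out(4.0)
```

The flattened waveform is no longer a pure sine, which is exactly the "loss in fidelity" the paragraph describes: clipping adds harmonics that were not in the input.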
Non Linear Analogues
King Kong, the 1933 classic horror movie, saw the first use of recorded animal sounds that were subsequently manipulated to produce non-linear sounds, the scientists said. The pitch and timbre of the animal calls were changed by manipulating the playback medium. This idea has been used many times in films depicting prehistoric, alien or otherwise monstrous characters. Natural nonlinear sounds may be difficult to synthesise, which is why filmmakers resort to real recordings that they manipulate to make them sound more non-linear.
A notable early exception was in Alfred Hitchcock’s 1963 film The Birds. Here, the director used an electronic instrument, the trautonium, to create a horrifying avian language rather than use recorded bird calls.
Musical composers also use non-linear sound to emphasise evocative emotions. The use of the music of 20th-century composer Krzysztof Penderecki in The Exorcist and The Shining “inspired the use of noise techniques as a style marker of horror genre films”, the researchers said.
Missing Fundamental
A harmonic sound is said to have a missing fundamental, suppressed fundamental, or phantom fundamental when its overtones suggest a fundamental frequency but the sound lacks a component at the fundamental frequency itself. The brain perceives the pitch of a tone not only by its fundamental frequency, but also by the periodicity implied by the relationship between the higher harmonics; we may perceive the same pitch (perhaps with a different timbre) even if the fundamental frequency is missing from a tone. A low pitch (also known as the pitch of the missing fundamental or virtual pitch) can sometimes be heard when there is no apparent source or component of that frequency. This perception is due to the brain interpreting repetition patterns that are present. It was once thought that this effect was because the missing fundamental was replaced by distortions introduced by the physics of the ear. However, experiments subsequently showed that when a noise was added that would have masked these distortions had they been present, listeners still heard a pitch corresponding to the missing fundamental, as reported by J. C. R. Licklider in 1954. It is now widely accepted that the brain processes the information present in the overtones to calculate the fundamental frequency.
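The repetition-pattern explanation can be illustrated numerically: a tone built only from the 2nd to 5th harmonics of 200 Hz still repeats every 1/200 s, so a simple autocorrelation finds the period of the absent fundamental. A sketch (the sample rate, frequency and lag range are illustrative):

```python
import math

SR = 8000   # sample rate (Hz)
F0 = 200    # implied fundamental (Hz), deliberately absent from the signal
N = 2048

# A tone built from harmonics 2-5 only: 400, 600, 800 and 1000 Hz.
signal = [sum(math.sin(2 * math.pi * k * F0 * n / SR) for k in range(2, 6))
          for n in range(N)]

def autocorr(lag):
    """Unnormalised autocorrelation of the signal at a lag, in samples."""
    return sum(signal[n] * signal[n + lag] for n in range(N - lag))

# The strongest repetition pattern sits at the period of the *missing*
# fundamental, SR / F0 = 40 samples, not at the lowest component present
# (400 Hz, which would be 20 samples). Search lags from 10 to 59.
best_lag = max(range(10, 60), key=autocorr)
```

The 40-sample period is what the brain's periodicity analysis latches onto, which is why a 200 Hz pitch is heard even though no 200 Hz component exists.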
Spectrograms
A spectrogram is a detailed view of audio, able to represent time, frequency, and amplitude all on one graph. A spectrogram can visually reveal broadband, electrical, or intermittent noise in audio, and can allow you to easily isolate those audio problems by sight. In audio software, we’re accustomed to seeing a waveform that displays changes in a signal’s amplitude over time. A spectrogram, however, displays changes in the frequencies in a signal over time. Amplitude is then represented on a third dimension with variable brightness or colour.
The key to successful audio restoration lies in your ability to correctly analyze the situation—much like a doctor recognizing symptoms that point to a certain illness. Constantly training your ear to distinguish the noises and audio events that need to be corrected can be a lifelong endeavor. Fortunately, as explained previously, spectrogram technology makes this task easier by representing those audio events visually.
Seeing Sound
Spectrograms keep time on the X axis but place frequency on the Y axis. Amplitude is also represented as a sort of heat map or scale of colour saturation. Spectrograms were originally produced as black and white diagrams on paper by a device called a sound spectrograph, whereas nowadays they are created by software and can be any range of colours imaginable. Spectrograms map out sound in a similar way to a musical score, only mapping frequency rather than musical notes. Seeing frequency energy distributed over time in this way allows us to clearly distinguish each of the sound elements in a recording, and their harmonic structure. This is especially useful in acoustic studies when analysing sounds such as bird song and musical instruments. Clearly spectrograms can tell us a lot about the acoustic elements of a sound, but they are not just used for scientific studies. Audio editing is most often performed with waveforms as it’s easier to make cuts or process a selected time range. When editing software uses spectrograms however, it opens up a whole new realm of possibilities! With this spectral editing, we are able to look into the microscopic details of a sound and apply processes to very specific time and frequency ranges.
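The time/frequency/amplitude mapping described above can be sketched with a frame-by-frame DFT, the core of every spectrogram display. A minimal, unwindowed Python version (frame size, hop and the test tone are illustrative choices):

```python
import cmath
import math

def spectrogram(signal, frame=256, hop=128):
    """Magnitude spectrogram: one DFT per overlapping frame.
    Returns frames[time][frequency_bin], with bins up to the Nyquist
    frequency; cell values are the amplitudes shown as colour/brightness."""
    frames = []
    for start in range(0, len(signal) - frame + 1, hop):
        window = signal[start:start + frame]
        frames.append([abs(sum(window[n] * cmath.exp(-2j * math.pi * k * n / frame)
                               for n in range(frame)))
                       for k in range(frame // 2)])
    return frames

# A steady 1 kHz tone at an 8 kHz sample rate lights up a single bin
# (bin 32, since the bin spacing is 8000 / 256 = 31.25 Hz) in every frame.
SR = 8000
tone = [math.sin(2 * math.pi * 1000 * n / SR) for n in range(1024)]
spec = spectrogram(tone)
```

Real tools use an FFT and a window function for speed and cleaner bins, but the structure is the same: time runs along the frames, frequency along the bins, and amplitude fills each cell.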
MAPPING MUSICAL FREQUENCY ALONG AN AXIS
In musical terms, augmented means the same as ‘stretched out’, while diminished can be thought of as ‘squeezed’. An augmented chord comprises notes that are spaced apart at wider intervals than those of a regular triad, while a diminished chord is so called because it features narrower intervals than the standard version, making it more compact.
MANIPULATING SOUND TO CREATE HORROR
HIDDEN HORRORS IN MUSIC
The source of your anxiety is elusive: familiar sounds used in unusual ways.
Augmented chords have the unique distinction of not appearing when a major scale is harmonised. This means that when you stack alternate notes from a major scale on top of each other to form triads, you end up with major triads (C, F, G in the case of C major), minor triads (Dm, Em, Am) and even a diminished triad (Bdim), but never an augmented triad.
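The claim that harmonising a major scale never yields an augmented triad can be verified by stacking alternate scale notes and classifying each triad by its interval pattern. A short Python sketch (the quality table follows standard triad spellings):

```python
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets of the major scale

def triad_quality(degree):
    """Quality of the triad built on a 0-based scale degree by stacking
    alternate notes of the major scale (root, third, fifth)."""
    notes = []
    for step in (0, 2, 4):
        octave, pos = divmod(degree + step, 7)
        notes.append(MAJOR_SCALE[pos] + 12 * octave)
    intervals = (notes[1] - notes[0], notes[2] - notes[1])
    return {(4, 3): "major",       # major third then minor third
            (3, 4): "minor",       # minor third then major third
            (3, 3): "diminished",  # two stacked minor thirds (squeezed)
            (4, 4): "augmented"}[intervals]  # two major thirds (stretched)

qualities = [triad_quality(d) for d in range(7)]
# In C major this gives the C, Dm, Em, F, G, Am, Bdim pattern from the text.
```

Every degree comes out major, minor or diminished; the augmented pattern (4, 4) simply never occurs, exactly as the paragraph states.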
Even during the Baroque and Classical eras, as the Catholic Church’s influence over cultural customs faded, composers continued to eschew the devil’s interval.
Diminished chords are a favourite of horror movie score writers thanks to their somewhat spooky, foreboding sound, and are very effective in transitions, as well as for creating anticipation or a feeling of tension. They also often turn up when songwriters want to shift from one key to another.
In the odd passages where tritones did appear, their use was technical: to create, and quickly resolve, tension. Then suddenly, at the dawn of the Romantic era of classical music, there it is, in Act 2 of Beethoven’s 1805 opera Fidelio. As the scene opens in a dungeon, the kettle drums rumble menacingly, tuned to the devil’s interval.
Something akin to obsession followed, as composers used tritones to probe the darker corners of nature and humanity. Perhaps the best-known example comes from “Danse macabre” by Camille Saint-Saëns. Franz Liszt, a Hungarian composer and piano virtuoso, wielded tritones with ghastly gusto—for example, to evoke Dante’s descent into hell in the opening notes of his “Dante Sonata”.
“Nothing exists without music, for the universe itself is said to have been framed by a kind of harmony of sounds, and the heaven itself revolves under the tone of that harmony.” - Isidore of Seville