Aramis Hallicrafters Brand Magazine


the HEAR RING magazine



WE FIRMLY BELIEVE THAT TECHNOLOGY CAN BE WARM


Almost everyone has at least once broken the ice with a question about musical taste; it usually leads to further discussion, and it really helps with creating a bond. Sound, more than any other sense, connects us with ourselves and with others; it is surprisingly powerful on its own. Beyond that power, technology makes it stronger and provides new ways of self-expression that favour communication: it is the means to the end. It doesn’t, however, need to be only that: technology can have the heart and warmth its contents do; it can be more than a box and take an active part in the formation of what it frames. Technology and sound will develop their relationship symbiotically: technology will become warmer and warmer, while sound will reach new, never-before-seen landscapes.



CONTENTS

How Music Bonds Us Together
You ain’t heard nothin’ yet
Soundscapes in the past
From Russia With Love
Hearing the colours
America’s Music Innovator
From sound to logo
Endangered Sounds



HOW MUSIC BONDS US TOGETHER
According to new research, music helps synchronize our bodies and our brains
By Jill Suttie

At GGSC’s recent awe conference, Melanie DeMore led the audience in a group sing as part of the day’s activities. Judging from participant responses, it was clear that something magical happened: We all felt closer and more connected because of that experience of singing together. Why is singing such a powerful social glue? Most of us hear music from the moment we are born, often via lullabies, and through many of the most important occasions in our lives, from graduations to weddings to funerals. There is something about music that seems to bring us closer to each other and help us come together as a community. There’s little question that humans are wired for music. Researchers recently discovered that we have a dedicated part of our brain for processing music, supporting the theory that it has a special, important function in our lives. Listening to music and singing together has been shown in several studies to directly impact neuro-chemicals in the brain, many of which play a role in closeness and connection. Now new research suggests that playing music or singing together may be particularly potent in bringing about social closeness through the release of endorphins.

In one study, researchers found that performing music—through singing, drumming, and dancing—all resulted in participants having higher pain thresholds (a proxy measure for increased endorphin release in the brain) in comparison to listening to music alone. In addition, the performance of music resulted in greater positive emotion, suggesting one pathway through which people feel closer to one another when playing music together is through endorphin release. In another study, researchers compared the effects of singing together in a small choir (20-80 people) versus a larger choir (232 people) on measures of closeness and on pain thresholds. The researchers found that both choir groups increased their pain threshold levels after singing; however, the larger group experienced bigger changes in social closeness after singing than the smaller group. This suggested to the researchers that endorphins produced in singing can act to draw large groups together quickly. Music has also been linked to dopamine release, involved in regulating mood and craving behavior, which seems to predict music’s ability to bring us pleasure. Coupled with the effects on endorphins, music seems to make us feel good and connect with others, perhaps particularly when we make music ourselves.



But music is more than just a common pleasure. New studies reveal how it can work to create a sense of group identity. In a series of ingenious studies, researchers Chris Loerch and Nathan Arbuckle studied how musical reactivity—how much one is affected by listening to music—is tied to group processes, such as one’s sense of belonging to a group, positive associations with ingroup members, bias toward outgroup members, and responses to group threat in various populations. The researchers found that “musical reactivity is causally related to…basic social motivations” and that “reactivity to music is related to markers of successful group living.” In other words, music makes us affiliate with groups.

But how does music do this? Some researchers believe that it’s the rhythm in music that helps us to sync up our brains and coordinate our body movements with others, and that’s how the effects can be translated to a whole group. Research supports this thesis by showing how coordinating movement through music increases our sense of community and prosocial behavior. Indeed, one study found two-year-olds synchronized their body movements to a drumbeat—more accurately to a human they could see than to a drum machine. This tendency to synchronize seems to become only more important as we grow. In another study, adults listened to one of three types of music—rhythmic music, nonrhythmic music, or “white noise”—and then engaged in a task that involved cooperating and coordinating their movements. Those who listened to rhythmic music finished the tasks more efficiently than those who listened to the other types of sound, suggesting that rhythm in music promotes behaviors that are linked to social cohesion.


In another study, people seated side by side and asked to rock at a comfortable rate tended to coordinate better without music, but felt closer to one another when they did synchronize while listening to music. In a study by Scott Wiltermuth and Chip Heath of Stanford University, those who listened to music and coordinated their movements to the music were able to cooperate better and act more generously toward others when participating in economic games together (even in situations requiring personal loss for the good of the group, such as in the Public Goods Game).

Melanie DeMore, musician and GGSC activity leader



All of this evidence helps confirm music’s place in augmenting our social relationships. Perhaps that’s why, when you want people to bond, music is a natural resource for making that happen. Whether at concerts, social events, or awe conferences, music can help us connect, cooperate, and care for each other. This suggests that, if we want to have a more harmonious society, we would do well to continue to include music in our—and our children’s— lives.



musical reactivity is causally related to… basic social motivations



YOU AIN’T HEARD NOTHING YET
How one sentence uttered by Al Jolson changed the movie industry
By Michael Freedland



Hollywood lives by its clichés, and there had been catalytic events there before, all of which had been called revolutionary. But what happened at the Warner Theatre on Broadway, on 6 October 1927, really was a revolution. One sentence uttered on screen that night 80 years ago changed the movie industry as it had never been changed before – and perhaps would never be altered quite so excitingly again. Colour, widescreen, and television all made huge impacts on the movie industry. But The Jazz Singer was altogether different. Al Jolson calling to the orchestra, “Wait a minute, wait a minute, I tell yer, you ain’t heard nothin’ yet”, not only marked the arrival of what from that moment on became known as the talkies, it instantly – and I do mean instantly – killed off the silent cinema. These were not exactly the first words heard coming from a screen, but they were the ones that made Hollywood realise that films without sound had to be sent to the scrapheap.

Jolson, then the most popular star on Broadway, had already been the first actor whose voice was heard on film – or rather via a film. It happened a year before, when Warner Bros, then a small studio on the verge of bankruptcy, thought it might get out of hock by doing something that had been experimented with for years, but which, until then, no one had been able to perfect. Thomas Alva Edison, the inventor of the phonograph, had been trying to find a way of being able to hear as well as see films as far back as the turn of the 20th century, but couldn’t get either the amplification or the synchronisation right. Other studios had experimented and given up. But Sam Warner, the technical genius of the mogul brothers, had linked up with the Western Electric company to adopt a system called Vitaphone – in which sound, recorded at 33 1/3 rpm (21 years later, it would be the speed used for long-playing records) on 16in discs, was synchronised with a projector.



The film they chose was Don Juan with John Barrymore. There would be no human voices, but every time a carriage rumbled on the cobblestones, you would hear the rattling of the wheels; when swords were fenced, there was the sound of steel against steel. More than that, the New York Philharmonic played the background music. Warners thought they had a sensation on their hands. They hadn’t. The critics hummed and hawed. Others dismissed it as something for the fairgrounds. What did, however, make people sit up and think was a series of shorts in the same programme. Giovanni Martinelli, star of the Metropolitan Opera, sang “Vesti La Giubba” from Il Pagliacci. Fritz Kreisler played his violin; and Jolson performed in a sequence called “A Plantation Act”, in which he sang “Rock-a-Bye Your Baby with a Dixie Melody”. It was sufficiently successful for Warners to decide to make a full-length movie – although with songs, not dialogue.






John Gilbert and Greta Garbo in A Woman of Affairs, 1928




They decided on The Jazz Singer, which was already a highly successful Broadway show starring a young comedian called George Jessel. Jack Warner tried to sign Jessel up for the film role, but the actor turned it down instantly. “You don’t expect me to risk my whole career on this crazy new invention,” he said. Eddie Cantor, then the darling of the famous Ziegfeld Follies, was approached, but he said no for the same reason. That was when Warner approached Jolson, whom he had previously decided was both too successful and, therefore, too expensive. Jolson, however, was fascinated. He decided that the plot was almost his own life-story – about the son of a synagogue cantor who chooses the stage, rather than wearing his father’s prayer-shawl for a living. He was also intrigued by another factor – he had always wanted to be the first to do things: he had been the first big star to entertain the troops in the First World War (a feat he repeated in the Second World War), he had been the first to take Broadway shows on tour, and the first to have a runway that effectively sliced the auditorium of a theatre in two, so that he could dance from the back of the stalls to the proscenium arch.

Warner Bros were delighted. But what they hadn’t realised was that Jolson, the man who had to get so close to his audience that he could touch them, couldn’t be confined to a camera lens and a microphone. The mike was switched on and, as the orchestra struck up the opening chords of his first number, “Toot Toot Tootsie, Goo’ Bye”, with the (amazing this) sound of china and cutlery being moved behind him, Jolson broke in with those historic words, “Wait a minute, wait a minute, I tell yer, you ain’t heard nothin’ yet.”



The technicians were stunned. But Sam Warner realised the potential of it. If people could take those few spoken words, maybe they would want more. He immediately ordered a new scene to be added, in which Jolson, playing Jackie Rabinowitz, cantor’s son who had become Jack Robin, stage star, tells his mother how he’s going to buy her a new house, a new black dress and take her to Coney Island. He sings “Blue Skies” to her until the scene comes to an abrupt end (and the sound sequence, too) when his father enters and orders such degenerate behaviour to cease. The word “stop” was the last heard on the soundtrack, but it was enough. The scene and the movie couldn’t have been a bigger hit if cinemas showing it had given away 10-dollar bills with every ticket. The critics raved. “Talking Pictures Sensation”, shouted the showbiz bible, Variety. The money flowed in and the other studios tried to find ways of bailing out. Overnight, panic set in. Every other studio looked at their schedules and began either cancelling future projects or changing them into talkies – a scene in the classic musical Singin’ in the Rain captured the situation beautifully. Microphones were hidden in bushes or in the leading ladies’ corsages, and the equipment constantly got out of sync. But the talkies had come to stay. Other studios came up with their own systems before eventually the idea of a separate “gramophone” playing the soundtrack gave way to a standardised optical soundtrack on the film itself. The one big tragedy was that Sam Warner, whose idea it had all been, died from a mastoid infection on the day of the movie’s premiere.


Jolson went on to make a second semi-talkie, The Singing Fool, in which he sang his terrible, but historic, “Sonny Boy” number. Warner Bros followed soon afterwards with The Lights of New York. The musical was well and truly conceived, born and thriving. And so was the gangster movie – people thrilled to the sound of guns being fired. Look, if you get the chance, at the number of times the camera in those early films focused on a telephone. You just knew it was going to ring – as did the box office tills. As for Jolson, he had created something of a monster for himself. The man who had been the toast of Broadway started concentrating on making more movies. Few were any good, and his acting was even worse. But he would have the last laugh. There was one other little factor: he was the first performer to be offered a slice of the action of a film, reputed to be 25 per cent of the take. He had loved the idea of the film but was frustrated that he had no speaking role, that audiences who heard him sing would have to “suffer” from having to read titles between each line of dialogue. Yet he agreed: he thought that the combination of another “first” to his roster and the chance to play to millions of cinemagoers throughout the world made it exciting enough.


Eddie Cantor in “Rose Marie” 1936



SOUNDSCAPES IN THE PAST
Adding a new dimension to our archaeological picture of ancient cultures
By Kristy E. Primeau and David E. Witt




Picture an archaeological site: what comes to mind? Sandstone walls, standing in the desert heat? Stonehenge, watching over a grassy field? When thinking about archaeological sites, we tend to conceive of them as dead silent – empty ruins left by past cultures. But this isn’t how the people who lived in and used these sites would have experienced them. Residents would have heard others speaking and laughing, babies crying, people working, dogs barking and music such as drumming. These sounds could be heard from close by, and perhaps coming from distant locations as well. Putting sound back into the archaeological landscape is an important part of understanding how people lived, what they valued, how they shaped their identities and experienced the world and their place in it. This growing field is called acoustic archaeology, or archaeoacoustics. By considering the sounds heard by people moving through the landscape, we’re able to more fully understand their culture, and thus better relate to them as human beings. We recently modeled an ancient soundscape at the landscape level for the first time. What can our ears tell us about the way the Anasazi, or ancestral Puebloan, people lived in New Mexico’s Chaco Canyon more than a thousand years ago?

Modeling ancient sound

Chaco Canyon was the center of ancestral Puebloan civilization. It’s famous for its great houses – large, multistoried structures, some the size of football fields – built and used from approximately A.D. 850-1150. Archaeologists have studied how the Ancestral Puebloans built the structures of Chaco Canyon and placed them in relation to each other and to astronomical alignments. To add a new dimension to our understanding of this time and place, we investigated how sounds were experienced at these sites.



The Ancient Southwest


The Ancestral Puebloans were an ancient Native American culture that spanned the present-day Four Corners region of the United States, comprising southeastern Utah, northeastern Arizona, northwestern New Mexico, and southwestern Colorado. The Ancestral Puebloans are believed to have developed, at least in part, from the Oshara Tradition. They lived in a range of structures that included small family pit houses, larger structures to house clans, grand pueblos, and cliff-sited dwellings for defense. The Ancestral Puebloans possessed a complex network that stretched across the Colorado Plateau, linking hundreds of communities and population centers.



Statue “Flute”



We wanted to know how a listener would have experienced a sound from a specific distance away from whatever was producing it. To explore sound physics and its application to archaeology, we first developed an Excel spreadsheet. Our calculations described linear sound profiles, similar to a line-of-sight analysis; this took into account a straight path between the person or instrument making the noise and the person hearing it. However, this approach was limited because the results applied to only one listener standing at a very specific location a set distance away. Our research truly blossomed when we wondered if we could apply the same sound physics calculations to an entire landscape simultaneously. We turned to a type of computer program called Geographic Information Systems (GIS) that allows us to model the world in three dimensions. The software package we used, ESRI’s ArcGIS, offers anyone the option to create customized tools, such as the Soundshed Analysis Tool we created, to do calculations or create geographical data and images. The Soundshed Analysis Tool is derived from an earlier modeling script “SPreAD-GIS” developed by environmental scientist Sarah Reed to measure the impact of noise on natural environments, such as national forests. That tool was itself adapted from SPreAD, or “the System for the Prediction of Acoustic Detectability,” a method the U.S. Forest Service devised in 1980 to predict the impact of noise on outdoor recreation. The Soundshed Analysis Tool requires seven input variables, a study location and elevation data. Variables include the sound source height, frequency of the sound source, sound pressure level of the source, the measurement distance from the source, air temperature, relative humidity and the ambient sound pressure level of the study location. We gathered this information from a variety of sources: open-source elevation data, archaeological research, paleoclimatological research and historical climate data. We also gathered from the relevant literature the decibel levels of crowds, individuals and the conch trumpet instrument ancestral Puebloans used.
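To make the arithmetic behind such a model concrete, here is a deliberately simplified Python sketch of how a source level, a distance and an ambient level combine into an audibility map. It is not the authors’ Soundshed Analysis Tool: it ignores terrain, frequency, temperature and humidity, and the function name and the numbers (a roughly 100 dB source, a 33 dB ambient level) are illustrative assumptions only.

```python
import numpy as np

def soundshed(source_db, source_cell, grid_size=200, cell_m=15.0,
              ambient_db=33.0, extra_atten_db_per_km=3.0):
    """Rough received-level map around a point source on a flat grid.

    Combines spherical spreading loss (20 * log10 of the distance in
    metres) with a flat per-kilometre term standing in for atmospheric
    absorption. Returns the received level at every cell plus a mask
    of cells where the source rises above the ambient level.
    """
    ys, xs = np.mgrid[0:grid_size, 0:grid_size]
    dist = np.hypot((xs - source_cell[0]) * cell_m,
                    (ys - source_cell[1]) * cell_m)
    dist = np.maximum(dist, 1.0)                       # avoid log(0) at the source
    received = (source_db
                - 20.0 * np.log10(dist)                # spherical spreading
                - extra_atten_db_per_km * dist / 1000.0)
    return received, received > ambient_db

# Illustrative values only: a ~100 dB (at 1 m) conch-shell trumpet in a
# quiet canyon with a 33 dB ambient level.
levels, audible = soundshed(100.0, source_cell=(100, 100))
print("cells where the trumpet is audible:", int(audible.sum()), "of", audible.size)
```

The published tool works cell by cell in much the same spirit, but folds in the temperature, humidity and elevation inputs listed above rather than assuming a flat, featureless landscape.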



Once the input variables are entered, it takes the Soundshed tool less than 10 minutes to crunch through this complex math for every point on the landscape within two miles of the spot where the sound is produced. Our model then creates images that show where and how sound spreads across the landscape. This gives us a way to visualize the sounds people would have experienced as they moved through the landscape, going about their day.

Who could hear what, where

We focused on culturally relevant sounds and how they would have spread throughout the Chacoan landscape. These could be the voices of people, the sound of domestic animals like dogs and turkeys, the creation of stone tools or the sound of musical instruments. Within the American Southwest, these instruments include bone flutes, whistles, foot drums, copper bells and conch shell trumpets. Soundshed maps reveal that a person standing at either of two neighboring great houses, Pueblo Alto and New Alto, located approximately 500 feet from each other, can hear a person shouting or speaking to a group at the other site.


The patterns differ between the two maps because the terrain differs slightly between the two locations, and because the structures themselves block sound. A third map models someone blowing a conch shell trumpet from immediately north of Casa Rinconada, a large ceremonial structure, at dawn on the summer solstice. The sound spreads throughout the canyon, traveling to a number of mesa top shrines that often marked sacred locations and high points on the landscape. Perhaps audibility influenced the positioning of the shrines so ritual events occurring at Casa Rinconada could be heard? Investigating how sound interacts with the built environment can reveal details about the importance of ritual. It can show us if sound was considered important by the ancestral Puebloan people, especially if shrines are consistently found in locations where people could hear rituals that were performed at a distance.


The future of archaeoacoustics

Our research presents a first step in the archaeoacoustic study of landscapes. Now we hope to expand our research by visiting Chaco Canyon to perform sound studies and record measurements in the field. We also plan to apply our model to other cultures, geographic areas and time periods. Acoustic studies combined with other archaeological research contribute to a more holistic understanding of past cultures. The field has grown as more researchers expand their multidisciplinary pursuits, combining other fields of study with their archaeological approach. For example, advances in geography, physics, psychology, computer programming and other fields made our acoustic study possible. Previously, the study of archaeoacoustics at the landscape level had been out of reach due to technological limitations and a lack of tools. It is only now that computer processing power has caught up to our dreams. Modeling tools like this one also offer the added benefit of allowing us to study what people heard at a site in any place or time without the need to travel to those locations. Instead, researchers can apply existing data found through a literature search, or measure the sound levels of noises or musical instruments to use as model inputs. This opens up new areas to be explored and studied. Sound modeling can help researchers ask questions, and help everyone understand and relate to the ways that other people experienced their world. A sound model opens a new door into our understanding of the past.



“To add a new dimension to our understanding of this time and place, we investigated how sounds were experienced at these sites.”



FROM RUSSIA WITH LOVE
The Strange Tale of Clara Rockmore and Léon Theremin



There is something about the Theremin, both its sound and the manner of its playing, that is almost comedic. An all-electric musical saw, its over-familiar, spooked warble has become a staple of B-movie sound effects: a “good vibration” readily reached for as shorthand for the uncanny, curdling quickly into cliché or cute eccentricity. The Theremin was, and is, however, the sound of the future – albeit the sound of the future as first heard in the technologically optimistic Soviet Russia of the 1920s. Whether in Miklós Rózsa’s scores for Spellbound and The Lost Weekend, Bernard Herrmann’s work in The Day the Earth Stood Still, or indeed Jimmy Page’s diabolic dabblings, the eldritch tones of the Theremin have served the movies well as a signifier that something is amiss.




Lev Termen demonstrating Termenvox, c. December 1927



Nonetheless, unlike other early electronic instruments such as the Telharmonium, Trautonium and even the Ondes Martenot, the sounds of Léon Theremin’s infernal circuitry are anything but forgotten; in fact they border on ubiquity. Put into commercial production first (unsuccessfully) by RCA and then by Robert Moog, the Theremin, the hot new sound of yesteryear, borders on the pervasive.

In popular music the Theremin’s antenna has been used and abused: by the Beach Boys to evoke the carefree pleasures of the Californian summer, and by Hawkwind (and Lothar and the Hand People, amongst many others) to summon UFOs. Contemporary exponents include Dorit Chrysler and Radiohead, to name but two. For serious Thereminophiles, however, the extraordinary expressivity of the instrument commends it to the classical repertoire. Watching Clara Rockmore tease sounds out of the ether whilst her sister (the accomplished pianist Nadia Reisenberg) accompanies her in a loud floral print dress, the viewer almost feels as if they are intruding. Rockmore’s face is lined with concentration or lost in ecstasy as her painted nails pluck at the air, summoning a remembered violin. There is something dated but also strangely timeless about the tones that she evokes. Taped at home in front of a small audience – Dr Thomas Ray, Robert Moog and her nephew, the broadcaster Bob Sherman – this performance feels like a swansong (though Clara would survive her sister by fifteen years). To some degree the bel canto style of her playing cannot help but recall the schmaltz and bathos of the classic Hollywood string section – it is simply the nature of the instrument. Quivering with emotion, Rockmore summons up melancholy ghosts, a suburban medium miming escape, swimming through a thin syrup of emotional ectoplasm. To contemporary ears it might seem a little histrionic, but there can be no doubting its sincerity.

The Theremin had started life as a by-product of Soviet research into proximity detectors. In 1920 the young physicist Lev Sergeyevich Termen (later to be known as Léon Theremin) demonstrated his invention to his amazed professor and fellow students. Trying to remember the notes on the cello from Camille Saint-Saëns’ Le Cygne, Theremin showed that the difference between the frequencies of two radio oscillators (one fixed, the other controlled by the proximity of the performer) could be turned into audio signals. The demonstration was musical, but the principle went on to have other applications. After a favorable reception at various electronics conferences, Theremin was asked to present his “Termenvox” to Lenin, who was so impressed he began to take lessons. Theremin would eventually be dispatched around the world as an ambassador for Soviet technology and its latest extraordinary invention, the first viable electronic instrument.

In 1915, five years prior to Theremin’s performance, a four-year-old Lithuanian girl, Clara Reisenberg, had been the youngest student ever to be admitted to the Saint Petersburg Imperial Conservatory. She was so small that she played her audition standing on a table, but her perfect pitch and precocious talent won her a place under the violinist Leopold Auer.



After the revolution, Clara’s family fled Russia, their passage in part paid for by concerts that she would perform with her older sister Nadia. The Reisenbergs eventually arrived at Ellis Island on December 19, 1921 and Clara resumed her studies with Auer in New York. Unfortunately, shortly before her American debut she developed a problem with her arm. The arthritic condition, exacerbated by childhood malnutrition and obsessive practicing, eventually forced her to abandon the violin.

By 1928 Léon Theremin had acquired an American patent on his device and sold the commercial rights to RCA. Although the Theremin was not a commercial success in America (the timing of its release, at the time of the Crash, probably did not help its prospects), it won many admirers, including Clara. When the two met she demonstrated such aptitude for the notoriously difficult instrument that he presented her with an RCA model as a gift. Observing that she was a violinist, Theremin even offered to reverse the poles of the device so that she could produce vibrato with her left hand, as she was accustomed. She declined, saying, “other people who will follow should play it the way you invented it.” A few years later, despite having rejected Theremin’s proposal of marriage in favor of attorney Robert Rockmore, Clara was to persuade the inventor to build her a custom version extending its range and sensitivity. Theremin was later to write of Clara that she played like an angel.

Following recitals with her sister, Clara Rockmore would go on to become the acknowledged virtuoso of the instrument, completing three coast-to-coast tours with Paul Robeson and making orchestral appearances in Philadelphia, Toronto and New York, where she premiered a concerto written especially for her with Stokowski conducting. Theremin, meanwhile, had disappeared. In 1938 he left New York under somewhat mysterious circumstances. Whether he was homesick, anxious about the coming war, in financial difficulties or simply kidnapped by the KGB, the fact is that he was put to work in a secret laboratory in the Gulag, while rumors of his execution were widely circulated. Unknown to his friends in the West, Theremin (long thought dead) would continue to work for the KGB, building surveillance devices until 1966. The most successful of these (“The Thing”) had been discovered in a carved wooden seal that Soviet schoolchildren had presented as a “gesture of friendship” to the American ambassador in Moscow in 1945. The listening device had been in place, and fully functional, for seven years before its discovery.


Lev Theremin and Lydia Kavina, 1976

As early as 1962 Rockmore and her husband had a clandestine meeting with Theremin in Moscow. Sadly, a year later, Bob Rockmore was to die after slipping on ice, a loss from which Clara never fully recovered. Theremin would eventually retire from the KGB to take up a position at the Moscow Conservatoire, but in 1967, “exposed” by the New York Times, he was summarily dismissed. The director stated that “electricity is not good for music; electricity is to be used for electrocution”. Interest in the Theremin was growing worldwide, with successful record releases in the ’70s and ’80s, initially supervised by Bob Moog. In 1991, Rockmore and Theremin (the latter aged 96) would be formally reunited in New York, at the arrangement of filmmaker Steven M. Martin, whose Theremin: An Electronic Odyssey (1993) sparked a further revival of interest in the instrument. Tragically, the film received its premiere the same year as Léon Theremin’s death.



Clara Rockmore died at the age of 87 in 1998, but the fascination of a subsequent generation with the warbling, haunted tone of the Theremin persists. Today Lydia Kavina, the granddaughter of Léon Theremin’s first cousin, continues to perform the classical Theremin repertoire, which includes works by Varèse, Martinů, Percy Grainger and Christian Wolff. That Kavina’s repertoire also includes Theremin scores by Rózsa and collaborations with Messer Chups (not to mention appearances on the soundtracks of Tim Burton’s Ed Wood, David Cronenberg’s eXistenZ and Brad Anderson’s The Machinist) suggests that the Theremin’s quirky musical vocabulary of swooping glissandi and quivering portamenti will continue to be heard long after the sounds of other electronic instruments of its vintage have been forgotten.



Clara Rockmore's Lost Theremin



HEARING THE COLOURS
By Michelle Z. Donahue

From prosthetics to pharmaceuticals, humans have been using technology to alter their physical and mental capabilities for thousands of years. Now, with our rapid advances in technology, some people are embracing human augmentation as a means of expressing themselves and experiencing the world in a totally different way. Neil Harbisson, 33, is one of these people. The artist was born with achromatopsia, or complete color-blindness. Far from seeing it as a disability, Harbisson considers his natural world-view to be an asset, though he did want to be able to understand different dimensions to sight. Over the last 13 years, he has been able to “hear” visible and invisible wavelengths of light. An antenna-like sensor implanted in his head translates different wavelengths into vibrations on his skull, which he then perceives as sound. Often called the world’s first official cyborg, after the British government permitted him to wear his headgear in his passport photo, Harbisson says that such technological augmentation is natural.



Why did you create this sense for yourself?
My aim was never to overcome anything. Seeing in greyscale has many advantages. I have better night vision. I memorize shapes more readily, and I’m not easily fooled by camouflages. And black-and-white photocopies are cheaper. I didn’t feel there was a physical problem, and I never wanted to change my sight. I wanted to create a new organ for seeing.


How do you describe what it’s like to be a cyborg?
There is no difference between the software and my brain, or my antenna and any other body part. Being united to cybernetics makes me feel that I am technology. The definition that [scientist] Manfred Clynes gave for “cyborg” in 1960 was that in order to explore and survive in new environments, we had to change ourselves instead of changing our environment. Now, we do have the tools to change ourselves. We can add new senses, new organs.


What’s the most unusual aspect to your extrasensory abilities?
At first I could just sense the visual spectrum of light, but I’ve upgraded it to include the infrared and ultraviolet [UV] spectra. One thing is being able to tell if it’s a good or bad day to sunbathe. If I sense there’s a high level of ultraviolet light, it’s not a very good day, so I know to wait a bit or put on some extra sun cream. When I go walking in the forest, I like the ones with high levels of UV. They’re loud and high-pitched. One would think the forest is peaceful and quiet, but when there are ultraviolet flowers all around, it’s very noisy.

What are the most memorable questions you get from people about your antenna?
I don’t get any particular questions, but what people think my antenna is changes with time. In 2004, people thought it was a reading light; they’d ask me if I could turn it on. In 2007, it was a hands-free phone, then in 2008 and 2009, it was a GoPro camera. In 2015, many children thought it was some kind of extendable selfie stick. Last year, people started yelling “Pokemon!” at me. In a small village in Italy, an old man asked me if I could do cappuccinos with it. If people start instead asking, “What can you sense with it?”, I will know it’s become normal, and that people understand it’s a sensory organ.



How has your experience of the world shifted since you got your implant?
My understanding of the world has become more profound. The more you extend your senses, the more that you realize exists. If you’re in the same house for years, there’s a repetition of what you perceive there. If you add a new sense, though, the house becomes new again.


How has your self-perception shifted?
I feel connected with nature in a stronger way. I consider myself trans-species: Having an antenna is common for other species, as is sensing in infrared and ultraviolet, but it’s not traditional for humans.


What other technologies could break the boundaries of what is considered human?
Most projects I see are chips, software or apps that give you the intelligence, not the sense. We’ve been giving senses to all these machines instead of ourselves, like cars with the sense of what’s behind them, and we can’t even do that. Imagine something like an earring that could give you 360 degrees of perception of your surroundings, and maybe it could buzz to tell you someone’s behind you. It’s strange to me that simple things like this aren’t happening.

Should there be restrictions on how people can modify themselves?
I think we should all have the freedom to design ourselves as much as we want. Each sense depends on the individual. In the same way we all have eyes or ears, we all use them in different ways, and people use them in good and bad ways.

Do you believe that augmentation may ultimately influence human evolution?
If, by the end of the century, we start printing our own sense organs, implanted with DNA instead of using chips, the possibility of having children born with these senses is real. If their parents have modified their genes or made new organs, then yes, it’s just the beginning of a renaissance for our species.



We’ve been giving senses to all these machines instead of ourselves, like cars with the sense of what’s behind them, and we can’t even do that.





THE MANY COLORS OF NOISE

Most people are familiar with white noise, that static sound of an air conditioner that lulls us to sleep by drowning out any background noise. Except technically, the whirl of a fan or hum of the AC isn’t white noise at all. Many of the sounds we associate with white noise are actually pink noise, or brown, or green, or blue. In audio engineering, there’s a whole rainbow of noise colors, each with its own unique properties, that are used to produce music, help relaxation, and describe natural rhythms like the human heartbeat. If you know what to look for, you can start to notice the colors of the noise that make up the soundscape around us.

If you decompose a sound wave, you can break it down into two fundamental characteristics: frequency, which is how fast the waveform is vibrating per second (one hertz is one vibration per second), and amplitude (sometimes measured as “power”), or the size of the waves. The noise types are named for a loose analogy to the colors of light: White noise, for example, contains all the audible frequencies, just like white light contains all the frequencies in the visible range.



In musical sound waves, the frequencies are spaced at intervals that we find pleasing to the ear, creating a harmonic structure that gives a sound its unique tone quality, or timbre. (This is what makes the same note sound different on a flute than it does on a violin.) The noises we hear every day—boots stomping across the floor, a car honking outside, the jingling of keys—are made up of sporadic waveforms, a random distribution of frequency and amplitude.

“if you know what to look for, you can start to notice the colors of the noise that make up the soundscape around us.”

And then, in a separate category, there are the colored noises. Unlike the inconsistent bang of a drum or shouting voice, these sounds are a continuous signal, but they aren’t exactly pleasant. The word “noise” actually comes from a Latin word for nausea; in audio engineering, the term describes any unwanted information that interferes with the desired signal, like static on the radio. Pure white noise sounds like that hissy “shhh” that happens when the TV or radio is tuned to an unused frequency. It’s a mixture of all the frequencies humans can hear (about 20 Hz to 20 kHz), fired off randomly with equal power at each—like 20,000 different tones all playing at the same time, mixed together in a constantly changing, unpredictable sonic stew. Pink noise sounds less harsh than white noise because humans don’t hear linearly. We hear in octaves, or the doubling of a frequency band, which means we perceive as much sonic space between 30-60 Hz as between 10,000-20,000 Hz. We’re also more sensitive to higher frequencies (one to four kHz, which is about the frequency of a crying baby, sounds the loudest), so white noise, which has the same intensity at even the highest tones, can sound way too bright to our ears. The energy in pink noise drops off by half as the frequency doubles, so every octave has equal power, which sounds more balanced.
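The spectral relationships described here are easy to play with in code. The sketch below is a minimal illustration rather than anything from the article: it shapes a random white spectrum with a power-law curve so that power falls (or rises) with frequency; the function name colored_noise, the exponent values and the sample rate are my own assumptions.

```python
import numpy as np

def colored_noise(n_samples, exponent, sample_rate=44100):
    """Generate noise whose power spectrum follows 1 / f**exponent.

    exponent = 0 gives white noise, 1 pink, 2 brown ("Brownian"),
    -1 blue and -2 violet. A random white spectrum is shaped in the
    frequency domain: amplitude is scaled by f**(-exponent/2), so
    power scales as f**(-exponent).
    """
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / sample_rate)
    # random complex spectrum = white noise
    spectrum = (np.random.normal(size=freqs.size)
                + 1j * np.random.normal(size=freqs.size))
    scale = np.ones_like(freqs)
    scale[1:] = freqs[1:] ** (-exponent / 2.0)   # leave the DC bin alone
    noise = np.fft.irfft(spectrum * scale, n=n_samples)
    return noise / np.max(np.abs(noise))          # normalise to +/- 1

one_second = 44100
pink = colored_noise(one_second, exponent=1)    # power density falls as 1/f: equal energy per octave
blue = colored_noise(one_second, exponent=-1)   # power density rises with f: energy tilted to the treble
```

With exponent=1 each octave carries roughly equal energy, which is the pink-noise balance described above; exponent=0 reproduces white noise, and negative exponents tilt the energy toward the high end, as blue and violet noise do.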



In recent years, pink noise has become the darling of the noise spectrum, dethroning white as the in-vogue option on sound generators for sleep or concentration. In 2013, a study published in the journal Neuron found that pink noise helped participants achieve deeper sleep; in recent years, various health blogs have touted it as the key to a better night’s rest. The inverse pattern of pink noise, also called 1/f noise, can also be applied to plenty of systems outside of sound. If you take the rise and fall of the tide, for example, and break it down into waveforms plotted on a graph, it will follow 1/f, which happens to be the exact midpoint between pure randomness and correlated movement. It turns out much of our world operates in this sweet spot between chaos and control: The pink noise pattern has been found in most genres of music, the shot lengths in Hollywood films, the structure of DNA, the rise and fall of the tide, the flow of traffic, and variations in the stock market. The world is basically awash in pink.





Brown or “Brownian” noise, a deeper version of pink, is not actually named after the color; the name comes from the fact that the signal mimics the “random walk” pattern produced by Brownian motion, or the random movement of particles in liquid. The sound (not to be confused with the mythical “brown note” noise) is a deeper, bassy rumble, kind of like ocean waves or heavy winds. Blue noise, which has more energy concentrated at the high end of the sound spectrum, is just the opposite: It sounds like the hiss of a water spray, a high-pitched screeching noise, with no bass tones at all. It’s essentially the inverse of pink noise: With blue noise, frequency and power increase at the same rate, so each octave has as much energy as the two octaves below it combined.

Because the high-pitched frequencies of blue noise are harder for the listener to discern, sound engineers use it for a process called audio dithering, which is intentionally adding noise to a signal to minimize any distortions that appear during the production process. Adding noise randomizes the errors, helping to smooth out the rough edges. Gray noise sounds the same at every frequency; like pink, it’s calibrated to sound more balanced to the human ear. There is no single example of gray noise, because every human has a slightly different hearing curve. In medicine, it’s used to treat hyperacusis, an increased sensitivity to normal sounds, or tinnitus, a ringing in the ear.

White, pink, and blue noise are the only colors to have official definitions in the federal telecommunications standard, while brown and gray have accepted meanings in certain industries. Meanwhile, the other colors of the noise rainbow have only been informally defined. Green noise, for example, has been described as a signal with more energy concentrated in the middle of the sound spectrum; within a limited frequency range around 500 Hz, it supposedly simulates the ambient noise of nature. Orange noise is sometimes described as a clashing, cacophonous noise like an out-of-tune ensemble. Violet noise is simply a more intense version of blue, with even more energy concentrated in the highest audible frequencies. And there’s one more color of noise given an official meaning: black. It’s a spectral density of roughly zero power at every frequency. If white is all frequencies at once, black is the color of silence.
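The dithering idea mentioned above is simple enough to show in a few lines. This sketch is not drawn from the article: it quantizes a very quiet sine wave to 8 bits with and without flat (triangular-PDF) dither. Engineers often prefer shaped, blue-noise-like dither precisely because its energy sits in the hard-to-hear highs, but the flat version is enough to show why adding noise can make a signal sound cleaner.

```python
import numpy as np

def quantize(signal, bits, dither=False):
    """Quantize a float signal in [-1, 1] to the given bit depth.

    With dither=True, low-level triangular (TPDF) noise of about one
    quantization step is added before rounding, turning correlated
    quantization distortion into a benign, constant hiss.
    """
    step = 2.0 / (2 ** bits)                              # size of one quantization step
    if dither:
        tpdf = (np.random.uniform(-0.5, 0.5, signal.shape) +
                np.random.uniform(-0.5, 0.5, signal.shape)) * step
        signal = signal + tpdf
    return np.clip(np.round(signal / step) * step, -1.0, 1.0)

t = np.linspace(0, 1, 44100, endpoint=False)
tone = 0.01 * np.sin(2 * np.pi * 440 * t)                 # a very quiet 440 Hz sine
plain = quantize(tone, bits=8)                            # harsh, tone-related distortion
dithered = quantize(tone, bits=8, dither=True)            # distortion traded for gentle noise
```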

The pink noise pattern has been found in most genres of music, the shot lengths in Hollywood films, the structure of DNA, the rise and fall of the tide, the flow of traffic, and variations in the stock market. The world is basically awash in pink.



NORTH AMERICA’S MUSIC INNOVATOR
Steve Reich’s use of phasing and electronics

Steve Reich was born on October 3, 1936 in New York City. His parents divorced when he was only a year old, and he grew up splitting time between New York and California. Reich attended Cornell University, receiving a Bachelor’s Degree in Philosophy with a Minor in Music. After graduation, Reich continued studying composition privately, and even attended Juilliard for three years. He went on to earn his Master’s Degree in Composition from Mills College in Oakland, California in 1963. A few years after receiving his Master’s, Reich formed his own musical ensemble, “Steve Reich and Musicians,” to perform his original works. Since 1971, the group has toured worldwide performing Reich’s music. He continues to write ground-breaking compositions and has received various awards for his work (including the Pulitzer Prize in Music in 2009 for his piece Double Sextet).



Reich’s early compositional career in the 60s and 70s is best characterized by minimalism and phasing. American Minimalism is a term used to describe music by the likes of Terry Riley, Philip Glass, John Adams, and Steve Reich. Common features of minimalism include consonant harmony (harmony that pleases the ear), repetition of motives (using the same small fragments repeatedly), and rhythmic pulsation (having a constant, clear and steady beat). Minimalism began as an avant-garde technique, but has since become more mainstream in contemporary classical music. Phasing is a musical technique mastered by Reich that influenced future generations of composers. In its simplest form, phasing is when two parts of a piece of music begin playing together at exactly the same time, but then gradually “phase” out of sync with each other to create a new rhythmic pattern. Examples of minimalism and phasing in Reich’s music can be heard in Piano Phase (1967) and Clapping Music (1972).
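As a rough illustration of phasing, and not Reich’s actual working method (he used live performers and tape loops), the sketch below renders one short rhythmic cell as two click tracks at marginally different tempi and mixes them; the cell, the tempi and the helper name click_track are arbitrary choices.

```python
import numpy as np

def click_track(pattern, tempo_bpm, repeats, sample_rate=44100):
    """Render a repeating rhythmic cell ('x' = click, '.' = rest) as audio."""
    step = int(sample_rate * 60.0 / tempo_bpm / 4)   # samples per sixteenth note
    click = np.hanning(200) * np.sin(2 * np.pi * 1000 * np.arange(200) / sample_rate)
    out = np.zeros(step * len(pattern) * repeats)
    for i, ch in enumerate(pattern * repeats):
        if ch == "x":
            out[i * step:i * step + click.size] += click
    return out

pattern = "x.xx.x"                                    # an arbitrary short cell
steady = click_track(pattern, tempo_bpm=120, repeats=60)
drifting = click_track(pattern, tempo_bpm=121, repeats=60)   # slightly faster copy
n = min(steady.size, drifting.size)
phased = steady[:n] + drifting[:n]   # the parts slip out of sync, forming composite rhythms
```

Because one copy gains a tiny amount on the other with every bar, the combined track drifts progressively out of unison into shifting composite rhythms, which is the basic effect heard in Piano Phase.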


In 1976 and 1977 Reich studied traditional cantillation (chanting) of Hebrew scriptures in New York and Jerusalem. This ultimately led to the composition of Tehillim in 1981. Tehillim (meaning “Psalms” in Hebrew) is written for four female vocalists and orchestra. As a large scale orchestral piece and as his first foray into composition influenced by his Jewish heritage, it departs somewhat from the style of Reich’s earlier compositions. The four movements still maintain the pulsating elements associated with minimalism, but with a strong focus on melody.




With another piece, Different Trains, Reich’s style started to shift from minimalism toward postminimalism. Reich wrote the piece for string quartet and tape in 1988 as a reflection on being Jewish in America vs. in Europe during World War II. The idea for the composition came to Reich on a train while contemplating the stark difference between his Jewish upbringing in the US and what it might have been had his parents lived in Europe at the time of WWII. Different Trains uses a pre-recorded tape with voices from intercoms at train stations and the sounds of trains on railroad tracks. The stark difference between the American and European movements is shown through the string quartet’s mimicry of the rhythms of speech on the different tapes; the American movement is upbeat, while the European movement is reflective of war. Steve Reich’s influence continues to this day; he has been lauded as one of “a handful of living composers who can legitimately claim to have altered the direction of musical history” (The Guardian). In 2016 Reich turned 80, a milestone marked by a bevy of new releases from record labels and musicians. One of these releases, Third Coast Percussion’s Steve Reich, won a Grammy in February 2017, showing his ongoing contribution and influence on present-day musical culture. Reich’s work combining traditional instruments with electronics, originally avant-garde, has now become widely accepted among classical musicians and audiences from all walks of life.





“a handful of living composers who can legitimately claim to have altered the direction of musical history”






FROM SOUND TO LOGO
A Painter, a Dog, a Dead Guy and a Record Player
By Rian Noe

Down below is what we Americans think of as the RCA logo, having grown up seeing it everywhere. But the logo first belonged to British entertainment company HMV, which is in fact named after the original painting on which the logo is based, called “His Master’s Voice.” Francis Barraud was a Liverpudlian painter who had a brother named Mark. In the late 1800s, after Mark died, Francis inherited a bunch of his stuff: An early cylinder phonograph player, cylinder recordings of Mark’s voice, and Mark’s dog, a fox terrier named Nipper.

Francis observed that when he played the records of his dead brother’s voice, the dog would run over to the phonograph and listen intently. Francis painted the scene, calling it “His Master’s Voice”, and tried to sell the painting. Initially no one was interested. But in 1899 Emile Berliner, the inventor of the Gramophone, saw the picture in London, and in July 1900 he took out a United States copyright on it. The painting was adopted as a trademark by Eldridge R. Johnson of the Consolidated Talking Machine Company, which was reorganized as the Victor Talking Machine Company in 1901.


Victor used the image far more aggressively than its UK affiliate, and from 1902 most Victor records had a simplified drawing of Barraud’s dog-and-gramophone image on their labels. Magazine advertisements urged record buyers to “look for the dog.” In British Commonwealth countries, the Gramophone Company did not use the dog on its record labels until 1909. The following year the Gramophone Company replaced the Recording Angel trademark in the upper half of the record labels with the Nipper logo. The company was not formally called HMV or His Master’s Voice, but rapidly became identified by that term due to the prominence of the phrase on the record labels. Records issued by the company before February 1908 were generally referred to by record collectors as G&Ts, while those after that date are usually called HMV records.


The image continued to be used as a trademark by Victor in the US, Canada, and Latin America. In 1929, the Radio Corporation of America (RCA) purchased the Victor Talking Machine Company. In British Commonwealth countries (except for Canada, where Victor held the rights) it was used by various subsidiaries of the Gramophone Company, which ultimately became part of EMI. The trademark’s ownership is divided among different companies in different countries, reducing its value in the globalised music market. The name HMV was used by a chain of music shops owned by HMV, mainly in the UK, Ireland, Canada, Singapore, Australia, Hong Kong, and Japan. Interestingly enough, Barraud’s original title for the painting was “Dog looking at and listening to a Phonograph.”




MUSEUM OF ENDANGERED SOUNDS
Because memory needs them
By Olivia Solon

Do you miss the pleading bleeps of the Tamagotchi? Or the sound of a telephone rotary dial? You can now listen to these and other vintage tech noises at the Museum of Endangered Sounds. A character called Brendan Chilcutt created the online “museum” in early 2012 to preserve the sounds made famous by his favourite old devices, such as the “textured rattle and hum of a VHS tape being sucked into the womb of a 1983 JVC HR-7100 VCR” (ah, yes). As new products come to market, these nostalgia-inducing noises become as obsolete as the devices that make them. Chilcutt is actually an online persona created by three graduate students keen to break into the advertising industry – Phil Hadad, Marybeth Ledesma and Greg Elwood. Hadad told Wired.co.uk that the idea had been brewing for a while, but there were definitely a few “Aha!” moments.


“For instance, a while back I was sitting in the backseat of a car with two other friends. They were both texting or checking emails. One of them was using a Blackberry and one was on an iPhone. Although I could hear the typing of keys on the Blackberry, the iPhone didn’t make a sound. That for sure got me thinking of where we’re headed and what we’ve lost. Today an iPhone comes loaded with a sound library based on sounds that future generations will never have had direct experience with.” The Museum of Endangered Sounds currently features a rather limited collection, including the white noise of a cathode ray tube TV, the old Nokia ringtone immortalised by Trigger Happy TV and the strained buzzing of a floppy disk drive.




Chilcutt says that he has a “ten-year plan”, in which he will complete the data collection by 2015 and then spend seven years developing the “proper markup language to reinterpret the sounds as a binary composition”. He says on his site: “Imagine a world where we never again hear the symphonic startup of a Windows 95 machine. Imagine generations of children unacquainted with the chattering of angels lodged deep within the recesses of an old cathode ray tube TV. And when the entire world has adopted devices with sleek, silent touch interfaces, where will we turn for the sound of fingers striking qwerty keypads? Tell me that. And tell me: who will play my Game Boy when I’m gone?”

Hadad told Wired.co.uk that the team plans to develop an experience that “packages the sounds and lets people have even more control over their interaction with them. Maybe make them downloadable or string them into new listening experiences.” He added: “It’s so easy now to get caught up in an attempt to keep up with new technology. As soon as new gadgets or even new versions of old gadgets are introduced people line up to buy them. And plenty of us think that obtaining these gadgets will make us whole, or happier, or that we won’t be able to go on without them.

“But for us, when we hear something like the old dial-up modem, or the sound of a pay phone, it takes us back to a time when our lives were simpler. And we realized we were pretty happy. Or it looks that way now. Maybe it’s the struggle to live in the moment that has caused us to fall so utterly in love with these dying sounds, but we feel they’re worth preserving.”



Roberto Calzari | Yi Cao | Andrés Monino | Livia Stevenin | Irene Zanardi

