Audio Media International | www.audiomediainternational.com | March 2018
READY FOR BATTLE: Inside the scoring process for Star Wars Battlefront II
INTERVIEW: Engineer and Professor Susan Rogers on working with Prince
FEATURE: The life and legacy of legendary German producer Conny Plank
REVIEWS: We test products from Lewitt, PMC and others
CONTENTS
INTERVIEWS
13 Susan Rogers: The world-renowned engineer and producer on working with Prince
21 Russell Emanuel: Bleeding Fingers' chief creative on scoring Blue Planet II

FEATURES
17 Conny Plank: We explore the extraordinary legacy of Conny Plank, including his custom mixing desk
24 Star Wars Battlefront II: Ben Minto and composer Gordy Haab on creating the videogame music score for an iconic franchise
30 Post: Mixing and mastering audio for games, films and TV series

PRODUCT FOCUS
35 Microphones

REVIEWS
40 Lewitt 540 Subzero
WELCOME
A NEW HOPE
Experts in the issue

Susan Rogers holds a doctorate in psychology from McGill University, where she studied music cognition and psychoacoustics. She worked as staff engineer for Prince from 1983 to 1988.
Gordy Haab is an award-winning film, video game and television composer who has written music for Microsoft’s Halo Wars 2, EA’s Star Wars Battlefront II and Star Wars Battlefront I.
Alistair McGhee began audio life in Hi-Fi before joining the BBC as an audio engineer. After 10 years in radio and TV, he moved to production. Most recently, he was assistant editor, BBC Radio Wales and has been helping the UN with broadcast operations in Juba.
Hello and welcome to the March issue of AMI. If there are any Star Wars fans amongst our readership, which we're sure (and hope) there are, you'll be pleased to see that this month's cover star is none other than Captain Phasma, who everyone knows was in charge of the stormtroopers as part of the unofficial commanding triumvirate for the First Order.

You've probably guessed by now that this issue is dedicated to audio for films, TV series and, of course, video games, and as such we've got a host of special interviews and feature content with some of the world's top professionals creating and perfecting sound for the screen.

First up, audio director Ben Minto and music composer Gordy Haab take us behind the scenes of the scoring process for Electronic Arts' second installment of the Star Wars Battlefront video game, Star Wars Battlefront II. To give you a brief idea of the out-of-this-world scale of the project, Haab composed over 150 minutes of material for the game, while all the original music was recorded by engineer Steve McLaughlin with the 100-plus-member London Symphony Orchestra and the 80-member-strong London Voices choir at Abbey Road Studios. You can read more about this on pages 24-27.

From games to TV series, on pages 21-23 Russell Emanuel, the CEO and chief creative of production music company Bleeding Fingers, tells us how Hans Zimmer and his team of top composers created the score for the David Attenborough-narrated hit BBC nature documentary Blue Planet II. And on page 29 Stephen Bennett reports on the latest developments in the world of mixing audio for games, film and series.

Elsewhere in the magazine, on page 13, Professor Susan Rogers, who worked as Prince's staff engineer from 1983 to 1988, explains what it was like working with the star on some of his greatest hits, as well as how she taught herself about audio electronics to become one of the world's most renowned engineers and respected educators. On page 17 there's a very special feature on the legacy of legendary German producer Conny Plank and the custom desk he built, which is now located at David M. Allen's Studio 7 in Tottenham, London.

We've not forgotten all the regular opinion pieces, product focus and technology reviews you've come to expect from AMI each month, which will hopefully continue to inform your decisions about the projects you undertake or the next big pro audio purchase you make.

And finally, if there's ever a doubt in your mind about the buoyancy of the professional audio market (not that there ever was), it would be advisable to turn to page 6, where ISE MD Mike Blackman explains how this year's show saw record exhibitor and visitor numbers, which, he rightly comments, "all points to a positive future."

So until the next issue of AMI in April, may the force be with you.
Murray Stassen, Editor, Audio Media International
EDITOR Murray Stassen mstassen@nbmedia.com
DESIGNER Tom Carpenter tcarpenter@nbmedia.com
PRINT SUBSCRIPTIONS
SENIOR STAFF WRITER Colby Ramsey cramsey@nbmedia.com
PRODUCTION EXECUTIVE Warren Kelly wkelly@nbmedia.com
Audio Media International is published 10 times a year by NewBay Media Europe Ltd, The Emerson Building, 4th Floor, 4-8 Emerson Street, London SE1 9DU
ADVERTISING MANAGER Ryan O’Donnell rodonnell@nbmedia.com
EVENTS DIRECTOR Caroline Hicks chicks@nbmedia.com
ACCOUNT MANAGER Rian Zoll-Khan rzoll-khan@nbmedia.com
CONTENT DIRECTOR James McKeown jmckeown@nbmedia.com
To subscribe to AMI please go to www.audiomediainternational.com/subscribe Should you have any questions please email subs@audiomediainternational.com
Editorial tel: +44 (0)20 7354 6002 Sales tel: +44 (0)20 7354 6000
© Copyright NewBay Media Europe Ltd 2018 All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means electronic or mechanical, including photocopying, recording or any information storage or retrieval system without the express prior written consent of the publisher. The contents of Audio Media International are subject to reproduction in information storage and retrieval systems.
Printed by Pensord Press Ltd, NP12 2YA Print ISSN: 2057-5165
NewBay Media Europe Ltd is a member of the Periodical Publishers Association
NEWS
ISE MD MIKE BLACKMAN: 'IT ALL POINTS TO A VERY POSITIVE FUTURE'

Integrated Systems Europe saw its largest and busiest show draw record numbers of exhibitors and attendees to the RAI Amsterdam from 6-9 February. The number of exhibitors totalled 1,296, of which 294 were exhibiting at ISE for the first time, filling 53,000sqm of exhibition floor space over 15 halls. The number of registered visitors by the end of the show hit
80,923, an increase of 10.3% on the 2017 edition. This year’s gathering saw an opening address from architect and inventor Carlo Ratti that explored the ‘Senseable Cities’ concept, nine dedicated B2B conferences covering a wide range of AV topics, tours to three leading AV installations including the Amsterdam ArenA, and five days of professional development programming from AVIXA and CEDIA.
NEWBAY BEST OF SHOW AWARDS AT ISE 2018

After a lengthy judging process on the show floor of Amsterdam's RAI conference centre, NewBay's Best of Show Awards at ISE 2018 were revealed on Thursday, 8 February, with three deserving winners receiving accolades from Audio Media International. The awards recognise some of the best new pro audio products on the market from brands exhibiting at the trade show. NewBay publications PSNEurope, Audio Media International, Installation, and AV Technology Europe each gave out awards to products launched in the past year, since the last ISE Show.

The winners of the Audio Media International Best of Show Awards at ISE 2018 are: the 3000 Series Frequency-Agile True Diversity UHF Wireless Systems from Audio-Technica (pictured, top right), the LD Systems CURV 500 D SAT from Adam Hall Group (pictured, bottom right) and the PA-240Z from Kramer Electronics (pictured, bottom left).

Commenting on the award, Gabriel Alonso, product manager, Integrated Systems at the Adam Hall Group, said: "We are most proud of having received NewBay's prestigious Best of Show Award at ISE 2018. After the success of our CURV 500 solution, we wanted to offer even more flexibility and control to customers.

"The new CURV 500 D SAT duplex satellite
provides a different vertical dispersion pattern in order to meet the growing needs of our customers’ installation requirements, while keeping the same CURV 500 sonic signature.” Added Tim Page, marketing manager EMEA Professional Audio, Audio-Technica Europe: “We were delighted to receive a NewBay Best Of ISE Award for our newly launched 3000 Series wireless system. “The response it received from visitors to the show was extremely positive and it’s gratifying that the judges also appreciated the design, interchangeable mic capsules and other key features of the new system. The development of the 3000 Series has taken many months of hard work and all those involved are extremely happy to now be part of an award-winning team.”
The World Masters of Projection Mapping competition, a joint venture with the Amsterdam Light Festival which saw leading video artists project video artworks onto the EYE Filmmuseum in the centre of the city, also made its debut at the show.

"We were delighted with ISE 2018, and both exhibitors and attendees reported very high levels of satisfaction," said ISE MD Mike Blackman, speaking to AMI. "This is due to the support and commitment of all those involved, including the input from our co-owners AVIXA and CEDIA. The breadth of the on-site conferences, allied to the professional development programme curated by the associations, ensured that the event delivered on all levels."

Technology and trends at this year's show included the growth of IP and the crossover between broadcast and AV, the emergence of virtual, augmented and mixed reality technologies, and networked audio.

"We are very pleased that over 30% of attendees to ISE were visiting for the first time, a fact readily appreciated by our exhibitors in their quest to build more business contacts," Blackman added. "Almost 300 companies were also exhibiting for the first time, and for 2019 we already have more exhibition floor space reserved than the 2018 total of 53,000sqm. It all points to a very positive future…"

ISE 2019 will be held at the RAI Amsterdam from 5-8 February.
OPINION
BREAK ON THROUGH
Tobin Jones on how breaking out of the box can reinvigorate the creative process and take songs in different directions.
TOBIN JONES
With so many songs being conceived, written, recorded and mixed in the box, the involvement of the human body can become far removed from the creative process. Digital technologies have done wonders for musicians and music production, liberating a process which would have been out of reach for many musicians and only available to a small proportion of artists with financial backing. Today it is possible to create professional productions on a cheap laptop or tablet and put them online for the world to hear.

Much has been argued about analogue vs. digital: what is best, what sounds better, what gets better musical results. The ease of digital has certainly streamlined production times, and the simplicity of recalling in-the-box mixes means engineers can move fluidly between different songs. This can make the whole process much more creative. No longer
do they need to spend hours recalling an analogue desk mix to turn a backing vocal up by 2dB! Others feel that digital technologies have led to something being lost, making the music feel cold and clinical when compared with the warmth and character of analogue productions.

In reality there is no real war between these technologies. It's important for musicians, producers and engineers to use the equipment and technologies available to them to help produce music with character and emotion, and there is no single process or piece of equipment that will do this for them. What is really important is finding a process that helps the artist feel more connected with the music, and therefore the emotion within it.

When I work with artists who have been working on their music using DAWs and home recordings, it is often liberating for them when we start sending their stems through bits of analogue outboard. Not so much in the sense that it sounds different, but in that they start to view the song differently. It starts to become fun and tactile. Turning knobs and faders, driving pedals and amps always seems way more fun and appealing than using a mouse to input data. This can lead to the music taking a new direction and add a new dimension of emotion and creativity. We have all experienced the joy of using our bodies to achieve something, be it physical exercise, playing an instrument or interacting with other humans and animals. Touch is an amazing sense, and when we use it in music production it can be just as rewarding.

You don't need a big budget and lots of outboard to start breaking out of the box. Many of the pieces of equipment I have around the studio are relatively inexpensive; some don't work correctly, but that just adds to the fun. An effective way to start livening up your productions and mixes is by using guitar effects pedals. Equipment like this can add an extra layer of sonic texture to your productions and add eccentricities which give depth and interest to the whole production. As a listener it is these subtle differences in the sound which add interest and maintain a sense of mystery; when everything sounds the same we can quickly become bored and lose interest. Some of my favourite bits of kit for this are old spring reverbs that have come out of broken guitar amps, tape delays, fuzz pedals and old mixers. I have an old WEM live mixer that distorts differently on every channel, and the built-in output limiter has a really unique sound that works great for drum parts.

Using synthesiser modules is a lot more common
these days due to the popularity of Eurorack modules and the unique and creative boutique manufacturers who put their own wacky ideas into their modules. Guitar amps are great for everything, not just guitars: vocals, keys and drums can all benefit from a re-amped version adding an extra layer of character to the production.

You must remember that when adding a new externally processed signal in parallel to an already existing one, you must ensure the tracks are phase-aligned correctly. Just passing a signal out of the box and back in adds a small amount of latency and can throw your signals out of phase. I usually just zoom in on the waveforms to ensure they are correlated once recorded back into the DAW.

Interestingly, we seem to have come to a stage where some digital equipment or software sounds almost analogue and some analogue hardware feels almost digital. As a result we are seeing more and more digital hardware units released; for example, Strymon guitar pedals combine analogue circuitry and DSP processor chips to create unique and interesting sounds. Likewise, analogue modular synth manufacturers are also experimenting by incorporating DSP modules within their units. For example, TipTop Audio's Z-DSP module allows the user to insert a small digital cartridge into the module to change its function, and outputting CV allows the DSP unit to interact with traditional analogue modules. This shows that perhaps the interest in analogue units has less to do with the sound and more to do with the physical inclusion of the body in the creation process. Vintage digital equipment also has an aesthetic which has become synonymous with the sound of certain music genres; for example, the MPC sampler is still a sought-after unit for the creation of sample-based music, as its digital chips have a certain quirk or charm to the sound which users are drawn to.

Digital recording technologies have forever changed the music-making process and consequently also the role of the recording studio. So many productions are started at home, brought to a studio to work on the parts that require a studio, then taken away again and worked on further, going backwards and forwards between home and the studio. It's not just musicians that work in this way, but many engineers as well.

So what are you waiting for? Break out and start having fun with your productions and mixes!

Tobin Jones is owner and head engineer at The Park Studios, a recording studio in Wembley, London. www.theparkstudios.com
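Jones checks re-amp alignment by eye; the same check can also be made numerically. Below is a minimal sketch (not from the article) that estimates the round-trip latency of a re-amped track by cross-correlating it against the original stem, then slides it back into alignment. The file names and the use of numpy and soundfile are illustrative assumptions, not tools Jones mentions.

```python
# Illustrative sketch only: estimate and remove the round-trip latency of a
# re-amped track so it sums in phase with the original. File names and the
# numpy/soundfile dependencies are assumptions, not tools named in the article.
import numpy as np
import soundfile as sf

dry, sr = sf.read("dry_stem.wav")        # original in-the-box signal (mono)
wet, sr2 = sf.read("reamped_stem.wav")   # signal returned from the outboard chain
assert sr == sr2, "sample rates must match"

n = min(len(dry), len(wet))
dry, wet = dry[:n], wet[:n]

# Cross-correlate to find the lag (in samples) at which the re-amped track
# best lines up with the dry one; a positive lag means the wet track is late.
corr = np.correlate(wet - wet.mean(), dry - dry.mean(), mode="full")
lag = int(np.argmax(corr)) - (n - 1)
print(f"estimated round-trip latency: {lag} samples ({1000 * lag / sr:.2f} ms)")

# Slide the wet track earlier (or later) by the estimated lag, padding with
# silence, so the two parallel signals stay phase-coherent when summed.
aligned = np.zeros_like(wet)
if lag >= 0:
    aligned[: n - lag] = wet[lag:]
else:
    aligned[-lag:] = wet[: n + lag]
sf.write("reamped_aligned.wav", aligned, sr)
```

Most DAWs compensate automatically for the latency the audio interface reports; a check like this mainly catches the extra, unreported delay that external digital boxes in the chain can add.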
OPINION
A WAR OF WORDS
Dialogue in games is for so much more than just telling a story. Here, Creative Assembly’s Will Tidman talks about creating dialogue in a nonlinear environment.
WILL TIDMAN
On the surface, the dialogue in games can tell a story. The voiceover or dialogue in a cut scene, an in-game conversation or the internal monologue of a character all reinforces the narrative of the game. But the role of dialogue goes much further than telling stories, and this is what sets game dialogue apart from other mediums. It gives feedback to the player, influences decision making, builds immersion and develops character, whilst importantly giving games a human touch in a digital world.
The Role of Dialogue

Storytelling is of course a primary function of dialogue. Games are another form of entertainment, storytelling is key to engaging the player, and we now have scope to use voices to add to the realism of games and immerse players in new worlds. In Creative Assembly's Total War series, the situational awareness and immersion of the player is key to success in the game, and dialogue plays a large part in that. The feedback the player gets from characters on the battlefield can change depending on the conditions of the battle and may have a direct impact on the decisions the player makes. This could be direct feedback on the player's actions or other
parts of an army communicating with each other, commenting on events happening on the battlefield or reacting to the opponent's strength or weakness. Non-character-based dialogue, such as crowds, vocalisations or group voice over (VO), is also used to give a sense of scale, distance and the feeling of a space, as well as reacting to the gameplay. This all contributes to the feedback the player receives, in the same way music can change to let the player know what is happening around them. Immersion is also built with the mix and how the dialogue is placed within the 3D space. With a potentially huge number of characters and action on screen at the same time, the wrong balance and placement of voices can make the space feel unnatural and undo all the work that has gone into achieving immersion.
Artwork for Total War: Warhammer 2

Process of Production

Our process starts at the pre-production stage, working with the initial character development. We work with the development team to try to make each character as individual as possible, discussing how their traits will decide how they will sound, then fitting this into the voice characteristic of their specific race or faction. The writing further develops individuality, reinforcing the character's personality, and this sense of individuality and group-specific characteristics are essential for helping the player navigate the battlefield. For example, Total War: Warhammer 2 (pictured, right) has close to 100 voiced characters over four factions, so the variety that is created at this stage is essential for balance and clarity. Varied voices and scripts improve the overall dialogue experience.

Next comes casting. It is at this stage we get to realise the ideas we have had during pre-production, working with voice actors to create the unique character voices, then choosing a cast that also works as a whole, keeping the faction style consistent and matching each character. We record everything in-house at our own studios, and the same team working on pre-production and casting also directs the sessions. Having this consistency through the project, from pre-production to the final implementation and balance,
means we never have to compromise. In a non-linear environment, where there is no one single use of almost any asset, this approach is essential to us.

Once we have the cast and the recording is done, we go through post-production. With Total War: Warhammer we have a massive range of Monsters, Undead, Goblins and Dwarfs as well as human characters, so the post-processing can be a big part of shaping a character's sound, tailoring the process to each character's style and retaining intelligibility whilst often taking the human elements out of the voice. The importance of intelligibility, while creating an authentic experience, is also a challenge when creating our historical games. For example, if you are working with a non-English faction, you need to create authenticity to their culture, accent and language, while ensuring intelligibility when speaking in English.
The final part of the process is implementation: getting the dialogue into the game, then testing and refining. Here we get the assets into the game and balance the volume, filters and effects at different distances, creating an environment that is as realistic as possible. Being in-house, the dialogue team can adapt the mix and assets, or add new features as the game requires, with the ability to react and iterate quickly on development changes and constantly improve the quality of the project.

The scale of the voice production on a game can be enormous. A 12-minute TV animation will have 120-150 lines of dialogue; a film will vary wildly, but a two-hour feature could have 800-1,200 lines. On Total War: Warhammer 2 we have around 16,000 lines by almost 100 characters, and a project currently in development has almost double that line count. As there is no
predetermined order, all of these lines need to stand alone as individual files, triggered by events in the game, and work under every circumstance the line could possibly be used in.

Creating dialogue for games poses challenges from every direction. We have to be highly technical but also emotive, and we need to get performances from our hugely talented voice actors that make our otherwise digital characters human (or non-human!). Creating dialogue in a non-linear environment is an art and a science, and we juggle these opposing subjects as best we can to create the balance needed for dialogue to bring a game to life.

Will Tidman is a senior dialogue engineer at British video game developer Creative Assembly, having previously worked in dialogue and voice over for TV and radio.
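To make the "standalone files triggered by events" point concrete, here is a small, hypothetical sketch of the kind of lookup a dialogue system might perform when a battlefield event fires: every line is an interchangeable, self-contained file, and a variation is chosen at runtime while avoiding immediate repeats. The character names, events and file paths are invented for illustration; this is not Creative Assembly's actual system.

```python
# Hypothetical sketch of non-linear dialogue triggering. All names and paths
# are invented for illustration, not taken from Creative Assembly's tools.
import random
from collections import defaultdict
from typing import Optional

# (character, event) -> list of interchangeable, self-contained line files
LINES = {
    ("lord_commander", "enemy_routed"): [
        "vo/lord_commander/enemy_routed_01.wav",
        "vo/lord_commander/enemy_routed_02.wav",
        "vo/lord_commander/enemy_routed_03.wav",
    ],
    ("lord_commander", "low_morale"): [
        "vo/lord_commander/low_morale_01.wav",
        "vo/lord_commander/low_morale_02.wav",
    ],
}

_last_played = defaultdict(str)  # remembers the previous pick per (character, event)

def pick_line(character: str, event: str) -> Optional[str]:
    """Return one standalone line file for this event, avoiding an immediate repeat."""
    candidates = LINES.get((character, event))
    if not candidates:
        return None  # no coverage for this event; the game simply stays silent
    options = [f for f in candidates if f != _last_played[(character, event)]]
    choice = random.choice(options or candidates)
    _last_played[(character, event)] = choice
    return choice

# Example: a battle event fires and the dialogue system fetches a line to play.
print(pick_line("lord_commander", "enemy_routed"))
```

Because any line can be triggered at any time, each file has to make sense on its own, which is exactly the constraint Tidman describes.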
Credit: Jandro Cisneros
INTERVIEW: SUSAN ROGERS
SUSAN ROGERS
ON RECORDING PRINCE
In 1983 a phone call about a job opening in Minnesota changed Susan Rogers' life forever. Working as Prince's staff engineer for five years, she'd literally spend day and night behind the desk and tape machines to ensure that everything was well maintained and ready to record the star's almost endless stream of hits, as she explains to Murray Stassen...

"I didn't know any recording engineers," says Susan Rogers, when asked what inspired her to get started in the audio technology business. "And the ones that I saw were all male," she continues. "I probably read the name of a woman or two like Leanne Jones and Peggy McCreary on the back of records eventually. You wouldn't see them very often and it just didn't seem very likely that someone like me who came from a lower middle class, working class background [could become an engineer]. I didn't know any musicians and I didn't know anyone in the industry."

Against all odds, Rogers, a music fan and recording
technology enthusiast “studied like a fiend” on her own by night and secured a job at Audio Industries Corporation in Los Angeles where she trained as a maintenance tech by day. “They were right across the street from A&M Records and just south of Sunset Boulevard, so right in the middle of Hollywood,” she recalls. “They sold and serviced the top audio equipment. The most popular brand was the MCI console and tape machine.” In 1980 she was hired to work at David Crosby and Graham Nash’s Rudy Records, where she stayed until 1983, the year Prince told his management that he wanted to recruit a new technician from either New York or LA.
After a successful interview, Rogers was on her way to Minnesota to be Prince's staff engineer, a position she held until 1988, before going on to work with the likes of the Jackson family and Talking Heads' David Byrne. Rogers is now a Professor of Music Production and Engineering at the Berklee College of Music as well as the director of the Berklee Music Perception and Cognition Laboratory. She holds a doctorate in psychology from McGill University, where she studied music cognition and psychoacoustics. Here, Rogers recalls what it was like working with Prince on some of the most iconic tracks of the '80s…
Susan Rogers on the Sign ‘O The Times tour, Amsterdam, June 1987.
You started out as a technician and not an engineer. Could you tell us about that?
I overheard someone say that becoming a maintenance tech is a sure way to always have a job and that was what I wanted. I wanted in and I wanted to stay in, so I started investigating what it entails, and then I saw an ad in the back of the LA Times that said, 'Audio Trainee Wanted'. That just couldn't have been more perfect. This company, Audio Industries Corporation, had no more than a dozen people, but there were three or four technicians in the big tech shop there who installed this equipment in recording studios and repaired it. They took me under their wing. I learned from them during the day and I studied electronics, studio installation and recording techniques.

After Audio Industries, you went to work with David Crosby and Graham Nash in Hollywood…
They had a studio and I was going there frequently because their equipment would break, like equipment does. They were right up the street, on Sunset Boulevard in a place called Crossroads of the World, and the studio was called Rudy Records. It was actually called Rudy Records because Rudy was a dog that had belonged to David back when they had all lived in the San Francisco Bay area. Graham and David bought this studio and I think Steven [Stills] maybe had a piece of it as well. It was this one room in the heart of Hollywood. Beautiful location. They invited me to leave my job and become their regular full-time technician, taking care of their equipment and keeping it running. That was the next step as a professional, because that allowed me to be on sessions occasionally as an assistant engineer.

Did they have an MCI console in there and what other equipment were you using?
They had the big JH-500, which was MCI's top of the line
model. They had the JH-24 tape machine which, if you're thinking of cars as an analogy, is kind of like a Ford or a Chevy. Not a bad machine, but not a BMW or a Lexus. The monitors were custom designed by George Augspurger, who did a lot of work in Los Angeles. It was a one-room studio, but it was a large one, and we had a lot of folks from the West Coast crew come through, and that would include Bonnie Raitt and members of the Eagles and folks like Art Garfunkel and Jackson Browne.

How long were you there for?
They hired me in around 1980 and I left in the summer of 1983, when I went to work for Prince.

How did you end up working with him?
It was my lucky day. John Sicetti, an old boyfriend who was chief tech at Westlake Audio, heard from his boss, Glen Phoenix, that Prince needed a technician. He was just coming off the 1999 tour. I believe the movie Purple Rain had received the green light so he had a lot of work to do. He was going to be making a film, he was going to be making this soundtrack album. He was going to make this big move onto the world stage. The technician that he had working for him at the time was a local Minnesota guy who had not had the experience of working at a professional level where the turnover is really fast and the pressure is really great. Prince told his management to get someone from New York or LA. The management went to Westlake Audio and said, Who do you know, Glen? Glen went to his technicians and said, Guys, anybody want this job? Nobody wanted to work for Prince at that time because he was considered kind of a freak and it was Minnesota. They didn't want to leave their Hollywood jobs. Right away John said, That's Sue's job, that's my girlfriend! He had this Boston accent. It's Sue's perfect job! So he called me, he and I had gone our separate ways
but he called me and said, Sue, the dream job is waiting for you. Prince is looking for a tech. That was the moment that changed my life. I called Glen and said, Glen, that's my dream, he's my favourite artist in the world, how do I get this job? Glen and I had a conversation, he asked me questions about the technology. Glen and I knew each other. He knew that I knew this stuff. I could handle it. Even all by myself out there with no support industry in Minneapolis, if it broke down in the middle of the night, I could fix it. Prince's management interviewed me and we agreed on a salary. They leased a car for me. I packed up my stuff and off I went.

Were you the only full-time member of recording staff working with him at that time?
Yes I was. At that time, this was before Paisley Park was built, he had folks who were on retainer to work with him on tour; that included Rob Colby, who's now a famous front of house engineer, and the great LeRoy Bennett, who did his lighting. Those folks were on retainer because they worked for other artists. I was the only one who handled the studio equipment, and Rich 'Hawk Eye' Hendrickson handled the stage equipment. He set up and tore down and maintained the stage equipment so that Prince could constantly be rehearsing. He handled things like guitar amps and stuff and I handled the recording studio duties.

When Prince brought me in, his home studio was in a small bedroom in the house where he lived and he had some of the Purple Rain album already recorded. I came in and helped him finish the remaining tracks. It was a functioning studio. It was a decent studio but it was very small, you couldn't do much there. It was a one-room control room, so you couldn't record drums or a whole band. When Prince was working on songs like Darling Nikki that were machine driven, he was playing all that himself. He can do that at home and he did. He recorded that from home, but anything bigger, we needed to be out in Los Angeles at Sunset Sound. So when we worked at Sunset Sound in LA, Prince liked working with me, and Peggy McCreary. Peggy was a staff engineer at Sunset Sound. Later on, David Leonard also came to work at Sunset Sound and Prince liked having David around too. David and Peggy eventually got married, so Peggy is credited as Peggy McCreary on some of those records and Peggy Leonard on some others. When we went out on tour, a British engineer named David Tickle was Prince's FOH mixer for Purple Rain and he is credited with coming into the studio and doing some work with us on the Purple Rain record.

Do you remember the first session you worked on?
Yeah, the first tape he had me put up was Darling Nikki. I had just gone to work with him, I had been with him for about a week and I had been doing tech work because his studio was in disarray and he needed a lot of things repaired. I did this tech work for about a week while he was upstairs. I could hear him taking meetings and doing choreography, I could hear him playing the piano, which was right above my head. Playing Purple Rain and The
Beautiful Ones and Computer Blue. He was aching to get back into the studio but I had to install the console, repair the tape machine. It was a lot of work to do. I was all by myself. I worked day and night and got that done. The first tape he had me put up was Darling Nikki. He said, Get a rough mix, and then he left the room. I'll never forget that moment of pushing up those faders and going, Oh my god. I can't believe this. He had these big Westlake monitors in the wall there, five, six foot away, and under my fingertips in my control is Darling Nikki. It was a blast out of the speakers and I couldn't believe it. Being a professional listener, I had this experience of thinking, Oh my god, what's it going to be like when fans hear this? This is amazing. You're doing that thing where you're looking around like, Is anybody else hearing this? Can you believe this? This is going to be great!

The first recording he had me do was for Jill Jones and that was on this song called Mia Bocca. He had me put up a vocal mic because Jill was coming to do a vocal, so again pushing the tape and getting a signal ready to do a vocal. That was the moment I realised he expected me to do the engineering as well as the tech work, and it was frightening, this wonderful revelation.

Do you think that when they were bringing a tech in initially, he had it in his mind to bring in someone who could do both, fix everything as well as be an engineer?
Certainly. He just didn't realise there was a distinction between those two jobs, and why should he? He assumed if you were technical and knew the equipment and could repair it, you could do the job of rerouting the signal. I don't think he necessarily regarded engineering as an art so much as a skill set. He knew the surface level of the console like people can easily do. It's easy to twist the EQ knobs to shape the signal. It's easy to push up the fader to change the gain. It's easy to turn the pan pot left or right, but he didn't know signal routing on a deeper level. He didn't fully understand the patch bay, he didn't set things up from the get-go, he needed an engineer to route everything for him, to do that technical work. He needed someone to be his hands in that sense because his hands were busy with musical instruments and his thinking was focused on the musical aspect of it. He needed someone else to help him out with the sonic aspect.

Did his studio skills improve over the years or was that not something he was interested in?
It's not something he was interested in. He reached the stopping point and beyond that, he didn't want to go. He just wasn't interested. I can see that today when I see my students working with Pro Tools. They have no interest in it because they don't need to know the visual audio shortcuts; they don't work with it any more. I recognise that you reach a point and it's like, I don't really care to know. I've got people who will do that for me. I remember one time in the studio he asked me how to make a razor blade edit. He was really cautious about that. As soon as I showed him, he picked up the blade and he said, No, you do it. Maybe he didn't want to risk cutting his fingers, but you don't cut your fingers when editing. The point of the blade is nowhere near the tips of your fingers. He didn't want to know.
He must have had a lot of respect for what you did then? It’s hard to tell if he had respect because he wouldn’t compliment us directly, including his musicians. He was not very forthcoming with compliments but there were a few times, if he wanted you to know he approved of you, he would tell someone else that you were doing a good job. He’d usually tell them in earshot of you. He might say [to someone in the studio], Susan gets me, she knows what I’m all about, or he’d say, The only one with any energy around here is Susan. I’ve got a couple of birthday cards that he signed for me with half compliments. That’s who he was. If he approved of you, you kept your job. As long as you had your job that meant you were good in his eyes.
“The first tape Prince had me put up was Darling Nikki. I’ll never forget that moment of pushing up those faders and going, Oh my god. I can’t believe this”
What was happening at that point in terms of the technology you were using and the studio?
There were advancements that were not helping me because he wouldn't use them. Things like automation, console automation. We could've recorded mixes if we used automation, but he wouldn't use anything that was slow, so we mixed by hand. Another advancement was the popularity of the SSL console. Solid State Logic had built-in compressors and limiters and noise gates on every channel strip; that would have helped us immensely. But he didn't want to move off the ATI console that he was fond of at the time when I was with him. Eventually an SSL went into Paisley Park, but when I was with him we were using old-fashioned stuff.

Another advance was this tendency to lock two tape machines together to synchronise them so you had 48 tracks, not 24. I asked Prince about that once but he just shook his head; he wouldn't have anything to do with it because it was too slow. The great thing about that for him was that if you're limited to 24 tracks, it means that your arrangement must be constrained by 24 tracks. So while other people were piling up their mixes and arrangements and getting these big walls of sound, Prince knew if he couldn't make a concise musical statement with 24 tracks, he'd have to go back in and rethink the track. It really makes your arrangements very efficient. Fewer instruments have to carry more and say more.

He was a bit of a control freak then?
Yeah, he really was. He said to us once, The only asshole around here is me. And he was. It was 100% true. He was 100% in charge of every aspect of his business, not control for the sake of control. He needed control for the sake of efficiency. He wasn't one of those personalities that wanted to be a ruler or a tycoon. That was not Prince. He was not a lordly type. He was a working man. He had a strong work ethic. He could do more work in a month than people did in a year. He really, by an order of magnitude, worked more than others, so he needed a level of efficiency to allow him to keep up that pace. In that sense he controlled us so the machine would run efficiently, so it wouldn't break down.

Have you seen anyone else like that in the studio?
No. You see fragments of that. You might think of a young Mick Jagger who was on fire, great songwriter, great vocalist, but you're not going to see him give you that performance on guitar or piano, whereas another musician would. Imagine Keith Richards and Mick Jagger rolled into one; now we're getting close. You might see people who have abilities on many instruments and can also sing and write, but they don't achieve mastery to the degree that Prince does. They don't write as well, they don't play as well, they don't sing as well. The phenomenon of Prince was the number of things he was world-class great at and where he set that bar. As a guitar player he could compete with any of them, as a piano player with any of them, as a vocalist with any of them, as a writer with any of them. He was the best at many different disciplines and that's extraordinary.
FEATURE: CONNY PLANK
THE LEGACY OF CONNY PLANK
Legendary German engineer and producer Conny Plank was the man behind the desk for recordings by Kraftwerk, the Eurythmics, Killing Joke, Devo, Brian Eno, Ultravox, DAF, Gianna Nannini and countless others. Here, his son Stephan Plank and producer David M. Allen tell us about the legacy he left behind, including the custom desk he built which is now in London’s Studio 7.
By Daniel Dylan Wray
here’s a scene in the 2017 documentary Conny Plank: The Potential of Noise in which Jalil Hutchins of the hip-hop group, Whodini, is brought to tears when recollecting the memories of working with the German producer. So overcome by recalling Plank’s generosity and warmth when it came to capturing the group’s sound that he has to walk away from the camera. It’s a quick, almost fleeting, moment in the film but it’s one that simultaneously captures the essence of the person in question and also the breadth of his work. Plank may be synonymous with krautrock, having produced Can, Kraftwerk, Neu!, La Dusseldorf, Cluster, Harmonia, Guru Guru and seemingly just about every pioneering German group from that period fascinated by and fixated on expanding the possibilities of repetition, elongated grooves and merging precision playing with a loose, exploratory nature. However, as this scene with Whodini proved, his palate and accomplishments were much wider than this greatly adored genre of German music that over the years has morphed from having a cult-like following to seeing groups like Can be sampled by Kanye West or appear in films by Paul Thomas Anderson. Plank produced, engineered and hosted a wealth of artists across his career, including Eurythmics, Killing Joke, Devo, Brian Eno, Ultravox, DAF, Gianna Nannini and countless others. Plank did almost all of this from his home studio on an old farm on the outskirts of Cologne, Germany. “He was an ideal partner and creative mind,” remembers Cluster’s Hans Joachim-Roedius. “He became like a third member of Cluster later on.” Plank’s character is remembered as much as his skills behind the desk, as Joachim-Roedius recalls. “He was such a great personality from the beginning. Every meeting with him, either at work or otherwise, was a big gift for Moebius and me.” Plank’s role in sowing the seeds of what would become one of the most pioneering, innovative and influential periods in contemporary music was a crucial and early one. He produced and shaped countless names through periods of identity searching as they attempted to align a sound, skill and vision that would match their ambitions. He guided the work of Ralf Hutter and Florian Schneider (before they founded Kraftwerk) in the group Organisation and then into and through Kraftwerk’s first four albums. Although their first three albums have all but been disowned by the band now, with 1974’s Autobahn being seen as something of a ground zero for the group to go hurtling into the future - after Plank had brought them to this period
T
and helped turn the ignition in the car before they drove off.

Michael Rother of Neu! and Harmonia was a frequent collaborator with Plank, and he recalls the deep-set "understanding and enthusiasm" that Plank had for his work in Neu!. Not only does Rother praise his abilities in the studio, but his man-management skills in keeping harmony between the occasionally disharmonious pairing of Rother and his drummer, Klaus Dinger, were key to allowing the band the creative space and flow needed to form their beautifully dichotomous music. Rother and Dinger often clashed as people but were glued together because their talents made so much sense when placed together, merging hurtling and propulsive rhythms with immersive and dream-like guitar ambience swirling around them. Neu! only lasted three albums before calling it a day after relations soured for the pair, but Rother credits Plank with getting some beautiful work out of them during that period that perhaps otherwise may not have come. "It was incredible how precisely Conny could sense our ideas," Rother reflects. It was in fact the music of Neu! playing over and over again as Rother, Dinger and Plank worked in the studio that formed one of the earliest memories for Stephan Plank, Conny's son and the co-director of the recent documentary.

"My father was interested in authenticity, believability and innovation," Plank says of his father. This natural intuition for talent of all kinds would go some way to explaining Plank's varied and consistently strong catalogue, but it also offers a further insight into his abilities as a producer going beyond technical skills. Plank's work remains refreshingly free from a date-stamped production technique, the consistency of his output not so much tied to a style, tone or production method as to a personal one that allowed and extracted the best from the people that came to record with him. A sense of freedom. Back in 2012, John Foxx recalled his eye-opening experiences working with him: "He provided this safe place to operate and gave permission for you to do whatever you wanted, then he quietly recorded everything. He said he felt it was important to allow people to go mad in their own particular way – to go through all the boundaries in order to get something new."

Plank reacted to working situations, or potential working situations, based on what he thought he could bring to them or get out of them, and so this often meant passing on huge projects if he didn't connect with them in some way. He disliked Bono's bravado and ego and so passed on the opportunity to produce the monster-hit album The Joshua Tree, and when David Bowie came knocking to enquire about working with him on the record that would become Heroes, Plank and his wife, Christa, sensed that Bowie's drug habit and behaviour would not be conducive to the
environment of making music in his studio and home.

Such unique approaches to people, as well as sound, resulted in the shunning of genres and traditional convention, as far as his son is concerned. "He was not interested if something was krautrock or new wave; labels were not important to him. It was about the people who made the music and if they had the ability to be authentic in their art." This authenticity was achieved by tapping into the essence of the artists he worked with. His instructions in the studio would not so much be technical when trying to extract something he wanted out of an artist; according to his son, a favourite saying of his was "make me feel". This thought process and statement of intent is further backed up by his preference for recording live, specifically drums, as he would eschew the often sterile, compressed and contained sound that would become the staple production style around this period. The pairing of Michael Rother and Can's drummer Jaki Liebezeit on Rother's solo works is a glorious example of this. The life, vitality and skipping beat of the drums almost seem out of place and jarring against the pastoral wash of the guitars and ambience, but as the music evolves - with the drum beat marrying an almost country-like shuffle with the now notorious precision of the motorik rhythm - they soon interweave and create something staggeringly rich and alive. Something that, quite literally, makes one feel, such is the tangible nature of his production work.

One person who, back in England as a teenager, was most certainly being made to feel by the work of Plank was David M. Allen. Allen is a producer with a sparkling history behind him, having produced the likes of the Human League, The Cure, Depeche Mode and The Sisters of Mercy.
Conny Plank
"I heard a Can record when I was about 18," he recalls. "Although I was about 28 before I looked at my record collection and thought, 'Bloody hell, a third of them are by Tony Visconti and another third were Conny Plank.'" Allen, along with producer Laurence Loveless, owns and operates Plank's original custom-made mixing desk at Studio 7 in Tottenham. "Somebody told me that Conny's studio in Cologne had been sold," Allen says.
"So, one drunken night at about two in the morning, I was having a bit of a nostalgia search, going through the old Conny stuff, and then found the desk up for sale on the web. I bought it with a friend and producer of mine called Mark Ralph. It's a work of art, it really is. I couldn't bear to see it go without a fight. I think because I had a personal relationship with Stephan, he knew I'd look after it. To treat the old lady with respect, as they say."

Plank died in 1987 at just 47, leaving behind a legacy remarkable for his age. Whilst Allen himself never got to meet Plank, he did end up recording in his Cologne studio after his death, through working with the Italian artist Gianna Nannini, whom Plank had also previously produced and who requested they work in his studio. Allen would go on to produce further records there, such as for the Damned, and feels he's yet to visit a better studio. "I think it was the best designed studio I've ever been in," he recalls. "The control room, monitoring desk, lights, acoustics, equipment-wise... the best I have ever worked in. It also had a very human feel; you could really tell it was a guy that designed it, a person that had made it, not a professional designer. Although he did know his shit."

Allen's impression, despite his never having worked with or even met Plank, lines up with those of the people who did: that of a figure with a powerful human touch that often resulted in something that felt otherworldly. It adds to the ongoing paradox that the future of electronic music being shaped at the time was as rooted in the human as it was in any machine. When asked to explain what made Plank so unique as a producer, Roedelius doesn't pick out a technique or a studio memory or how he got a sound, but instead simply recalls the beauty of his "intelligence and soul".
What makes Conny Plank's custom-built mixing desk unique, as told by Laurence Loveless, Studio 7:

Conny's custom desk, which is now at Studio 7 in Tottenham Hale, London

You can learn a lot about Conny Plank by looking at his desk. He simply didn't see any limits to anything he did, and he went to unbelievable lengths to make the desk, customising every aspect of it. The desk is a one-off, totally unique, designed by Conny himself using components he relentlessly tested and upgraded - a process that went on from the desk's conception in 1970 until his death. It is hand laminated in wood from a cherry tree in his own garden.

The custom-made EQ section has been designed and modified so the circuitry changes characteristic as the cut/boost control is swept through the centre position, in order to assign Conny's own pre-specified frequency settings separately to boost and cut. Most desks run each channel from a 48V power source in order to incorporate phantom power, but Conny made a power supply that split the phantom power into its own section and then ran the power rail down to 35V, meaning that the desk runs hotter than other consoles. This gives the console an incredibly warm tone and a totally unique sound.

Its size was carefully designed to fit Conny's wingspan, so he could reach all of the faders without moving his chair from a central position between the monitors. It also has a unique patch bay incorporated into the desk's meter bridge, so that Conny could patch in any hardware in seconds without moving from his central position. One section of the desk was designed to split away from the main desk and be installed in his customised recording van, so he could drive it out and record on location. There is even a recording Conny made when Duke Ellington stopped by Cologne to rehearse with his band: Conny turned up in the van and Ellington agreed to let him record the session. He did a lot of remote recording like this - it was instrumental to records like Autobahn by Kraftwerk.
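One way to picture the asymmetric EQ behaviour Loveless describes is as a single knob that selects not just a gain but a different pre-set centre frequency depending on which side of centre it sits. The sketch below is hypothetical: the frequencies, gain range and names are invented for illustration and are not the desk's actual values.

```python
# Hypothetical illustration of an EQ band whose character changes as the
# cut/boost knob passes through centre: boosting and cutting are assigned
# different pre-set frequencies. Values and names are invented, not Plank's.
from typing import Tuple

def plank_style_band(knob: float, boost_hz: float = 3200.0, cut_hz: float = 400.0,
                     max_db: float = 15.0) -> Tuple[float, float]:
    """Map a knob position from -1.0 (full cut) to +1.0 (full boost)
    to a (centre frequency in Hz, gain in dB) pair."""
    knob = max(-1.0, min(1.0, knob))   # clamp to the knob's travel
    gain_db = knob * max_db            # linear taper, assumed
    centre_hz = boost_hz if knob >= 0 else cut_hz
    return centre_hz, gain_db

# A small sweep shows the centre frequency switching as the control crosses zero.
for position in (-1.0, -0.5, 0.0, 0.5, 1.0):
    freq, gain = plank_style_band(position)
    print(f"knob {position:+.1f} -> {freq:6.0f} Hz, {gain:+5.1f} dB")
```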
REPORT: BLUE PLANET II
TOP SCORES
Image: BBC Pictures
Following on from the global success of the first installment back in 2001, Blue Planet II was the UK's most-watched TV show of 2017 and continues to break viewing records around the world. Accompanying the extraordinary visuals is an epic soundtrack. Here, Russell Emanuel, chief creative and CEO of Bleeding Fingers, the production music company behind the score, explains how the music was created by a top team of composers, including Hans Zimmer. Murray Stassen reports...
Although it was only launched four years ago, California-based Bleeding Fingers has become a go-to music production house for some of television's most popular shows. It's not hard to see why. Not only does 'Bleeding Fingers' sum up the collective's work ethic in a profound, two-word statement of intent, but it also boasts some of the world's most talented composers amongst its roster of collaborators.

Russell Emanuel serves as chief creative and CEO of Bleeding Fingers and explains that the company "wanted to get into the field to counter the influx of companies coming into the custom music space".

"There was a massive gold-rush, with companies creating music for almost no money and sometimes actually for nothing, and creating music that was very much in the box, with almost no live instruments," he continues. "These poor composers were getting so little for a track, they would sometimes have to put out three or four a day."
Emanuel adds that, due to some composers being required to create up to six cues a day, the production values of these companies were often incredibly low. "This wasn't the composer's fault, they were just trying to make a living; it was more the fault of the company that was relying on just the back end," he says. "So, we essentially wanted to counter that.

"We felt there would be a need for a custom music company of quality; it was pretty much the same process as when we set up Extreme Music, my other music production company. We believe there's always going to be a customer that gravitates towards a high-end product. 'Reassuringly expensive' are the words that we like to use."

One of Bleeding Fingers' most recent large-scale projects was scoring the sequel to the BBC's David Attenborough-narrated, award-winning 2001 nature series Blue Planet. Blue Planet II aired in October 2017 and was the year's highest-rated show and the most-watched natural history title in over 15 years. More than 14 million people watched the premiere episode.
Emanuel served as creative producer for the score, while composer duties were shared by Hans Zimmer, David Fleming and Jacob Shea. Additional collaborators include Radiohead, whose track Bloom was reworked for the series. Here, Emanuel tells us how he got started in this business and how the scoring process all came together…

Could you give us some background on your own career?
I started, as many people do, as an assistant in studios throughout London, starting in Abbey Road, De Lane Lea and Marcus, all the famous London studios. I eventually became an engineer, then a producer, and finally a tour manager, over a protracted period of time. That's a very quick history, but I kind of graduated through the ranks and ended up managing a lot of very big, seminal punk bands at the time. I was on the road for, I believe, 18 years, and that was my intro into production music, because I needed to get off the tour bus. I kind of came across music for film and TV and felt like, 'those guys are having a nice easy life, and they get to go home on the weekend'.
What was the brief for this project?
Well, we’d just come off doing Planet Earth II, and that needs no introduction: the big, blue-chip, epic, panoramic, beautiful project for which we had to be very careful and very respectful of how we treated the sounds of the earth. Blue Planet II was kind of an evolution of that, being undertaken by yet another dedicated team of scientists from the BBC Bristol Natural History Unit. The brief of it was to create another project exploring the depths of the ocean, and so we asked ourselves how we could sonically come up with a signature for that, you know. What’s the sound of underwater? That was really the brief of it. They were a team of very experienced filmmakers and they wanted to do something different and original. Of course, different and original could probably describe every brief we get, but here it came wrapped in an enormously challenging show, one for which you could instantly see spectacular imagery and fantastic stories. So, the brief kind of writes itself, but we’re blessed with a team of composers who are overachievers.

What were some of the main challenges that this particular project presented you with?
Once we’d decided on our approach, we wanted to create what we termed a Tidal Orchestra, and that was its own challenge. Essentially, what we wanted to do was allow the musicians in the orchestra to create an ambient tone that would reflect the ebb and flow of the tides and the oceans. We did that by creating tiny movements within each instrument. For example, within a section such as the violins, if we had twelve violins, no two of them would play at the same time; this allowed us to create these tiny brush strokes. The challenge was recording a large orchestra in this way, then piecing it together and weaving it into the bigger score. Even once we’d created this sound, we had to go back and make it work for the entire project. When you’re doing a score like this, you’re essentially scoring four movies: it’s seven hours, and that’s a lot of music. The hope and challenge coming into that is that you need to realise that what sounds good at first might not sound as good nine months into the project. So, I’d say the real challenge is keeping the whole project on track all the way through.

How does the scoring process work? For example, how many composers are involved, and are different scenes assigned to different composers or teams of composers?
There were three composers: Hans Zimmer, David Fleming and Jacob Shea. Each episode would be divided up into scenes as it came in, and it was as simple as the composers sitting down and just choosing ones that spoke to their sensibilities. It seemed to work very well, and there weren’t very many fights.
Could you describe an average day working on the score for this type of programme?
Apart from the composers, there’s a team of score producers, coordinators and assistants. It’s a pretty large team. I think on any one day you’ve got at least ten people working on a score. As the project went along, deadlines got tighter. But to give you an average day would be rather difficult, because at the start of the process you’re spending your days writing, and by the end you’re coordinating orchestras in London and Venice, and still writing because there are scene changes in LA. It’s a big undertaking logistically, and I think no day was the same.

How long did it take from beginning to end?
Around nine months, right from where we started to final delivery.

Were there any standout moments from the process that you’d like to tell us about?
There were many; it’s hard to pinpoint one. As I quickly think about it, the moment I saw the giant trevally scene - where the ginormous fish jumps out of the water and snatches up a bird in mid-air - was a standout. Everyone was really amazed by that. I also think that when we understood that this show had an ecological responsibility, we really felt like what we were doing was significant, and it doesn’t get more standout than that.

What was it like working with Radiohead on the track Ocean Bloom?
It started out with the idea to work with Radiohead, and we found out that the song Bloom, which they had created some years earlier, was actually inspired by the original Blue Planet. It just seemed like an opportunity not to be missed. It started with them providing us with the piano chords and the vocal stems, and then they got out of the way and allowed us to create something without any handcuffs. They were very generous, and then as we started to play them what we were doing, they were extremely encouraging and it just all fell into place very naturally and organically. Then, at the very end, when we were getting ready to finish the project and re-record the vocals, it all just clicked. It was really a dream project.

Was David Attenborough involved in the scoring process at all?
Sadly no. We would have loved to have had him involved in the scoring process. The only thing you can say is that his voice is an instrument in and of itself. As with Planet Earth II, we were very mindful that we needed to be sympathetic to how his voice was such a character, and just stay out of the way. It’s a part of the overall sound.
“On any one day, you have at least ten people working on a score. It’s a big undertaking”
Image: BBC Pictures
REPORT: GAME AUDIO
USING THE FORCE: STAR WARS BATTLEFRONT II
Senior audio director Ben Minto and composer Gordy Haab on creating a best-in-class interactive videogame music score for an iconic franchise
By John Broomhall
Extending the Star Wars music canon is both a tremendous privilege and a weighty responsibility – one that’s taken extremely seriously by Electronic Arts’ star development studio DICE, who lead a globally distributed team in bringing the world cutting-edge videogames of the highest quality and reputation. Famed for their superbly crafted, multi-award-winning ‘Battlefield’ series, it came as no surprise that they should be entrusted with the hallowed Star Wars universe and have respected audio director and all-round audiophile Ben Minto reprise his role for their second interactive foray into the beloved franchise. He, in turn, has brought Gordy Haab back for an encore, the composer DICE originally hired for Battlefront ‘1’. “I wanted a playful John Williams-esque score that felt like the original film trilogy. We ended up with an excellent shortlist of around 15 showreels.
Gordy’s work stood apart from the rest as the obvious choice. It already felt part of the universe, perfectly complementing the parts of John Williams’ original score we were using,” explains Minto. “I made sure we used a large-brush-stroke approach when briefing Gordy, to allow us as many implementation options as possible as the game was developed through multiple iterations. For instance, linear cutscenes become more defined over a period – how long is the scene, where are the cuts, what comes
before and after this scene – and so on. For each of the interactive sections, where music replay must respond at run-time to player actions, game events and conditions (whose outcomes and timings are unpredictable), you want a group classification and a common implementation – e.g. ‘play once’ cues: ‘you did it’; ‘you failed’; ‘here is Darth Vader’; ‘there he goes again’. I discuss music requirements with each stakeholder and communicate what’s needed to Gordy – how music will work and therefore how it should be constructed…
“Once a brief is set and understood, writing and delivering orchestral ‘mock-ups’ using sample libraries ensues, allowing everyone involved in approvals to hear how the final recordings will likely sound. I champion and guard the overall music feel and give primary feedback. If I’m happy, the cues go to Lucasfilm. Once all feedback has been worked through, we’re clear to record and use the resultant cues.” Each recorded cue is matched with its ‘in-game asset’ name. One-shot assets are topped and tailed
then imported into the game engine software in the correct location, determined by name and folder. Meanwhile, most longer cues have an intro and sustaining loop section set up before import. A desired playback loudness is defined for each group of cues. For fairly consistent, less dynamic action pieces, this might be a global amplitude reduction either baked into the asset or set within the game engine. For more dynamic pieces, the team find that manually drawing a slowly varying envelope and baking it into the asset provides the most pleasing and consistent results.
When it comes to interactive music implementation, playback parameters and methodology will mostly have been set up ahead of time and prototyped using test music. Minto: “Let’s say you start a piece of music that helps define the planet you’re on as you start a level. There are so many questions to consider. When should it stop? Should it loop? Should we be able to stop it before its natural end? The loading music should loop whilst we are still on the loading and spawn screens; once we enter the game, the music should fade by 6dB over the first 20 seconds and then (if no other music gets a call to play) either play to its natural end (end of loop into release), or, if someone within five metres of the player fires (or the player themselves fires) or someone shoots at the player (an enemy projectile passes within 1 metre of ‘the listener’ position), then we should fade it out quickly over 1.5 secs. This fade is nicely covered by the blaster fire. What happens if we want to play a new piece of music (Darth Vader has arrived) before this one ends? How do we crossfade? Do we go back to the other piece once that stinger’s finished? Do we want to? We need to quickly crossfade into the Darth Vader stinger as that has more narrative importance. Once the stinger has played
there is no need to go back to the original piece. How do we mix the music against blaster fire? When should music dominate? When should it sit back in the mix? In this example of level intro music, this music cue has less of a narrative purpose once the level has loaded, and when a firefight begins that is the most important source of information for our players. There is no conflict at this point - the firefight dominates so we remove music from the scene. Should this balance be fixed or dynamic depending on context or narrative? Should we EQ the music when we
are trying to push pertinent dialogue through, or use a side-chain process to duck the music around the VO?” To check all this, the team conduct daily playtesting, ensuring that end users – who know nothing of the mind-boggling complexity of implementation just under the hood – are immersed and engaged in a seamless entertainment experience, one whose music score feels both authentic and created especially for their journey, helping place them at the centre of the Star Wars universe and never breaking the spell.
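To make the shape of those rules concrete, here is a minimal sketch in Python of the level-intro behaviour Minto describes: duck the cue by 6dB over the first 20 seconds of play, fade it out over 1.5 seconds if blaster fire lands near the listener, and yield immediately to a higher-priority stinger. The names and structure are invented for illustration and are not taken from DICE’s actual Frostbite tooling.

```python
# A minimal sketch (invented names, not DICE's actual implementation) of the
# level-intro music rules described above.

ENTRY_DUCK_DB = -6.0       # fade by 6dB over the first 20 seconds of gameplay
ENTRY_DUCK_TIME = 20.0
COMBAT_FADE_TIME = 1.5     # quick fade-out, masked by nearby blaster fire
FIRE_RADIUS_M = 5.0        # someone firing within 5 metres of the player
PROJECTILE_RADIUS_M = 1.0  # enemy projectile passing within 1 metre of the listener
SILENCE_DB = -96.0

class IntroCue:
    def __init__(self):
        self.gain_db = 0.0
        self.elapsed = 0.0
        self.fading_out = False
        self.finished = False

    def update(self, dt, nearest_shot_m, nearest_projectile_m, stinger_requested):
        """Advance the cue by dt seconds given this frame's game events."""
        if self.finished:
            return
        self.elapsed += dt

        # A narrative stinger ('Darth Vader has arrived') or nearby fire
        # triggers a fast fade; we never return to the intro piece afterwards.
        if stinger_requested or nearest_shot_m <= FIRE_RADIUS_M \
                or nearest_projectile_m <= PROJECTILE_RADIUS_M:
            self.fading_out = True

        if self.fading_out:
            self.gain_db += SILENCE_DB * (dt / COMBAT_FADE_TIME)
            if self.gain_db <= SILENCE_DB:
                self.gain_db = SILENCE_DB
                self.finished = True
            return

        # Otherwise apply the slow -6dB duck over the first 20 seconds of play.
        progress = min(1.0, self.elapsed / ENTRY_DUCK_TIME)
        self.gain_db = ENTRY_DUCK_DB * progress
```

In the real game this logic lives in the audio engine’s event system; the point of the sketch is simply that each of Minto’s questions becomes an explicit, testable rule.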
COMPOSER’S NOTES
Scope
Gordy Haab (pictured) composed over 150 minutes of new music for Battlefront II alone. In-game, you’ll also hear his original score from the first SWBF title plus themes from John Williams’ acclaimed movie soundtracks. All original music was recorded at 96kHz/24-bit with the 100-plus member London Symphony Orchestra and 80-strong London Voices choir at Abbey Road Studios by engineer Steve McLaughlin. Mixing and mastering duties fell to Los Angeles-based engineer Steve Kaplan, who produced 48kHz/24-bit stereo masters.
Function
Music is used in three distinct ways. Haab explains: “First: menu music – setting the proper tone for the player from the outset is crucial. In loading menus, we have a very long, single, looping 20-minute music cue with multiple start points, allowing the music to feel different each time, since you’ll likely not be in the loading menu for long. But if you do hang around, the full 20 minutes will eventually seamlessly loop back on itself. There are also bespoke menus associated with environments. For example, Starkiller Base has its own specific score.
“Second: single-player campaign. Music functions in two ways – with horizontal interactivity during gameplay, where events trigger certain cues within the horizontal timeline, or as set-piece dramatic/emotional scoring for cutscenes, which help unfold the story and are composed ‘to picture’ just as John Williams would score movie scenes. It develops from one cutscene to the next to create an overall story arc.
“Third: interactive music for multiplayer gameplay. The musical language Star Wars entails is complicated enough, but making it interactive is a whole new level. The system works both horizontally and vertically, meaning that, as the player progresses, events trigger music to play within that timeline. Say the end of a battle is impending: a high-energy battle cue is triggered. When you ‘win’, the music system finds a logical transition point to trigger a ‘winning fanfare’. The challenge with composing horizontal interactive elements in this style is that, by design, the music is never in one tempo, meter, key or instrument group for more than a few seconds – it’s what makes Star Wars music so exciting. Designing it to be interactive is an exercise in harmonic and rhythmic modulation and transition-writing like none I’ve experienced. But the interactivity also works ‘vertically’. In many video games, that will mean events triggering additional layers to be added or subtracted, producing changes in intensity, but a layering approach wouldn’t work well for Star Wars music. To sound truly symphonic like the movie scores, all parts of the orchestra are required no matter
what the intensity level. So rather than create additive layers, I composed multiple versions of each piece of music. Each interactive piece has three uniquely composed versions following the exact same timeline, tempo map and overall musical form. Each version is re-composed, re-voiced and re-orchestrated using the full ensemble, and each is recorded independently. In any given moment, whatever the intensity level, it’s always the entire 100-piece orchestra playing as an ensemble, to maintain the truly pure symphonic sound Star Wars deserves.”
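Haab’s ‘vertical’ system can be pictured as three complete, synchronised recordings of the same cue, with the game switching between them only at musically safe points. The following Python sketch is a speculative illustration of that idea – the version names, intensity thresholds and transition bars are assumptions made for the example, not details of the actual Battlefront II music system.

```python
# Speculative illustration of 'vertical' interactivity: three fully orchestrated
# recordings of one cue, switched (not layered) at musically safe points.
# Version names, thresholds and transition bars are assumptions for the example.

INTENSITY_VERSIONS = ("low", "medium", "high")
TRANSITION_BARS = {4, 8, 12, 16}          # bars marked as safe switch points

def pick_version(intensity):
    """Map a 0.0-1.0 gameplay intensity onto one of the three recordings."""
    if intensity < 0.33:
        return "low"
    if intensity < 0.66:
        return "medium"
    return "high"

def next_audible_version(current, bar, at_bar_boundary, intensity):
    """All versions share one timeline and tempo map, so a switch is simply a
    crossfade between synchronised recordings at a marked transition bar."""
    wanted = pick_version(intensity)
    if wanted == current:
        return current
    if at_bar_boundary and bar in TRANSITION_BARS:
        return wanted                     # e.g. battle ending: jump to 'high'
    return current                        # otherwise hold until the next safe point
```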
Process
Haab begins at the piano with pencil and paper, choosing to work the same way John Williams would, which forces him to develop musical ideas in his ‘mind’s ear’, without the luxury of trial and error: “Beginning with just my ears, a blank page and a pencil has a way of quieting my mind and allowing me to write my best music. Once I have a good sketch, I work closely with one of my team members, Sam Smythe or Marco Antonini, to create a synthesised mock-up. This is passed along to Ben or Olivier Asselin at Motive for either approval or some back and forth. Once approved, I jump onto the computer and finalise my orchestration, which is then passed on to a team of copyists at Black Ribbon Pro who prepare all the individual parts for the recording session. Rinse and repeat!”
FEATURE: POST
Matthew Martin, co-founder and managing director of Immersive VR
POST MASTERS: MIXING SOUND FOR THE SCREEN
VR, AR and immersive experiences are becoming an increasingly important aspect of audio for games, film and television, and the professionals responsible for mixing sound for these formats are having to adapt to an evolving industry landscape in order to produce the highest quality audio, regardless of the final playback system, as Stephen Bennett reports...
It used to be so easy, adding music, atmosphere and sound effects to accompany visual media. The sound engineers usually had access to the production relatively late in the process, got a few musicians together, pointed a couple of microphones at them and banged some coconut shells together for the effects. You only had to worry about mono or three-speaker mixes for the cinema, and television’s tiny elliptical speakers were a lost cause anyway. Today, sound mixers for film need to supply high-quality audio that works on playback systems ranging from the cardboard speakers that seem to be a feature of most flat-screen televisions through to Dolby Atmos-equipped cinema complexes, while those working in television must meet the demands of an increasingly sonically sophisticated home-based audience who are buying ‘high-fidelity surround receivers’ and ‘sound bars’ in droves. And that’s before we start to think about the challenges
and opportunities brought about by the explosion in new media formats, such as the non-linear nature of computer games and the immersive audio required for Virtual Reality (VR), 360-degree and Augmented Reality (AR) applications. So how are those at the coalface approaching the final sound production that’s required to meet the needs of the various media? Rich Aitken has been working in the music industry for twenty-three years, most of them as a sound mixer whose work straddles film, television and computer games, winning an Ivor Novello award for Guerrilla Games’ Killzone 2 with the composer Joris de Man. “For film, I mostly deliver 5.1 surround stems to dub stages,” he says. “There is a very definite handover date and generally I have a near-finished set of film reels to put to the music.” For television, Aitken says that most music mixes he does are in stereo, as there seems to be less demand for stems. “Although I do them as a matter of course anyway,” he adds. Sam
Girardin is CEO of GameOn, a Montreal-based game audio production company offering game designers everything from Foley and dialogue to sound effects and music; their credits include work for Ubisoft and a 61st MPSE Golden Reel Awards nomination for WB Games Montréal’s Batman: Arkham Origins. Girardin’s background is in music production, but the move to game sound was almost inevitable given the lack of resource in this field when the company was founded in 2002. Even though games are, by their very nature, non-linear, Girardin approaches the creation of the sonic landscape in quite a traditional fashion. “In an interactive game the flow of each part is linear, so you need to have this kind of mindset to create convincing dialogue,” he says. “Most games now have a story which branches a lot, so our job as an audio provider is to still think in a linear fashion and mix the dialogue in a similar way to a film.” Paul Zimmer is a games composer and sound designer at ZAP! (Zimmer Audio
Production), whose clients include Electronic Arts (EA) and Disney. “I’m interested in continually developing fresh approaches to composition and sound design,” he says. “Sometimes this can be achieved by pushing technical boundaries, but often it’s as effective to try different compositional approaches and experiment.” The technology, hardware, and mixing and mastering environments used by games sound designers to create their music scores, Foley and sound effects are similar to those used in most areas of audio production. For example, Girardin uses Avid’s Pro Tools, Aitken works mostly ‘in the box’ for reproducibility (alongside a few choice hardware units), whilst Zimmer is a great fan of virtual instruments, although the latter believes that more traditional elements are also important when mixing for games. “Over the years I’ve become aware that the most important technology is ensuring that you can trust the sound that you are hearing when mixing and mastering,” he says. “The key to that is good monitor speakers, a well-treated room and acoustic optimisation. How good the final mix ends up sounding all flows from that.” Girardin mixes in what he calls “a (very nice) living room-type environment” in 5.1 surround and makes sure that he auditions his work on the same kind of playback systems as will be used by the gamers themselves. Aitken says that mixing audio for the various media has become remarkably similar. “I mix in a cinema-style calibrated room so I tend to approach those kind of levels,” he says. “This has the wonderful side effect of being pretty much in line with television levels and today’s requirements for Spotify and so on. I send any artist-derived music off for mastering, and I do master soundtracks, but they are mostly ones I haven’t mixed.” Zimmer believes that technology can only take you so far when producing quality audio for games. “It’s important for me to find an interesting angle for every piece I work on – even if it’s not my favourite genre.” While television and film mixes are usually done to a
complete (or near-complete) visual edit, it may not be the case with other media. “With games, I’m rarely given anything to mix to; it’s not a linear medium,” says Aitken. “Sometimes I get some video which helps in the pacing and allows me to see and hear how much space there is for music. It used to be the case that
I’d be providing mixable stems or loops as well, but this seems to be less of a requirement than it used to be; the interactive element of games has evolved to a much more creative level than simply adding or subtracting layers, and it’s focused much more on how the music is written.” When it comes to the placement of the audio assets within the game itself, the methodologies and technology differ somewhat from the dubbing of audio in film and television.
Game engines such as Unity Technologies’ Unity enable the sound designer to place the mixed and mastered audio within the game, and help the audio engineer and game designers define how it reacts to the characters and their environments. Girardin uses Audiokinetic’s Wwise, which he calls the ‘Pro Tools of game audio’. “It’s not an editing system as such, but a huge mixing matrix which allows you to synchronise your animation with your audio and helps make sure your ambiences and atmospheres are correctly rendered,” he says.
A new area for composers and sound designers is the emergence of VR, AR and 360-degree technologies that require suitable accompanying ‘three-dimensional sound’. One of the issues faced by audio mixers in this field is that it’s not just the audio that must be placed in the soundfield to line up with the audience’s direction of sight; it must also reflect the simulated, and ever-changing, environment being presented to the consumer. The ability to produce immersive audio has been with us for some time, whether using binaural or multi-microphone array techniques, but it is the advent of powerful computer control that makes it possible to align these recordings with the head movements of a gamer wearing a VR headset. Norwich-based Immersive VR, founded by managing director Matthew Martin and technical manager James Burrows, has been producing VR-based material for the likes of Ikea and Yamaha for some time, but they soon realised that the technology and expertise could be put to good use in the production of VR-based games. “Our game, Primal Reign, started off as a fun internal project that quickly escalated into a project for public release,” says Martin. To date the company has created several VR gaming titles as well as several mobile gaming products for various brands. Burrows oversees the audio side of the business, using typical Digital Audio Workstations (DAWs) such as Cockos Reaper and Steinberg’s Nuendo for mixing and mastering. “My background is in music recording,” he says. “I make sure the audio works in iPhone earbuds, DT150 headphones and NS10 speakers first, and then I’ll perform the spatial positioning.” Burrows says that because VR headsets are expensive, it’s likely that the consumer also has a decent set of headphones. “We use binaural and spatial audio design inside the Unity game engine,” he adds. “For a recent project we used rifle mics in an array, which we then mixed using Facebook’s 360 spatial audio tools to get tracked point-source audio. It takes your audio assets and puts them out as multi-channel, binaural or ambisonic formats.” However, those working on mobile-gaming platforms could end up being frustrated in all their efforts, as it appears that, according to Girardin, the majority of these gamers play with the sound switched off.
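At its simplest, aligning a spatial mix with a listener’s head movement is a rotation of the recorded soundfield. The snippet below sketches the idea for first-order ambisonics (B-format W/X/Y/Z) in Python: counter-rotating the horizontal components by the head yaw keeps sources anchored in the virtual world as the head turns. Channel ordering and sign conventions differ between toolchains, so treat this only as an illustration of the principle, not as how Facebook’s 360 tools or Unity implement it.

```python
# Sketch of head-tracked soundfield rotation for first-order ambisonics
# (B-format W/X/Y/Z). Sign and channel conventions vary between toolchains;
# this only illustrates the principle, not any specific product's maths.
import math

def rotate_bformat_yaw(w, x, y, z, head_yaw_rad):
    """Counter-rotate one B-format sample frame by the listener's head yaw,
    so sources stay anchored in the virtual world as the head turns.
    W (omni) and Z (height) are unaffected by a pure yaw rotation."""
    c = math.cos(-head_yaw_rad)
    s = math.sin(-head_yaw_rad)
    return w, c * x - s * y, s * x + c * y, z
```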
In story-driven narratives, the dialogue and performance are all-important. However, the quality of the dialogue in the final mix is only ever going to be as good as that captured on set or in the studio; if you need to do further dialogue replacement (ADR), something that many directors abhor, the performance, sonic consistency and narrative continuity may suffer. BAFTA and Oscar-winning production sound mixer Simon Hayes has a long track record in film sound production, and his recent work includes the upcoming Guy Ritchie film Aladdin and JK Rowling’s Fantastic Beasts and Where to Find Them, directed by David Yates. Hayes works on what is arguably the most important element in creating a compelling narrative in film and television production, on-set dialogue, so he’s understandably concerned as to how the actors’ performances emerge from the mixing process. Hayes says that the actual sound technology used on set hasn’t changed that much over the last five years, but there have been important changes that have helped improve the audio quality we have come to expect, especially in television productions. “High-end television now has production values as good as any feature film,” he says. “In high-end television production, we now have adequate crew to make sure that the audio quality is as good as any feature film.” What this means in practice is something that Hayes is keen to stress as essential: multi-camera productions generally now have two first assistant ‘sounds’ (formerly boom operators) and a dedicated second assistant covering radio mics and acoustic integrity on set. “That’s the same crew we’d have on a $150m feature film,” he adds. Hayes believes that a boom-placed microphone will always provide the best sound quality on set, but that for wide shots it has been usual to use radio microphones, as booms would be visible. While these systems have improved in quality over the years, they may sound quite different from the boom microphones and can also be subject to costume-related noise interference that’s hard to remove. Interestingly, this issue is being tackled not by improvements in audio technology, but by the use of visual effects. The television series House of Cards was created by, amongst others, the director David Fincher who, Hayes says, is notoriously obsessive about the quality of the audio. “Fincher asked the sound mixer if it would help if booms could be used in the wide shots, where they would usually be visible, as well as in the close-ups, so that you could get almost perfect, and consistent, sound,” says Hayes. “Fifteen years ago that would mean painting out the booms frame by frame, but now
they can use what we call a ‘plate shot’.” Hayes explains that the booms are taken out of the shot after the clapperboard has, well, clapped, to give the visual effects houses a boom-free frame to work with. “The booms are then swung into the edge of the close-up within the top half of the wide shot. These can then be matted out of the wide shots simply, cheaply and effectively,” he adds. This method is now in common use across the industry, and Hayes has in fact used it himself, notably on Tom Hooper’s 2012 film Les Misérables. It appears that the tools and methodologies that sound mixers use to produce the sonic assets for the various media are quite similar. It is how those assets are then placed in the sound field, whether it be stereo, 5.1, Dolby Atmos or VR-based immersive audio, where differences begin to manifest themselves, with sophisticated new tools being developed to assist engineers in this increasingly complex process. Otherwise, the ‘traditional’ methods of capturing the performance, using the correct recording and mixing spaces, applying care and attention to the final mixes and masters, and being aware of the ways consumers will be listening to the work are all important criteria for audio engineers, whatever the field they work in. But perhaps the most important change over the years is an acknowledgement amongst media commissioning companies, directors and producers that high-quality productions require the accompaniment of commensurate high-quality audio.
www.simon-hayes.com
www.soundlister.com/portfolio/rich-aitken/
www.gameon.studio/en/
www.zimmeraudio.com
www.immersivevr.co.uk
First past the post
As various new listening formats continue to see increased implementation across the board, creative opportunities are becoming easier to access for post production professionals. Coupled with the need to adhere to specific audio standards and deliverables, mixing and mastering for TV, film and video games has become something of a balancing act. The explosion of immersive formats such as Dolby Atmos and VR is undoubtedly the start of an industry-wide revolution, and many believe that the best is yet to come. Here, a number of professionals offer their thoughts on the rapidly changing post landscape and reveal how they achieve high-quality mixes in an evolving market.

Seb Juviler, studios director at SNK Studios
We had a fantastic 2017. Since opening Studio 7, a Dolby-certified commercials and trailers suite, almost exactly a year ago, we’ve regularly mixed 10 projects a week for cinema. We’re now also working on a lot more long-form audio and foreign dubbing work for high-profile clients, which is really exciting. In Studio 7 we’ve done quite a bit of work in Atmos and mixed work for VR experiences, although traditional 5.1 is still the preferred spec. Across the wider facility, however, we do loads of 3D mix work. We’re lucky enough to be part of the same group as Red Apple Creative, a specialist audio and digital agency who write the most incredible 3D audio scripts. A lot of that work is designed specifically for the big online audio platforms – where lots of people listen wearing headphones – so a perfect audience for 3D! We get to produce 99% of Red Apple’s work, so 3D and binaural is a huge thing for us. It’s a great time to be in the business. Along with our traditional TV, cinema and radio, digital has properly come of age now. With all the work on new platforms like Atmos, VR and binaural, we find that variety breeds opportunity. On the audio for advertising side, we’ve seen a gradual shift away from the traditional agency model towards one where the clients have more say; that seems to be much more commonplace now. We find ourselves on procurement lists now, which we never did before. The relationships we’ve built up with clients and production companies are invaluable; they will often request their agency use us for the audio, which is certainly a different dynamic.

Raja Sehgal, sound designer, Grand Central Recording Studios (GCRS)
Today we work in accordance with three major deliverables in our commercial work: EBU R128 -23LUFS (UK and Europe), -24LKFS (North America and other territories) and the generic 6PPM (everything else). EBU R128 -23LUFS and -24LKFS give more dynamic mixes as they are based on average loudness, whereas 6PPM is more compressed and limited. This means working in different styles as, with R128 -23LUFS and -24LKFS, you have more space to play with and can give the mix a more dynamic, rounded feel. The loudness standards also help to keep everything at a constant. Of course, many commercials have digital online versions and cutdowns for social media channels. We tend to use the 6PPM mixes as templates for levels, as the mixes will be tighter for laptop and phone use. Often, depending on the platform, these may get rendered to -1dB peak. We recently launched Audio Lab 2, a dedicated Dolby Atmos theatrical and third-order Ambisonics (TOA) immersive audio sound design and mixing facility. I believe we are one of the only sound companies in the world delivering Dolby Atmos projects for both theatrical commercials and trailers. For Dolby Atmos commercials, we deliver RMU files for Dolby to MXF-wrap, and they then send these on to the screen distributors. For trailers, we also do an Atmos recorder session, which has 9.1 beds in it, with the 118 objects recorded with their associated panning information. This means countries can take the trailer, revoice it in their language and put it back together again in Atmos, in a workflow that’s quick and precise.
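As a rough illustration of what those integrated-loudness deliverables imply, the small Python helper below computes the static gain needed to move a mix from its measured loudness (as a BS.1770-style meter would report it) to a -23 LUFS or -24 LKFS target, and checks the shifted peak against a -1dB ceiling. Real delivery also involves true-peak metering and loudness range, so treat this purely as a sketch of the arithmetic.

```python
# Back-of-the-envelope loudness normalisation: the gain needed to reach an
# integrated-loudness target, plus a peak check against a -1dB ceiling.
# A real delivery chain also measures true peak and loudness range.

def normalisation_gain_db(measured_lufs, target_lufs=-23.0):
    """Static gain (dB) moving a mix from its measured integrated loudness
    to the delivery target, e.g. EBU R128 at -23 LUFS."""
    return target_lufs - measured_lufs

def peak_after_gain(peak_dbfs, gain_db, ceiling_dbfs=-1.0):
    """Apply the gain to the measured peak and flag any ceiling overshoot."""
    shifted = peak_dbfs + gain_db
    return shifted, shifted > ceiling_dbfs

# Example: a commercial measuring -18.2 LUFS with peaks at -3.0 dBFS
gain = normalisation_gain_db(-18.2)          # -4.8 dB of attenuation
peak, too_hot = peak_after_gain(-3.0, gain)  # -7.8 dBFS, safely under -1 dB
```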
Steve Lane, sound engineer, GCRS & technical lead, GCVRS
Working on a wide range of different projects means working with a variety of formats and delivery methods, down to platform-specific file types. When it comes to VR, level measurements and mixing require completely different thinking. The user could be looking in any direction at any time, so it is important to make sure that, while the overall balance is good, the individual elements you are highlighting are also prominent enough to stand out when required. At the same time, it’s important that the audio is not overly loud at points when it is not intended as the focus of the user’s attention, which can mean a bit of a balancing act.
Jonathan Wales, re-recording mixer/chief technologist, RDR Sound
The explosion in delivery formats and the ramp-up in content creation over the last 10 years has, in some ways, led to a heyday in post production. However, along with the increased overall production has come deliverable schedules running into many pages of different formats and specifications of increasing complexity. We have also inherited all the integrated loudness specifications from broadcasters, which can make the creative process hard to align with the technical. EBU and ATSC specs proliferate across the industry, and thus the art of true “mastering”, which was not such an important part of the post process previously, has now become front and centre in everything we do. When we mix feature films for cinematic distribution we are creatively lucky. We have an audience in a (hopefully) controlled environment, with tight specs, where we are able to use a very wide dynamic range as part of a large-format storytelling environment. It can be - and should be - an engaging, emotional and immersive experience for the audience. However this does not translate well to the home/second (third,
fourth) screen experience for the consumer. In that situation, wide dynamic range material results in constant grabbing for the volume control and, worse, loss of comprehension of the story. Therefore we are now making “nearfield” mixes for non-theatrical consumption as a normal part of the process. These mixes are performed on small speakers, in a truly near-field environment, and are mixed at significantly lower volumes to replicate the home experience. In a similar fashion to music mixing techniques, we also often review on smaller formats during the process. As such we are now employing real mastering techniques to communicate as much of the intensity of the original program as possible, without breaking through the overall level barriers dictated by our respective standards bodies. This is actually resulting in extremely high-quality content, because we are finally focused on the experience of the average consumer in the home rather than just relegating home video delivery to second-class citizen status. It’s also creatively fun - we are playing with multiple compression and EQ strategies, multi-band, dynamic EQ, bass management - in order to preserve the creative intent of the filmmaker and communicate it into the home.
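The simplest building block behind such nearfield masters is downward compression. The following Python sketch is an illustrative, single-band, hard-knee compressor – far cruder than the multi-band and dynamic-EQ chains Wales describes, and not his actual toolset, but it shows how loud passages are pulled towards quiet ones so a mix survives low-level listening.

```python
# Illustrative single-band, hard-knee compressor: far cruder than the multi-band
# and dynamic-EQ chains described above, but it shows how loud passages are
# pulled towards quiet ones for a 'nearfield' master.
import math

def compress(samples, sample_rate, threshold_db=-20.0, ratio=3.0,
             attack_ms=10.0, release_ms=200.0):
    attack = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env_db = -96.0                       # smoothed signal level in dB
    out = []
    for s in samples:
        level_db = 20.0 * math.log10(max(abs(s), 1e-9))
        coeff = attack if level_db > env_db else release
        env_db = coeff * env_db + (1.0 - coeff) * level_db
        over = max(0.0, env_db - threshold_db)
        gain_db = -over * (1.0 - 1.0 / ratio)   # gain reduction above threshold
        out.append(s * 10.0 ** (gain_db / 20.0))
    return out
```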
Doug Sinclair, sound mixer and co-founder, Bang Post Production
When we started as a company we were almost entirely providing sound editorial for TV drama, mainly for the BBC. We now cover every aspect from VO recording to full audio post production and mixing for major TV drama series and features for the global broadcast suppliers, as well as picture post for single features and documentaries. Dolby Atmos pushed us forward by allowing us to take Sherlock to the next level and brought us to the attention of those clients who are starting to deliver in Atmos. I think the best is yet to come as consumers become more aware of it. We also started out in the game industry working with an independent game developer on a POV sci-fi platform game. We got involved in the early stages of development with the game engine, audio implementation and the challenges of mixing in a 3D
space. Since then, we’ve completed projects for multiple platforms and are enjoying the challenge of using sounds creatively in an interactive environment to enhance the in-game experience. We’re continuing to build relationships with local and national developers and have some potentially exciting projects on the horizon. In terms of workflow, there is a great deal more being done via file hosting and streaming, to the extent that it has become an integral part of the production and sign-off approval process for final mixes. A little extra time has to be factored in to allow for the notes coming back, rather than having a discussion with someone in the mix theatre, but it really helps to break down geographical barriers and keeps things on schedule when key people are moving around. The industry keeps evolving to produce an increasingly high standard of work, while the best of the picture and sound work on TV on an average weeknight is often feature quality.
PRODUCT FOCUS MICROPHONES
MIC CHECK 1-2
With a wide range of versatile new products on the market, from wireless radio offerings to all-in-one modelling systems, microphones remain the cornerstone of a recordist’s arsenal. Here, we take a look at some of the most feature-packed models for capturing that sought-after sound in the studio.
Audix SCX25A
The SCX25A from Audix is a professional cardioid studio condenser microphone that utilises a patented capsule suspension system. Shock-mounted within an intricately machined brass ring, the capsule is completely isolated from the microphone body and electronics. By successfully minimising acoustic reflections and diffractions, the SCX25A delivers what Audix describes as a “pure, open-air sound with exceptional detail and realism”. The SCX25A responds consistently to on- and off-axis signals and exhibits impressive phase coherence and minimal proximity effect. The microphone has a wide cardioid polar pattern, handles SPLs in excess of 135 dB and provides up to 20 dB of ambient noise rejection. Affectionately referred to as ‘The Lollipop’, the SCX25A is designed to capture the sound of vocals, as well as acoustic instruments such as pianos, guitars, vibes, woodwinds, brass, percussion toys, drums, and orchestra and symphony sections. www.audixusa.com
Neumann U 67 re-issue
On its launch in 1960, the U 67 was quickly adopted as the new studio standard because of its versatility and sound quality. It was the first microphone equipped with the famous K 67 capsule, which has since become associated with “the Neumann sound” and continues to be used in its successor, the U 87 A. No less important is the U 67’s tube circuit, which features a pre-emphasis/de-emphasis scheme to minimise tube hiss. The U 67 was also the first microphone to address modern recording techniques such as close miking, with its switchable low-cut filter compensating for the proximity effect occurring at short recording distances. Using its pre-attenuation, the U 67 can handle high sound pressure levels of up to 124 dB without distortion – and much more if users do not mind a bit of “tube grit”. Sonically, the current re-issue is identical to the U 67 made from 1960–1971. It uses the same capsule and electronic design, while some key parts, such as the BV 12 output transformer, have been meticulously reproduced according to original documentation. www.neumann.com
Lauten Atlantis FC-387
The Lauten Atlantis FC-387 is a multi-pattern and multi-voicing large-diaphragm FET studio condenser microphone designed for recordists looking for a modern FET sound. The voicing switch makes the Atlantis particularly diverse. (F) Forward offers a very open response and a contemporary, revealing sound. (N) Neutral is a more even response, with slight de-emphasis on mid and high frequencies for a more classic sound. (G) Gentle offers the ability to tame harsh, bright or very forward sources, or to achieve a very vintage sound. The multi-voicing feature means there is no need to swap out different microphones or EQ on the way in; simply flip the switch on the rear, compare the various timbre qualities and start recording. The Atlantis also features Lauten’s three-position attenuation and gain switch. The +10dB gain setting boosts the microphone’s output, making it less dependent on preamps, while the -10dB pad gives users the option to record very loud sources. www.lautenaudio.com
Sennheiser Digital 6000
The Digital 6000 Series of radio microphones brings top audio quality and solid RF wireless transmission to demanding live productions. The series uses the same long-range mode and Sennheiser Digital Audio Codec as the Digital 9000, and comprises a two-channel receiver in two different versions, a bodypack and a handheld transmitter, as well as a rack-mount 19" charging unit. The SKM 6000 wireless live vocal microphone is said to deliver more channels and better transmission performance via the intermodulation-free handheld transmitter, with maximum spectral efficiency and huge signal reliability. Compatibility with Sennheiser and Neumann capsules is via the Sennheiser standard capsule interface, and up to 5.5 hours of trouble-free operation is assured with a lithium-ion battery pack. www.sennheiser.com
Audio-Technica AT5047
Based on the distinctive four rectangular diaphragm design of the AT5040, the AT5047 is a cardioid condenser with a transformer-coupled output that delivers a smooth sonic character and ensures high SPL handling without the risk of overloading mic preamps or console inputs. With its ability to cope with wide variances in dynamic range, the latest model is designed to capture everything from snare drums to vocals, guitar amps and brass instruments. The AT5047 is crafted from aluminium and brass, with an advanced internal shock mount that decouples the capsule from the microphone body. The custom-designed AT8480 mount, included with the AT5047, also ensures isolation from knocks and transmitted vibrations in the studio. www.audio-technica.com
Townsend Labs Sphere L22
The Sphere L22 microphone system from Townsend Labs models the characteristics of sought-after large-diaphragm condenser microphones and allows selection of different mics and patterns even after tracking. The Sphere system consists of a high-precision dual-channel microphone which, when paired with the included Sphere DSP plug-in (UAD, VST, AU, AAX Native), accurately models the response of a wide range of mics, including transient response, harmonics, proximity effect and three-dimensional polar response. Mic models include a 47 (with VF14 tube), a 67, an M49 and a C12. Using a dual-capsule microphone with dual outputs makes it possible to capture the soundfield more completely, including the directional and distance information otherwise lost with a conventional single-channel microphone. This allows the Sphere system to precisely reconstruct how different microphones respond to the soundfield. www.townsendlabs.com
Schoeps V4 U
The V4 U is Schoeps’ studio vocal microphone, the aesthetic of which is based on the Schoeps CM 51/3 from 1951. It is nonetheless a thoroughly modern studio microphone, its capsule, circuitry and mechanical construction the results of extensive new development. The small-diaphragm capsule architecture, with bevelled collar, controls the polar response, which results in a warm and clear sonic character with a smoothly rolled-off diffuse-field response, according to the company. Additionally, a newly developed bridge-type balanced output circuit can handle a maximum sound pressure level of 144dB SPL and features high immunity against electromagnetic interference. www.schoeps.de
DPA d:dicate 2011
The d:dicate 2011 twin-diaphragm cardioid mic, acclaimed for its balanced close-miking ability and high-SPL handling, is optimised for onstage use as well as for studio recording sessions. Two opposite-facing miniature capsules, placed into a double-diaphragm, one-capsule composition, are custom rebuilt inside this model. These capsules provide fast impulse response and large frequency bandwidth, and are loaded onto the d:dicate microphone series preamps, which give the sound more air and precision. The d:dicate 2011 mic capsule can be combined with any of DPA’s preamps. It pairs particularly well with the MMP-A mic preamp, an “ultra-transparent”, transformerless preamplifier with active drive for impedance balancing, which has a slightly softer character than the other preamps in the d:dicate microphones series. www.dpamicrophones.com
Earthworks SV33
Earthworks’ first ever studio vocal microphone, the front-address SV33 features a 30Hz-33kHz frequency response and a 145dB SPL rating. The SV33 has a large sweet spot with 140 degrees of freedom for vocalists. Its off-axis performance is also impressive, rejecting off-axis sounds in a way that remains natural and uncoloured. The SV33’s proximity effect is well behaved. While it does get warmer and boosts the bass as the user gets closer to the microphone, it does not become boomy or uncontrolled. As the mic moves away from the source, there are slight changes in volume, yet the SV33’s low end remains intact and there is little fluctuation in tone. “Discrete Class A circuitry ensures a pure and clean signal path with zero compromise,” according
to Earthworks. “With robust low end, smooth mids and airy highs you will get an open, lifelike sound. Effortlessly positioned in the mix, vocals are full of vibrancy and intimacy that bring a track to life.” www.earthworksaudio.com
PRODUCT REVIEW
LEWITT LCT 540 S
Engineer and producer Ross Simpson tests out the latest creation from Austrian microphone firm Lewitt.
My first experience of Austrian microphone designers Lewitt was nearly a year ago, when I reviewed their innovative dual-output LCT 640 TS condenser. I was very impressed with the next-gen design, quality engineering and remarkable results for such a reasonable cost, making it a real option in a crowded market. Founder and CEO Roman Perschon’s latest creation, the LCT 540 Subzero (LCT 540 S in Europe and the UK), follows in similar steps to the LCT 640 TS in terms of unimpeachable build quality, zinc die-cast enclosure and full set of supplied accessories. But whereas the 640 TS benefits from the flexibility of an infinitely adjustable polar pattern, the 540 Subzero has been created with a fixed cardioid 1" capsule and just one diaphragm, meaning you’d need to buy two of them (with Lewitt’s Perfect Match Technology) to achieve stereo recordings. So why might you want to add the 540 S to your list? Well, the clue is in its full name: the LCT 540 Subzero!
A Bold Statement
It can be all too easy to get caught up in marketing ‘mumbo-jumbo’, and Lewitt are no exception to the rule when it comes to the LCT 540 Subzero, as their main headline claim for this mic is ‘Better Than Human Hearing’. They claim that “comparing the hearing threshold to the self-noise values of the LCT 540 Subzero, you can clearly see that our microphone is always below the threshold of hearing”. I’m not sure about you, but when I read claims like this I can be a little dubious; after all, how can we prove it? In fact, Lewitt built a unique measurement device weighing in at over 4,500kg (around the same as two large modern 4x4 cars). This “heavily dampens frequencies up to a 1/10,000th of their sound pressure. The whole frequency range of a microphone from 20 Hz to 20,000 Hz is dampened by 60 dB, up to 80 dB on the higher frequencies.” All rather impressive, but these kinds of technical figures can become a little overwhelming for some, and can perhaps detract from what is really important: how does it actually sound, this low-self-noise and yet highly sensitive newcomer?
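For context, decibels and pressure ratios are related by 20·log10, so the quoted figures are self-consistent: 60dB of damping corresponds to a thousandth of the sound pressure, and 80dB to the claimed ten-thousandth. A two-line check (just the arithmetic, not Lewitt’s measurement method):

```python
# Just the arithmetic, not Lewitt's measurement method: decibels and
# sound-pressure ratios are related by 20 * log10(ratio).
import math

def pressure_ratio_to_db(ratio):
    return 20.0 * math.log10(ratio)

print(pressure_ratio_to_db(1 / 1_000))    # -60.0 dB
print(pressure_ratio_to_db(1 / 10_000))   # -80.0 dB
```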
Key Features
Accessories: shock mount, magnetic pop filter, protective case
Sensitivity: 41 mV/Pa, -28 dBV/Pa
Perfect Match Technology: sensitivity matched at 1 kHz
Max SPL: 136 dB SPL, 0 dB pre-attenuation
RRP: £629.00 ($871.00)
www.lewitt-audio.com
In Use
Opening the bomb-proof Pelican case, you are presented with a neatly set-out, well-protected set of compartments which house Lewitt’s own structure-borne shock mount, their innovative magnetic pop filter, a foam filter and, of course, the 540 S. The mic employs, as with others in the Lewitt range, silently operated flush buttons on its face, with corresponding white and red LED lights to clearly indicate the set-up choices for low-cut filters, pre-attenuation, and clipping indication and history. My first test was to see, in real-world terms, whether the claimed low self-noise was just marketing hype or a noticeable and usable factor. Using a simple signal generator in the booth, I matched the gain of the 540 S, Lewitt’s very own LCT 640 TS and a benchmark mic we all know rather well, the Neumann U87ai. Without feeling the need to quote figures, I found the 540’s noise floor exceptionally quiet indeed – so quiet, in fact, that the U87 felt particularly loud in this test, with the 640 TS sitting somewhere between the two. Obviously this was all subject to the room, the preamp and the converters being used, but the differences were relatively huge, and I was confident of my results. All very impressive, but what does this mean
in everyday studio applications, and does it all matter? Put simply, yes, because this gives us a blank canvas to work from. This mic offers new possibilities when recording, as noise issues only really arise during the mixing stage, when compression is applied. As we have all experienced, the mixing process boosts previously inaudible noise, which can cloud a delicate vocal line or a sustained acoustic guitar note at the end of a song. All this is far less of an issue with the 540 S, even when capturing the most delicate of sound sources. The second test was using the microphone for a voiceover; this particular project was for an audiobook with no underscore. My first impression was how rich and natural this mic sounded on quiet vocals: the sound was strong and full-bodied in tone without being clinical, reflecting the meticulous build quality we’re coming to expect from Lewitt. Next was an acoustic guitar (my favoured approach to this is mid-side, but for this test I used just the 540 S on its own). Again, the sound was very natural, but the detail for a large-diaphragm mic was impressive, whilst retaining a warmth that very few mics can achieve with this level of clarity.
We recorded several other sources, all with very positive results, including a floor tom played by a rather enthusiastic young player. There were no concerns about SPL (the maximum on this mic is 136 dB). The tone was big and rich with a good bit of top-end snap. Overall its technical ability is sublime; it sounds warm, clear (yet not clinical), smooth and dynamic. I would look forward to using a pair for a live classical recording, lessening the need for noise-reduction tools at the mix stage. Thank you to Lewitt for pushing innovations and setting new benchmarks for microphone manufacturing. At the price point of £629.00 this is a brilliant achievement, so much so that the bigger, well-established players need to pay attention. I wish Lewitt all the best and cannot wait to see what they come up with next!
The Reviewer
Ross Simpson is a sound engineer, composer, producer and vocal coach with 20 years of experience in the professional audio business.
PRODUCT REVIEW
PMC RESULT6
Simon Allen evaluates these new active studio monitors from PMC, the speaker company founded by Peter Thomas and Adrian Loader.
Key Features
Frequency Response: 45Hz – 22kHz
Dimensions: H380 x W199 x D360 (mm)
Input Connectors: balanced analogue XLR, wired pin-1 screen
Weight: 8kg (17.6lbs)
RRP: £2,394.00 ($3,314.00) per pair
www.pmc-speakers.com
It’s a well-known fact by now that our industry has grown outwards. We’re in the cottage-industry era of the pro audio scene, where we witness even top-level, seasoned pros now working out of their own smaller facilities. One reason for this, of course, is developments in the technology and the tools we use. Ultimately, the demand has risen for more affordable kit, and PMC had to find a way to get in on the action. This posed a small challenge for PMC, thanks to the nature of the beliefs and design principles that have secured so much respect for the brand. How were they going to achieve it? With their rather British mindset, cutting corners simply wouldn’t do. The Result6s therefore employ some new technologies and design features to make this all possible. The big question, however, is: do they still harbour the PMC sound?
Staying True
On paper, the Result6s are full-range, near- to mid-field active studio monitors with a frequency response of 45Hz to 22kHz. They have a 6.5" woofer and a 27mm tweeter, with Class-D amplifiers of 100W and 65W respectively. So what makes these part of the PMC family, and have they had to sacrifice anything in order to bring the price down? To answer that, you probably have to compare them to the next most affordable model in the PMC range with a similar driver size, which brings you to the much-loved twotwo6s. The major difference is that the twotwo range sports digital crossovers and some degree of parameter control via DSP processing. The Result6s use an analogue input with a simple attenuation control on the rear panel. The crossover in the Result6s is in fact analogue too, which seems quite rare these days. The concept, therefore, is for the Result6s to deliver as close to “correct” as possible, straight out of the box. Unsurprisingly, to help achieve this PMC have used their well-known advanced transmission line (ATL) technology. This instantly earns the Result6s a place in the PMC family, and I explain what it achieves for this model in my listening experience later. I believe there may be a small trade-off with this ATL technology for some prospective buyers, and that is size. Typically, if you’re thinking about entry-level reference monitors and a driver size of 6 inches, you probably imagine a compact unit. While the front face of the Result6s isn’t huge, the side profile is quite deep. Presumably this is because of the size of the ATL chamber needed inside the cabinets. This may be a consideration for those in very tight spaces, or with shallow speaker mounts in their studio. Talking of placing these monitors, PMC have integrated some de-coupling into the cabinet design. Two wide bands of rubbery material run around the exterior of the boxes, so that there is separation between the monitors and
This is an excellent idea and looks very neat too. If, however, as I discovered, your speaker stand or shelf is a narrow surface, then these two rubber surrounds are too far apart. The sound was much better when the de-couplers were the only point of contact, which I suppose proves they work. Then come the new technologies we haven’t seen in a PMC product before. First there are the new D-Fins, the most visually distinctive aspect of these monitors, which I believe were designed by Oliver Thomas, son of PMC co-founder Peter Thomas. They’re designed to reduce diffraction of the high frequencies by the front baffle around the centrally placed tweeter, and also to enhance the driver’s HF dispersion, reducing the difference between on- and off-axis response. Finally, there is new technology in place at PMC for building the drivers themselves. The LF driver is laser-measured to ensure correct performance, and it is joined by a newly designed soft-dome tweeter; both are unique to the Result6. It’s clear PMC are drawing on their old school of thought with these monitors, but have employed new technologies to achieve the best results.
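For readers curious what a DSP-based crossover like the twotwo range’s actually does, here is a minimal illustrative sketch of a fourth-order Linkwitz-Riley split in Python. The 2kHz crossover point, 48kHz sample rate and scipy-based implementation are assumptions chosen purely for demonstration; this is not PMC’s filter design, analogue or digital.

```python
# Illustrative only: a generic 4th-order Linkwitz-Riley (LR4) crossover,
# the kind of band split a DSP-based monitor crossover performs digitally.
# The 2 kHz crossover point and 48 kHz sample rate are arbitrary assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000   # sample rate (Hz), assumed
FC = 2_000    # crossover frequency (Hz), assumed

# LR4 = two cascaded 2nd-order Butterworth sections per band.
lp_sos = butter(2, FC, btype="low", fs=FS, output="sos")
hp_sos = butter(2, FC, btype="high", fs=FS, output="sos")

def crossover(signal: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a mono signal into woofer (low) and tweeter (high) feeds."""
    low = sosfilt(lp_sos, sosfilt(lp_sos, signal))    # cascade twice -> LR4
    high = sosfilt(hp_sos, sosfilt(hp_sos, signal))
    return low, high

if __name__ == "__main__":
    t = np.arange(FS) / FS
    test = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 8_000 * t)
    low, high = crossover(test)
    # The 100 Hz tone should dominate the low feed, the 8 kHz tone the high feed.
    print("low-band RMS:", np.sqrt(np.mean(low ** 2)))
    print("high-band RMS:", np.sqrt(np.mean(high ** 2)))
```

The Result6 performs an equivalent band split entirely in the analogue domain, with no DSP or parameter control in the path.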
Listening Carefully
There’s a lot to love about these “affordable” PMC monitors. I tried them in three different studios, and the first thing that strikes you is their competence wherever you place them. So often I find myself tweaking the position of monitors a few centimetres one way or another; the Result6s, however, seemed to sound great straight off the bat in each room I tried them. I imagine this is partly due to the front-facing port, but thanks also to the new D-Fins. The stereo image is great, with the sweet spot being surprisingly wide. I tried the monitors in a horizontal orientation and the sweet spot seemed greatly reduced, which suggests the D-Fins are having a significant effect, as they mostly sit around the tweeter when the cabinets are vertical. Compared to other like-minded monitors of a similar specification, the power and punch of the Result6s is impressive. This must be a direct result of the dedicated 100W amplifier for the LF driver, leaving a separate 65W amplifier for the HF driver. If you’re planning on using these as mid-field monitors, you should still find there’s plenty of grunt. At the other extreme, the character of these monitors is still apparent at lower monitoring levels too. Then we come to the bass extension provided by the ATL technology, PMC’s signature party trick. The best description I can think of is “true”. Don’t expect these monitors to rumble the floor or sound larger than they look. What they do achieve, however, is a tight low end that rolls off gently. This in turn is likely to suit smaller studio spaces, as they won’t excite boomy frequencies as much as other boxes might. Besides bass extension, there is another benefit to the advanced transmission line: by allowing air to move more freely on the rear side of the driver than it would inside a regular cabinet, lower levels of distortion should result.
This I can certainly hear working, and it leaves these monitors sounding very capable even when pushed. The best part of the frequency spectrum from the Result6s has to be the top end. The new tweeter design is a hit with me; I might even go as far as saying they sound fresher than the twotwo range. With a good level of ‘air’ in the top end and a smooth analogue crossover, vocals sound wonderfully clear. I’m happy to report that they sound like a PMC, as they should, perhaps with a little element of “next-gen” in the top end.
Conclusion
PMC have achieved their goal of developing a cheaper model that is still worthy of being part of the PMC family. I’m sure their approach of retaining the fundamentals of their design and staying true to their beliefs will play a key role in the Result6’s success. Those who know the PMC sound will feel immediately at home with these monitors; if, however, you’re approaching them as the next step up from your existing monitoring, you will have to learn to trust these honest boxes.
The Reviewer Simon Allen is an internationally recognised freelance engineer/producer and pro audio professional with over 15 years of experience. Working mostly in music, his reputation as a mix engineer continues to reach new heights.
RØDELINK PERFORMER KIT
Alistair McGhee tells us how Rode’s new wireless mic kit performs in this in-depth review...
Rode have been just one of the leaders in the charge into radio mics in the 2.4GHz band. The attraction is obvious: no licence required, here, there or anywhere. All round the world, 2.4GHz is a come-as-you-are party for wifi networking. Rode dived in with their very popular Filmmaker kit, comprising a belt-pack transmitter and battery-powered receiver aimed at direct camera connection rather than necessarily involving a mixer. Now Rode have launched the Performer: this new handheld-based outfit is made in Oz, comes with a mains-powered receiver and offers tidy build quality at a pretty attractive price point. The Performer sports a Series 2 designation, reflecting improvements in the diversity performance of the system. Diversity came to
professional radio mics with a bang when Audio Ltd released the DX2000 system back in 1990. The idea is that the receiver has two tuners (if you do it properly) and two antennas; the receiver compares the strength of the two incoming signals and seamlessly switches to the stronger one, so you get the best quality all the time. With the advance of DSP, modern 2.4GHz systems can also search across the band for the least busy channel and hop back and forth to find the best operating frequency. Rode have obviously been working on their core RF technology here, and this really is the make-or-break factor for a wireless system: if your basic RF tech is no good, then your radio system, however fully featured, is not going to do the business.
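The two ideas at work here — diversity switching and channel hopping — can be sketched in a few lines of Python. This is purely illustrative and is not Rode’s implementation (which lives in hardware and DSP); the RSSI figures, channel set and congestion threshold are invented for the example.

```python
# Illustrative sketch only: the two tricks described above.
# 1) Antenna diversity: on every frame, use whichever antenna reports the
#    stronger signal (RSSI).
# 2) Channel hopping: if the current channel gets too busy, move to the
#    least-congested channel in the band.
# All numbers here are invented for demonstration, not Rode's design values.
from dataclasses import dataclass

@dataclass
class FrameReport:
    rssi_antenna_a: float   # dBm seen on antenna A
    rssi_antenna_b: float   # dBm seen on antenna B

def pick_antenna(report: FrameReport) -> str:
    """Diversity switching: take audio from the stronger antenna."""
    return "A" if report.rssi_antenna_a >= report.rssi_antenna_b else "B"

def pick_channel(current: int, congestion: dict[int, float],
                 threshold: float = 0.6) -> int:
    """Hop to the least busy channel once the current one is too congested."""
    if congestion[current] < threshold:
        return current                      # stay put while things are quiet
    return min(congestion, key=congestion.get)

if __name__ == "__main__":
    report = FrameReport(rssi_antenna_a=-62.0, rssi_antenna_b=-55.0)
    congestion = {1: 0.7, 2: 0.2, 3: 0.9}   # per-channel busyness estimates
    print("use antenna", pick_antenna(report))
    print("operate on channel", pick_channel(current=1, congestion=congestion))
```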
Key Features
Frequency range: 35Hz - 20kHz
Range (distance): up to 100m*
Max input signal level: 140dB SPL
Transmission type: 2.4GHz fixed frequency agile system
RRP: £399.00 ($551.00)
www.rode.com

The front panel of the RX-Desk has an on switch, a bright colour LCD screen and a scattering of buttons for selecting options. Up and Down buttons give direct control of the audio level from -20 to +20dB in 10dB steps. The remaining three buttons on the left-hand side of the screen pair the receiver with the transmitter, remotely mute or unmute the microphone, and change the operating channel.
“The Performer system offers a clever mix of features and engineering that delivers licence-free wireless in a decent package”

The output of the receiver is available on a line-level jack or balanced on XLR, with switchable level. The last button came as a little bit of a surprise, as I’ve been used to pairing systems with no manual input at all. The procedure with the Rode is to select a channel on the receiver, enter pairing mode and then hold down the pair button on the transmitter. Rode clearly don’t want the talent fiddling: the channel display and pairing button on the handheld transmitter are hidden inside the body, revealed by unscrewing the bottom half of the body cover. It’s a slight pain, but probably worth it, as you don’t really want your presenter toying with the pairing settings a few seconds before kick-off. One point about pairing is that it is similar to, but not identical to, setting frequencies on a more conventional radio mic. I’m guessing this is because of the channel-hopping technology built into all the 2.4GHz systems: the notion of working on a particular frequency just doesn’t exist. A word of warning, though. You can make a traditional radio mic fail by taking it out of range, but I have never known one change its frequency settings of its own accord; in other words, I’ve never failed to ‘pair’ a transmitter and a receiver. With a 2.4GHz system, however, when you fire it up from cold the TX and RX ends have to find and recognise each other, even if previously paired.
This process is automatic, usually almost instant and requires no user attention, but some systems are better at it than others. I found the Rode particularly robust in this respect, and having used systems where mics would occasionally be unable to pair without user input, this is a big plus for Rode. The receiver’s LCD screen displays pair status and channel number alongside signal strength, audio and output levels. The handheld transmitter has a nicely concealed on switch with an LED (orange for on, green for on and paired) on the bottom, and a big mute slide. To be honest I’m against mute switches on mics, having had quite a few painful mute-based experiences over the years, so I was pleased to see Rode have included a mute override button on the receiver, allowing the operator to mute the mic remotely or indeed unmute it if the talent has forgotten to do so. The Performer kit comes with a rechargeable battery in the handheld, charged via a micro USB port, and you can remove the pack and replace it with two AA cells at any time. That is another sign that Rode have listened closely to their users, offering the best of both worlds in terms of transmitter powering. But for every swing there is a roundabout, and the congestion that comes with the licence-free nature of 2.4GHz has proved too much for some systems I’ve tried. Rode recommend checking the
2.4GHz traffic with your mobile to see how many access points are in use, and they note that three heavily used points will be enough to have an impact on the range of your Performer system. I used the Performer in a local venue (150 seats) alongside a couple of other channels of 2.4GHz wireless mics, with more than three access points in operation, and had no problems; but the line-of-sight distance between mic and receiver was under 20m, so if you are trying to work at much longer distances I would recommend some pre-flight checks before going live. That is the reality of RF in the 2.4GHz band. The Performer system offers a clever mix of features and engineering that delivers licence-free wireless in a decent package at an attractive price. I found the audio quality at least equal to more expensive systems, and certainly a real bonus at the price. All in all: a good Performer.
The Reviewer Alistair McGhee began life in Hi-Fi before joining the BBC as audio engineer. After 10 years in radio and TV, he moved to production. Most recently, he was assistant editor, BBC Radio Wales and has been helping the UN with broadcast operations in Juba.
ANTELOPE AUDIO ZEN STUDIO+
Simon Allen praises this versatile new USB interface from Antelope Audio...
I remember when the original Zen Studio interface was launched by Antelope Audio, unmissable in its striking red outfit. At a time when interfaces were becoming more compact with greater connectivity options, the Zen Studio was the surprise that kept other manufacturers on their toes. The original Zen Studio’s 12 mic pres and handy mobile case made recording a band on location both feasible and simple. With the release of the Zen Studio+, I was keen to see what surprises Antelope had packed this time. At first glance there don’t seem to be many major differences, but from a company with a reputation for ‘mic-dropping’ in front of the competition, I knew this needed close attention.
What’s New?
Immediately we can see that the bright red colour has been changed to a smart silver-and-black design, leaving just the logo in red on top. I don’t think we should typically worry about the colour of gear, but in this case I think there is some significance. Clearly the Zen Studio+ is less recognisable as its predecessor, apart from the distinctive handle on the side. This does of course bring the unit in line with Antelope’s other interfaces in their perhaps overly extensive product line; the Zen Studio was starting to look adrift from the rest of the range, and was certainly the next unit due a revamp. The main significance of the colour change for me, however, is down to some new toys on the market from a certain brand name beginning with ‘F’. These aforementioned interfaces have taken the market by storm, and in my opinion lead the way in sonic quality. Antelope’s USP is different though, typically offering the solution to the question: “if only there were an interface that did x, y and z”.
There’s nothing new about the number of mic pre-amps this interface offers, but 12 is still four more than most other manufacturers offer in 1U. There are still D-Sub connections for additional line inputs and outputs, word clock connections, dedicated monitor and dual headphone outputs, two ADAT connections and S/PDIF. What’s new are two guitar re-amp outputs and built-in talkback on the front panel, plus a Thunderbolt port on the rear. This leaves the Zen Studio+ very well equipped and current. While these aren’t massive changes, I think they demonstrate how forward-thinking the original unit was, and still is to be honest. I can see how useful guitar re-amp outputs will be for many, especially those in project or home studios. The built-in talkback, however, I feel is key to the Zen Studio’s success. This was always built as a mobile unit that you could feasibly use to record an entire band at a rehearsal room, for example. Talkback is something I wish more manufacturers built into their products; otherwise you end up sacrificing one of the high-quality pre-amps for this mundane but necessary requirement. The big news for many, however, will of course be the Thunderbolt connection and improved on-board FPGA DSP power. Before you rush out and buy one though, Antelope still have some work to do. Unfortunately the Thunderbolt port is for Mac users only, with working PC drivers still being promised. What’s more, while other manufacturers are shouting from the rooftops that their Thunderbolt technology provides extremely low-latency connectivity, Antelope only advertise the additional channel count achievable via Thunderbolt. That being said, at least we have a very versatile unit here that does offer dual connectivity options. The market is looking more sparse every day for those who can only use USB interfaces, so this is definitely a string to Antelope’s bow.
Key Features
12 Class-A mic preamps
Front-panel controllable talkback with iOS/Android app implementation
64-bit AFC technology built-in
RRP: £1,970.00 ($2,722.00)
www.antelopeaudio.com

Once connected, the software which operates the Zen Studio+ is excellent. While confusing at first glance, the routing options are extremely versatile: you can run up to four independent mixes with built-in reverb and effects via the zero-latency DSP engine. Antelope also offer mobile control apps for that modern ease of use we’re becoming accustomed to.
Standing Strong
As I said earlier, I believe the ‘F’-word of interface brands, with their ‘C’-word range of Thunderbolt units, are sonically smashing it at the moment. The pre-amps on the Zen Studio+ are very good and were renowned on the original model, though I’m not aware of any significant improvements in this latest version. The clocking also appears to be the same, with only a small improvement to the converters. This isn’t a negative point, of course, because Antelope have had good, solid tech in this department for a while. If you’re considering one of these interfaces because of the multitude of selling points, then the sound of the hardware won’t disappoint. One of the areas which sets the Zen Studio+ apart is of course the on-board FPGA FX. This is something
that many brands are shying away from; as we all know, this is thanks to another nameless brand beginning with ‘U’. If you’re after a ‘cheaper’ alternative to hardware-based DSP solutions, a field which seems to be getting thinner than ever before, then Antelope are that alternative. For me, this is how Antelope justify the price of the Zen Studio+: it might not appear particularly cheap in comparison to some solutions until you realise all the on-board FPGA FX are free. Antelope have been releasing firmware updates for their hardware regularly, providing new internal effects, and users can make the most of these developments by loading the most recent firmware releases straight to their devices. Personally, I don’t see myself mixing ‘in the box’ and utilising this DSP power alongside my
DAW anytime soon. When recording, however, these are very powerful tools which bring back the art of tracking in a well-equipped studio, wherever you are.
Conclusion
The Zen Studio was certainly due an update, and the resulting Zen Studio+ will continue to fly the Antelope Audio flag high. There isn’t a shocking number of improvements in this new model, but there are some modern refinements. Unlike the dramatic entry the original unit made a few years ago, this one simply delivers everything we fell in love with, but for 2018. Thankfully Antelope Audio have recognised what an eye-opener the original was and kept the story alive. The entire Antelope Audio product range almost
fills the gaps left by every other manufacturer. Of those solutions, the Zen Studio will always be the original in my eyes. Next time you’re wondering what alternatives there are, take a look at Antelope Audio.
The Reviewer Simon Allen is an internationally recognised freelance engineer/producer and pro audio professional with over 15 years of experience. Working mostly in music, his reputation as a mix engineer continues to reach new heights.
Welcome to Broadcast 3.0 Audio Production
mc² 96 – Grand Production Console
With experience gathered over more than 40 years, German audio innovator Lawo is distinguished by its engineering and manufacturing of the most reliable and most advanced audio mixing consoles available. Originally developed for mobile and studio broadcast environments with zero tolerance for failure, Lawo consoles are also widely chosen for their audio quality in theater, studio and live performance applications. With the mc² 96, the mc² platform is pushed beyond limitations, culminating in a pure, unparalleled audio production tool dedicated to those who can tell the difference. Whether it’s intended to be used in broadcast, theater or recording – in each of these applications it stands out with a unique set of features. The mc² 96 provides optimized performance within IP video production environments with native SMPTE 2110 support. In addition, the revolutionary LiveView™ feature enables thumbnail previews of video streams directly in the fader labeling displays, and innovative mix-assist systems support the engineer in achieving outstanding results. Curious? Visit bit.ly/LAWOmc96 for details.
Join us @ PL&S: Hall 3.1 – #E89
LIME – Immersive Mixing Engine for mc² Consoles
Fully integrated control | Supporting all relevant 3D/immersive formats | Virtualized 3D for binaural headphone monitoring
Open bit.ly/2BLguHn for first-hand impressions from Grammy-awarded audio engineers Josiah Gluck, Laurence Manchester and others.
Download the mc² 96 brochure: bit.ly/LAWOmc96 | www.lawo.com
BACKBEAT
NAMM 2018 in Pictures
The latest edition of the ever-popular NAMM Show took place from 25-28 January at the Anaheim Convention Center in Anaheim, California, featuring an eclectic mix of companies and professionals from across the music industry. Here is a selection of pictures from this year’s global gathering...
1. NAMM 2018 opens its doors at the Anaheim Convention Center
2. QSC founders Pat Quilter and John Andrews, and current CEO Joe Pham, celebrate the company’s 50th anniversary
3. Steve Vai receives the Roland Lifetime Achievement Award
4. Thomas Dolby receives the Roland Lifetime Achievement Award – and then demonstrates how he wrote She Blinded Me With Science on vintage Roland keyboards
5. Grammy-winning engineer Andrew Scheps at the Waves Audio booth
6. Rob Thomas, FOH engineer for Third Eye Blind, recounts using VUE Audiotechnik’s boxes for the band’s fall tour
7. Legendary engineers Jack Joseph Puig (hat) and Eddie Kramer (blue shirt, right) at the Mix With The Masters booth
8. Flavor Flav discusses his career, from Public Enemy to his VH1 reality shows, at the Harman booth
TWEETS FROM THE PRO AUDIO WORLD
Follow us for the latest news and trends: Audio Media Intl @AudioMediaInt
Here are some of the best tweets from February, including activity from Europe’s premier AV show, ISE 2018...

ISE (@ISE_Show), Feb 9: “Thank you for making #ISE2018 innovative, inspirational and truly unforgettable!”

Astro Spatial Audio (@AstroSpatial), Feb 12: “Thank you to everybody who visited us at the @AlconsAudioUSA booth at this year’s @ISE_Show. #ISE #immersive #3Dsound”

Biamp (@Biamp), Feb 9: “Great to see plenty of our partners on the show floor during #ISE2018! @POLARaudio, @AVIXA, @PraseMediaTech. Save the dates for #ISE2019!”

AV Technology Europe (@AVTechEurope), Feb 13: “A BIG shout out to all our Best of Show Awards winners! Congratulations to @KramerElec @Barco @Crestron @HolovisInt @Inneos_LLC @LifesizeHD @Mersive @MuxLabInc @vivitek @PanasonicProAV & @PrysmInc. How do you feel? #AVtweeps #ISE2018”

Holovis (@HolovisInt), Feb 7: “That’s two #ISE2018 Best Of Show Awards in the bag, go team!”

Becky Vance (@BeckyWildwood): “You asked for it, we delivered! @madsoundguy is back with #HowToAV, talking tech, design and UX at #ISE2018”

Mariana Lopez (@Mariana_J_Lopez), Feb 13: “Our new binaural dummy head has arrived! It has now been named ‘Leopold’.”

EAW (@easternacoustic), Feb 7: “The EAW team had a great day at @ISE_Show. Stop by tomorrow and visit us at Stand M230. #ISE #EAW #Speakers #Sound #Audio #ProAV”

JBL Professional (@TheJBLpro), Feb 8: “Designed specifically for permanent installations, the VLA Compact offers long-throw capabilities in a more compact line array solution. See it now at @ISE_Show. #ISE2018”
The 8" driver with everything in perfect balance
We ♥ our FEA
Unique to Celestion, our bespoke Finite Element Analysis visualisation software brings together magnetic, mechanical and acoustical design. It helped us empower our CF0820BMB bass-mid driver with a perfect balance of low-frequency extension and sensitivity, making it ideal for use in 2-way cabinet configurations.
CF0820BMB
From a range of small-diameter, high-performance Celestion professional drivers
MANUFACTURERS Contact Celestion to request a sample
celestion.com