EDITORIAL
We begin the year by dedicating extensive space, through analysis and interviews, to one of the technological developments shaping the content industry throughout 2025: virtual production. With the pandemic as a turning point, and driven by the latest advances in LED screens, graphics engines, and artificial intelligence, an increasing number of companies are embracing these solutions, attracted primarily by cost reduction and the impressive creative possibilities they offer.
Having surpassed its initial boom, this technological innovation has reached the level of maturity needed to be applied in those scenarios where it is truly useful, no longer seen as a one-size-fits-all solution. Its implementation is also driving profound changes in workflows. For instance, new coordination spaces are emerging between different departments, thanks in part to the real-time visualization – even remotely – of the final product during the content recording process, a significant advantage enabled by this technology.
For all these reasons, it is certainly an ideal time to analyze its potential and, above all, understand how the market is leveraging it today. To this end, we have reached out to the technical directors of two pioneering production companies in virtual studios – Quite Brilliant and Dock10 – to share their innovations, outstanding projects, and future predictions from their respective areas of expertise: advertising and television.
In addition, in this issue, we explore the gaming industry through one of Europe’s most cutting-edge centers: the Esport Factory of maze GmbH. We have gathered some very interesting insights on this type of production, which we can already reveal to be wilder and more spontaneous than traditional linear television.
We hope you enjoy this selection of exclusive content, along with many more features we have prepared for you in this issue. On behalf of the entire team at TM BROADCAST, we take this opportunity to wish you a happy and ambitious 2025.
Editor in chief
Javier de Martín editor@tmbroadcast.com
Key account manager
Patricia Pérez ppt@tmbroadcast.com
Creative Direction
Mercedes González mercedes.gonzalez@tmbroadcast.com
Chief Editor
Daniel Esparza press@tmbroadcast.com
Administration
Laura de Diego administration@tmbroadcast.com
Published in Spain ISSN: 2659-5966
TM Broadcast International #137 January 2025
TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43
Virtual production takes a step forward
Advancements in LED screens, graphics engines, 3D experiences, and artificial intelligence are driving this technology to unprecedented levels of quality
VIRTUAL PRODUCTION IN ADVERTISING | QUITE BRILLIANT
Interview with Russ Shaw, Head of Virtual Production: “Studios are now using the technology for the right reasons, focusing on scenes where it truly adds value”
VIRTUAL PRODUCTION IN TELEVISION | DOCK10
Interview with Paul Clennell, CTO: “As well as the limitless creative possibilities virtual studios deliver, it also brings cost efficiencies, time savings, and environmental benefits”
TECHNOLOGY | VIRTUAL PRODUCTION
We are now witnessing one of the most important moments in the history of broadcast, film, and television. Compared with what happened in the past, it is very possible that the technological changes the audiovisual sector is currently experiencing are as important as, or even more important than, the transitions from silent to sound, from black and white to color, or from analogue to digital creation, among other milestones.
INTERVIEW | ESPORTS
TM BROADCAST explores the gaming industry through one of its key players, maze: the wildest side of esports
Interview with Felix Volkmann, Technical Director
The event will take place in Barcelona from 4 to 7 February and is estimated to attract more than 70,000 visitors from 170 countries
With the recent addition of the 8030, the latest member of the family, we bring to our pages a lab test of two microphones featuring the latest technology and quality for capturing clear, clean, and natural sound.
DDP integrates with Axle AI MAM to enhance media management
DDP Dynamic Drive Pool by Ardis Technologies announces the inclusion of Axle AI Media Asset Management (MAM) with every purchase of the DDP storage system in the Asia Pacific and Middle East regions. This collaboration enhances the DDP ecosystem with advanced media management tools, enabling users to:
› Generate proxies for streamlined workflows.
› Organize and catalog media assets efficiently.
› Utilize powerful search features to quickly locate files.
Elvin Jasarevic, Managing Director of DDP Asia & Pacific and Middle East, said: “We are thrilled to integrate Axle AI MAM with DDP. This partnership combines our advanced storage solutions with Axle AI’s robust media management tools, providing customers with an
innovative platform to simplify media workflows, enhance accessibility, and accelerate project timelines.”
Key features of the integration:
› Enhanced Media Management – Axle AI MAM offers cataloging, metadata management, and advanced search functionalities. Editing teams using Adobe Premiere Pro, DaVinci Resolve and Apple FCP can generate proxy media automatically, reducing reliance on high-bandwidth transfers and enabling seamless collaboration.
› Remote Access and Flexibility – Axle AI MAM provides remote access to streamable video proxies and assets, allowing teams to work from anywhere. This improves productivity and simplifies collaboration across distributed locations.
› Streamlined Workflows – Editors can use low-resolution proxies for review and editing, while high-resolution originals remain securely stored on DDP. This eliminates constant file transfers, optimizing bandwidth and efficiency.
› Integrated Axle AI MAM panel for Adobe Premiere Pro, and drag-and-drop Axle AI MAM app interface for DaVinci Resolve and Final Cut Pro.
› Optional on-premise AI capabilities and workflow automation – Axle AI’s Tags and Connectr software, both optionally available, can provide a range of key further on-premise capabilities including scene understanding, semantic vector search, speech transcription, face recognition, logo recognition and no-code workflow automation.
› Optional cataloging of R3D RAW, ARRI RAW, Blackmagic RAW, and Avid Media Composer MXF Op-Atom media – Axle AI’s optional Advanced Transcode and Pro modules allow native cataloging, search and proxy generation of a range of advanced media formats.
› Scalable and Reliable Storage – DDP storage is designed for media-intensive workflows, with features like caching, tiering, folder volumes, and hard link support to handle any scale of operation.
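Conceptually, the proxy workflow described above comes down to transcoding each high-resolution original into a small, streamable file that editors cut with, while the original stays on shared storage until conform. As a minimal, hypothetical sketch (the paths, resolution, and bitrates below are illustrative choices, not part of the DDP/Axle AI integration), such a transcode step could be expressed as a generic ffmpeg command:

```python
from pathlib import Path

def build_proxy_command(original: Path, proxy_dir: Path) -> list[str]:
    """Build an ffmpeg command that turns a high-res original into a
    540p H.264 proxy suitable for remote review and offline editing."""
    proxy = proxy_dir / (original.stem + "_proxy.mp4")
    return [
        "ffmpeg",
        "-i", str(original),               # high-res original on shared storage
        "-vf", "scale=-2:540",             # downscale to 540p, keeping aspect ratio
        "-c:v", "libx264", "-b:v", "1M",   # low-bitrate H.264 video
        "-c:a", "aac", "-b:a", "128k",     # compact AAC audio
        str(proxy),                        # proxy lands in a separate folder
    ]

cmd = build_proxy_command(Path("/ddp/originals/shot001.mov"), Path("/ddp/proxies"))
print(" ".join(cmd))
```

At conform time, the edit is relinked from the proxies back to the untouched originals, which is what makes the bandwidth savings possible.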
New firmware and PY64-D Dante interface card for Yamaha DM7 digital mixers are now available
These innovations will enable the creation of workflows for immersive audio experiences in live environments
Adding new features, the latest firmware update for Yamaha DM7 digital mixers is now available, alongside the new PY64-D Dante interface card. Together they add immersive functions, helping DM7 users to create a wider range of bespoke live and installed sound systems.
The latest DM7 Version 1.60 firmware brings remote control to AFC Image, Yamaha’s object-based immersive system, creating workflows for immersive audio experiences in the live environment. It also adds 5.1 surround panning and monitoring to the DM7’s Broadcast package expansion.
Through collaboration with Digital Audio Labs, the latest firmware
has also brought support for Livemix personal monitor systems while, in a first for Yamaha digital mixers, DM7 V1.60 firmware allows users to save the console file to either a USB memory stick or to the console’s internal storage. This adds extra flexibility for data backup.
Meanwhile, the new firmware includes a Bus Lock function for Yamaha’s MonitorMix application when used with DM7 series consoles. This adds security, allowing users to select which buses MonitorMix is allowed to connect to, reducing the risk of accidental connection to the wrong ones.
The new PY64-D card expands the Dante connection on DM7 series mixers with sample rate converter (SRC) functions, ensuring straightforward connections for complex systems, including devices using
different sampling frequencies. SRC ON and OFF modes are switched from the DM7 console.
Providing 64in/out at 96kHz, the PY64-D features primary and secondary connectors for redundancy, with daisy chain connections also supported.
“We are delighted that so many users are harnessing the advanced but intuitive features of the compact DM7 series. The new V1.60 firmware and PY64-D interface card make it an even more powerful and flexible solution for many different live and installed sound applications”, says Thomas Hemery, Senior General Manager at Yamaha’s Professional Solutions Division.
The PY64-D interface card and update to DM7 V1.60 firmware are available now. The free firmware update can be downloaded from (insert URL).
Krotos unveils an AI feature to generate audio ambiences from an image
Earlier this year, the company introduced AI Ambience Generator v1, a text-based tool that allowed users to describe a scene and receive an ambience preset
Krotos, a company focused on sound design software, announces the world’s first commercial image-to-sound feature as part of AI Ambience Generator v2 for Krotos Studio. This update enables filmmakers, video editors, sound designers, and game developers to generate audio ambiences directly from an image, saving time.
Earlier this year, Krotos introduced AI Ambience Generator v1, a text-based tool that allowed users to describe a scene and receive a unique ambience preset.
Now, by providing an image — whether it’s a film still, a video screenshot, or concept art — the AI generates a custom prompt. Users can refine this prompt, and Krotos Studio produces a tailored preset, transforming visuals into sound.
Arri announces the Artemis 2 Live camera stabilizer
It is part of the company’s solutions for the live production sector, expanded earlier this year by the release of the Alexa 35 Live – Multicam System
Arri introduces Artemis 2 Live, a lightweight camera stabilizer for live productions such as sports and event broadcasts. Providing an accessible entry-point into Arri’s Camera Stabilizer Systems (CSS) range, Artemis 2 Live is modular and upgradeable, allowing users to adapt their rig to different production demands and career pathways.
Arri’s Artemis 2 Live combines operational flexibility with an ergonomic, minimalist design, according to the company. A short post gives users more legroom and greater freedom to follow the fast, unpredictable action of live productions, even in tight spaces. New lightweight monitor brackets have been specially designed for live applications, with the base mount and monitor jockeys accommodating 7” monitors from brands including Transvideo, Smart Systems and SmallHD. A second bracket
supports the 5” ‘on air’ monitor often used on live production rigs.
Artemis 2 Live can be combined with a Tiffen Volt gimbal. With its modular system structure, Artemis 2 Live adapts to different power sources, making it possible to match whatever batteries are available at the shooting location and whatever camera is being used on the rig.
Artemis 2 Live is part of Arri’s solutions for the live production sector, expanded earlier this year by the release of the Alexa 35 Live – Multicam System. Arri makes no qualitative distinction between the live and cinema markets, so Artemis 2 Live components are interchangeable with those of Arri’s cine-style stabilizers. The needs of EFP workflows are met with features such as TALLY and LBUS, allowing hybrid lens control of both ENG and cine zoom lenses.
Users can upgrade their Artemis 2 Live to a fully featured Artemis 2 for film and drama productions, or to a Trinity 2, incorporating electronic stabilization for high-end applications involving more complex camera moves. And since brackets and accessories are compatible across the entire CSS product range, Artemis 2 and Trinity 2 owners can pare their rig down to a live configuration with the help of an Artemis 2 Live Conversion Kit.
Telos Alliance announces its VX Duo broadcast phone system is now shipping
It can be placed on any studio or control room surface, and up to three units can be placed side-by-side on a single rack-mount shelf
Telos Alliance announces its brand new VoIP broadcast phone system, Telos VX Duo, is now shipping and available for immediate delivery.
Announced in August 2024, just prior to the IBC2024 Show, VX Duo brings the same user experience and high-quality
audio as the flagship Telos VX Enterprise system at a price point that puts VoIP within the reach of any budget, as the company claims.
“Broadcasters worldwide have relied upon Telos VX VoIP phone systems to bring phone calls into their facilities and get them on the air with the highest possible audio quality for years,” said Robbie Green, Telos Alliance Product Manager. “VX Duo makes it possible for all broadcasters, even those with a modest
budget, to finally transition away from increasingly expensive legacy copper phone lines and reap the cost, reliability, and call quality benefits of a VoIP-based Telos VX system.”
Solving audio challenges in Ferrari film: insights from sound designer Chris Jojo
The specialist used, among other microphones, Sennheiser’s MD 421-II large-diaphragm microphone and AMBEO VR Mic
UK-based sound designer and sound recordist Chris Jojo has specialised in authentic car recordings for film, simulators and gaming. For the filming of Ferrari, which has been nominated for several sound design awards, he used, among other microphones, Sennheiser’s MD 421-II large-diaphragm microphone and AMBEO VR Mic, as well as the company’s evolution wireless plug-on transmitters and EK 6042 receivers to record the impressive sound of vintage Ferraris with impact and clarity. Jojo speaks about his way into vehicle recording and how he captured those engine sounds that make every fan’s heart beat faster.
“I have worked in the game software industry since 1992, when I joined Software Creations, a Manchester-based startup, as a composer, musician, sound designer and artist for their games”, shares Jojo in an interview with Sennheiser. Here, he had free rein to compose and underscore games, channelling his inspiration from the film industry. It was when he moved to Codemasters full time as a Senior Sound Designer in 2009 that he really began to focus on vehicle recordings.
“When I started with Codemasters, I had the opportunity to attend some recording sessions with Mark Knight, the Audio Lead on Codemasters DiRT3, and I really enjoyed the experience”, says Jojo. “I was fascinated by the process and the inherent challenges, especially in respect to miking very large, high-capacity engines that are incredibly loud in terms of SPL, like Formula, GT or certain Rally Car classes,
mitigating the effect of wind on the exhaust mics, mic placement and getting the best off-axis response”.
Soon, Jojo took ownership of car sound recording for the Codemasters Motorsports titles, ensuring the constituent components of the various Motorsports IPs were authentically captured to give the correct response for in-game action.
“With rally in particular, there’s a lot of interaction with the environment – primarily the car’s interaction with all terrain surfaces such as gravel, dirt, tarmac, mud, sand, etc. And when the player goes off race line, there are the impacts and collisions with elements within that environment”, he says. “There’s also dynamic weather: there might be snow, hail stones and all manner of rain conditions. It’s fantastic to have this huge scope of sound design components to integrate into the various facets of a rally game”.
Jojo also sources the cars licensed for recording, building upon relationships with motorsports teams, works, privateers, manufacturers’ heritage and private collections. Over the years, he has created an ever-increasing library of car recordings, including many iconic motorsports cars and rare historic marques. The relationships and trust cultivated with the owners is key – and not just for gaming: one contact directly led to his involvement with Michael Mann’s Ferrari film.
“I’d recorded a number of cars sourced through Nick Mason’s company Ten Tenths, an incredible collection that Nick has
acquired over the years and has competed with, often making appearances at Goodwood’s annual Festival of Speed. Nick co-curated the ‘Motion. Autos, Art, Architecture’ exhibition with Sir Norman Foster at the Guggenheim Bilbao; I was involved in the sound recording of the ten selected cars that chronicled the evolution of the automobile for a timeline audio-visual installation undertaken by Sennheiser. Nick’s legendary Ferrari 250 GTO and Bugatti T35 were amongst the cars I recorded”.
A few years down the line, Ten Tenths contacted Jojo to ask if he was interested in a Ferrari-based
recording project for a film production that involved some of Nick’s cars. “I had an inkling then that it might be Michael Mann’s long-gestating Ferrari biopic, needless to say I jumped at the opportunity”, exclaims Jojo.
The biopic follows a pivotal period of Enzo Ferrari’s career, as he tried to turn his company and life around in the run-up to the 1957 Mille Miglia, a thousand-mile open road endurance race. Centre stage are the cars, and the film has been nominated for several awards, including a BAFTA nomination for best sound design.
South Korea’s Daegu Concert House upgrades communications with Riedel
The installation by South Korean distributor Dasan SR features the all-IP Artist-1024 matrix and Bolero wireless intercom systems
Riedel Communications has announced that the famous Daegu Concert House in South Korea has upgraded its communications systems with a shift to Riedel’s intercom technology. The installation by South Korean distributor Dasan SR features Riedel’s all-IP Artist-1024 matrix and Bolero wireless intercom systems.
“The Riedel intercom solution has transformed our communication capabilities,” said Mi Chu Cho, Director Sound/Stage, Arts Department at the Daegu Concert House. “As an all-IP system, Riedel’s Artist-1024 is easy to configure and troubleshoot in real time, and it also provides superior audio quality and reliability. By eliminating coverage and performance issues, this upgrade to Artist and Bolero systems allows us to focus on delivering world-class performances rather than on our communications technology.”
Completed on Oct. 8, 2024, the comprehensive installation includes the Artist-1024 matrix, which offers support for up to 1024 ports. Complementing this are two RSP-2318 and four DSP-2312 SmartPanels, which
provide user interfaces via high-resolution, multitouch color displays. The Bolero wireless intercom system, complete with eight antennas and 10 beltpacks, ensures communication and voice transmission throughout the venue.
The Bolero system has proved particularly effective in overcoming the limitations of the venue’s previous intercom systems, according to Riedel.
The system’s robust RF performance prevents audio errors and dropouts, ensuring crystal-clear communication, even in the presence of large crowds — a crucial factor for the concert venue, as the company claims. Moreover, the Bolero system’s extended antenna coverage and longer battery life outperform those of alternative solutions while eliminating the need for additional hardware such as transceiver splitters.
In a key innovation for this project, Dasan SR created multiple virtual Bolero presets for each beltpack in the venue’s Grand Hall and Chamber Hall. With the Bolero network divided into different 2G4 hopping modes, users can seamlessly connect to the appropriate antenna in each hall. The Bolero beltpack automatically links to the designated ports for that specific hall. This intelligent configuration allows for flexible use of the beltpacks without the need for manual reconfiguration.
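The preset scheme described above amounts to a lookup from hall to RF configuration: each beltpack stores one virtual preset per hall, and selecting a preset joins the pack to that hall's hopping group and matrix ports. A schematic illustration in Python (the hall names, hopping-mode labels, and port numbers are invented for illustration and are not Riedel configuration data or API calls):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BoleroPreset:
    hall: str           # venue space this preset belongs to
    hopping_mode: str   # 2.4 GHz hopping group isolating each hall's RF
    ports: tuple        # matrix ports the beltpack links to in this hall

# One virtual preset per hall, stored on every beltpack (illustrative values).
PRESETS = {
    "Grand Hall":   BoleroPreset("Grand Hall",   "2G4-A", (1, 2, 3)),
    "Chamber Hall": BoleroPreset("Chamber Hall", "2G4-B", (4, 5, 6)),
}

def select_preset(hall: str) -> BoleroPreset:
    """Pick the preset for a hall, so the pack joins the right antenna
    group and ports without manual reconfiguration."""
    return PRESETS[hall]

preset = select_preset("Chamber Hall")
print(preset.hopping_mode, preset.ports)
```

The design point is that the per-hall mapping lives in configuration rather than in manual steps, which is what lets operators carry a beltpack between halls without touching its settings.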
“With the ability to create virtual profiles and deliver unparalleled RF robustness, we have resolved long-standing communication
challenges for the venue,” said David JooYoung Jeon, Project Manager for Dasan SR. “We’re proud to have facilitated this transformation, which not only enhances the Daegu Concert House’s operations but also sets a new benchmark for intercom solutions in Korean theaters. The success of this installation is a testament to the power of our experience and close collaboration with Riedel.”
The Daegu Concert House team first encountered Riedel products at KOSOUND 2023 and subsequently participated in multiple training seminars at Dasan SR headquarters. This hands-on experience played a crucial role in the decision
to adopt Riedel’s intercom ecosystem. “We are thrilled to bring Riedel’s state-of-the-art intercom technology to the Daegu Concert House,” said Vincent Lambert, Director of Key Accounts, Asia, at Riedel Communications. “This installation showcases the capabilities of the Artist and Bolero systems while also demonstrating our commitment to providing tailored solutions that meet the unique needs of performing arts venues. As a first-of-its-kind installation in South Korea, this project surely will have a significant impact on the country’s broader performing arts culture.”
Eswatini’s national radio upgrades to IP with Lawo Diamond consoles
The next phase of the modernization includes installing a centralized IP router, which will unify workflows across EBIS’s network
Eswatini Broadcasting and Information Services (EBIS) radio, the national broadcaster of the Kingdom of Eswatini, in Africa, has embarked on a technology upgrade with the installation of Lawo’s Diamond modular broadcast consoles. This installation, managed by South African systems integrator B&I (Broadcast & Installation Engineering (Pty) Ltd.), marks the start of a comprehensive shift to IP-based broadcasting, preparing Eswatini’s radio infrastructure for a digital future.
EBIS radio has invested in a setup of two Lawo Diamond consoles. Each of the IP-native Diamond consoles supports RAVENNA/AES67 audio-over-IP standards and is compatible with ST2110-30/-31 protocols, providing a foundation for IP-based integration. “The Diamond console’s intuitive interface and modular design have made it easy for our team to adapt,” says Mr. Sabelo Dlamini, Director of Broadcasting and Information Services. “This upgrade brings new creative possibilities and a fresh motivation for our team”.
The Diamond console offers a user-friendly control layout with customizable fader-adjacent color displays and motorized faders. Mr. Mabandla Bhembe,
a presenter at EBIS, praises the console’s ease of use: “The Diamond is incredibly intuitive. You can work quickly and comfortably—it becomes second nature in no time. The quality of the controls is obvious, from the smooth faders to the responsive buttons.”
B&I Engineering, based in Johannesburg, won the tender to install the consoles and provide staff training. Some Radio Eswatini team members traveled to Lawo’s headquarters in Rastatt, Germany, to train with the equipment—where they saw snow for the first time. Gunther Becker, CEO of B&I, notes, “The training and installation went smoothly, and the response from the EBIS team has been extremely positive. The Diamond console’s streamlined interface and high-quality build were very well received”.
Equipped with Lawo’s Power Core mixing engines, which provide expandable I/O connectivity including AES67, MADI, analog, AES3, and Dante, the Diamond consoles offer Radio Eswatini
an adaptable solution that will support the station’s growth in the years to come. The next phase of the modernization includes installing a centralized IP router, which will unify workflows across EBIS’s network and enable seamless content sharing between facilities.
“This IP-based infrastructure is a game-changer for our broadcasting capabilities”, says Mr. Dlamini. “It ensures that we’re not only equipped for today’s demands but ready for future expansion and innovation”.
Germany’s sports website kicker deploys the Bitmovin Player to elevate its streaming capabilities
Bitmovin, a provider of video streaming solutions, announces that kicker, a leading German digital sports publisher, has chosen the Bitmovin Player for Web to elevate its video streaming capabilities and drive revenue growth.
kicker is one of Germany’s most popular sports websites. Video is central to kicker’s digital offering. Following the growth of its content library, kicker decided to deploy a new video player to help it monetize its content more effectively and provide seamless playback on the web. kicker required a video player that could handle high viewer traffic, especially during major football events, while ensuring seamless ad integration across target countries.
“kicker is on an impressive growth trajectory, and it’s essential that we have robust video streaming infrastructures in place to ensure
we fulfill our strategic goals,” said Tobias Buhl, Product Manager at kicker. “Bitmovin is known for its high-performance Player and proven ability to handle high traffic volumes, deliver high-quality streams, and support revenue growth. Since deploying the Bitmovin Player, we have seen improved quality of experience and increased ad revenue. I highly recommend the Bitmovin Player to anyone who wants to deliver an exceptional viewing experience to their audience and grow their brand.”
By deploying Bitmovin’s Player, kicker can guarantee flawless playback quality on the web, including low latency and configurable adaptive bitrate streaming, as the company claims. The Bitmovin Player also provides ad-insertion capabilities such as a dedicated and customizable ad module, a customizable viewer experience,
and an extensible user interface, allowing kicker to tailor the viewing experience to align with their branding while enhancing user engagement.
Additionally, the Bitmovin Player provides broad browser coverage and is compatible with all major desktop and mobile browsers, giving kicker access to many features, including adaptive bitrate, picture-in-picture, multi-language audio, subtitles, and more.
“kicker is the go-to publisher for the very latest sports news and content among German-speaking audiences. As it continues to scale its digital content, it needed a robust player that guarantees flawless playback and the ability to monetize,” said Stefan Lederer, CEO and co-founder of Bitmovin.
“We are proud and excited that the Bitmovin Player for Web will deliver a world-class experience to millions of sports fans.”
Korea invests in Ikegami UHK-X700 cameras to broadcast esports contests
The government has been supporting and promoting this industry since the early years of this century
Gyeongnam Culture and Arts Foundation (GCAF), a promoter of cultural and artistic diversity, has chosen Ikegami UHK-X700 camera chains for integration into the Gyeongnam eSports Arena. Four systems have been delivered to televise esports contests for terrestrial and satellite broadcast audiences throughout South Korea. The purchase was negotiated in partnership with Ikegami’s Korean distributor, Dong Hwa AV.
Korea is an esports powerhouse with the third largest market size in the world after the United States and China. The government has been supporting and promoting the esports industry since the early years of this century by organizing
specific contests, establishing and operating a dedicated television production facility and building stadiums. GCAF is a public foundation located in Gyeongsangnam-do, southeastern Korea. It was established in 2013 to promote the region’s culture and arts, including event planning and management. The organization also provides educational programs to enrich the cultural lives of the country’s citizens.
GCAF chose Ikegami UHK-X700 cameras following tests of their ability to capture high-quality, low-noise images in the relatively dark lighting conditions usual in esports studio environments. The cameras enable the foundation’s production team to create and
deliver high-quality 4K-UHD content with consistent color rendition. Ikegami customers in Korea include four terrestrial television broadcast stations as well as satellite broadcasting channels.
The UHK-X700 provides the freedom to operate at 4K-UHD, 1080p or 1080i resolution in standard dynamic range or high dynamic range and standard or wide colour gamut. Content can be mastered in full 4K-UHD to ensure a long commercial lifetime for high-budget productions. The cameras can also be operated in multi-format workflows.
Designed for studio or outdoor applications, UHK-X700 cameras each incorporate three 2/3-inch CMOS 4K sensors with a global shutter, which provides freedom from rolling-shutter distortion and flash-banding artifacts. The camera is uniformly balanced and has a low centre of gravity to ensure comfortable operation on a pedestal, tripod, or shoulder.
At the core of the Ikegami UHK-X700’s electronics is an ASIC that encapsulates a wide range of high-grade video processing functions into ultra-compact component dimensions. Optional high frame-rate shooting at up to 2x speed in 4K or up to 8x speed in HD can be performed via the BSX-100 base station for applications such as capturing fast motion in sport or stage events.
Richard Welsh has been elected as SMPTE president
SMPTE, the home of media professionals, technologists, and engineers, has announced that Richard Welsh has been elected by its membership to serve as SMPTE president. Welsh formally began his term on Jan. 1 after serving as SMPTE executive vice president. His term will extend two years, to Dec. 31, 2026.
“I am honored to have been elected SMPTE president, and I am excited to work with the whole SMPTE family worldwide in supporting progress across our industry,” said Welsh. “Since the Society’s inception more than 100 years ago, bringing the moving image to audiences worldwide has been at the heart of SMPTE’s mission. With video devices in viewers’ pockets and content available to them on demand, SMPTE’s ongoing commitment to delivering the best possible integrity, quality, and experience in media to global audiences is more vital than ever. I’m thrilled to have the opportunity as SMPTE president to deliver on our vision for the media technology industries of unlimited creativity and experiences for everyone.”
Welsh is currently the senior vice president of innovation at Deluxe. He has been on the SMPTE board for more than 10 years, most recently serving a two-year term as SMPTE executive vice president. He has also served as SMPTE’s vice president of education and as governor for EMEA, Central/South America. Welsh is also on the board of IBC and the chair/co-founder of the volumetric asset management company Volustor.
SMPTE appoints Sally-Ann D’Amato as executive director
“My goal is to encourage more collaboration across sections to create more opportunities for members”, said the new executive director
SMPTE announces that Sally-Ann D’Amato has been named executive director by the SMPTE board of governors. D’Amato formally began this new role on Dec. 18, 2024, after acting as interim executive director since October.
“I’m honored to accept the role of executive director,” says D’Amato. “After more than two decades with the Society, I’m humbled to be chosen as its leader. I will continue to work toward a Society that is efficient, innovative, and united. My goal as executive director is to encourage more collaboration across sections to create more opportunities for members, strengthen the standards community, and reinforce the organization’s infrastructure. This will be enacted through a mission we’re calling ‘We Are All One SMPTE.’”
D’Amato joined the SMPTE family in 2001, working as an administrative assistant. She was promoted to executive assistant in 2003, and again promoted to director of operations in 2005.
In 2016, she became director of events and governance liaison. In this role, she was responsible for planning and executing events, and for working with the board on issues of Society governance and board activities. In October 2024, she became the interim executive director.
Virtual production takes a step forward
Advancements in LED screens, graphics engines, 3D experiences, and artificial intelligence are driving this technology to unprecedented levels of quality
Virtual productions are taking a step forward. Although this technology is not new and has been evolving over the past decade, recent developments in LED screens, graphics engines, immersive experiences, and artificial intelligence have propelled it to a new level.
These innovations not only fuel the imagination of creative teams, allowing them to create never-before-seen environments that blend the real and the digital; they also broaden the range of projects that can incorporate these technologies, thanks to improvements in efficiency and cost.
When a new technological breakthrough emerges, the industry typically rushes to explore the new possibilities it offers and how it can help companies optimize their workflows. Once the initial excitement subsides and a few years have passed, it becomes possible to assess its advantages and challenges more thoroughly and conduct a more accurate analysis of the current state of the technology.
Text: Daniel Esparza
This is precisely what we aim to do in this special feature on virtual production. For this, TM Broadcast International reached out to the technical leads of two pioneering studios in virtual production —Quite Brilliant and Dock10. From their respective areas of expertise —advertising and television—, they shared firsthand insights about the innovations they’ve implemented in this field, their most prominent projects, and their outlook for the future.
QUITE BRILLIANT
Interview with Russ Shaw, Head of Virtual Production: “Studios are now using the technology for the right reasons, focusing on scenes where it truly adds value”
Quite Brilliant is an independent virtual production studio based in London. Although it is primarily focused on the advertising sector, it has expanded its services to other industries, such as film. Its clients include the BBC, Starbucks, Pokémon, and Rolls Royce. In this interview, Russ Shaw, Head of Virtual Production, discusses the company’s key milestones, details its cutting-edge equipment, and shares insights into the trends that will shape the future of this market.
First of all, how has the company evolved in recent years?
Quite Brilliant is an independent virtual production studio based in London, originally established in 2012 as a traditional production company. While exploring various cutting-edge technologies over the years, we have positioned ourselves at the forefront of the real-time 3D and virtual production revolution.
Our work focuses on creating efficient, cost-effective, and sustainable workflows across multiple media sectors, including advertising, feature films, episodic content, and social media.
Our services now encompass a comprehensive range of virtual production solutions, including LED Volume Virtual Production (VP), on-site and on-location ‘pop-ups’, storyboard
breakdowns, virtual art department services, photogrammetry and Lidar, digital twinning, real-time asset generation, and still shoots. Leveraging advanced technologies such as AI, Unreal Engine, NeRFs, Lidar, and augmented reality, we enable productions to virtually transport shoots to any real or imagined location, significantly reducing the need for traditional location scouting and set building.
One notable achievement has been the production of the UK’s first carbon-neutral virtual production film, exemplifying our commitment to innovation and sustainability within the industry. Our dedication has also been recognised through accolades such as the 2023 APA ‘Best Use of Virtual Production’ award. Working alongside top UK agencies, production companies, and post-production houses, Quite Brilliant has collaborated with esteemed clients like the BBC, Starbucks, Pokémon, and Rolls Royce.
What has been your journey with virtual productions?
We were fortunate to have early experience with the principles of virtual production, undertaking our first shoot back in 2014 for the cosmetics brand Barry M. For this project, we utilized a Barco 4K laser projector to display moving product animations and environments onto a giant screen, while the talent performed in the foreground with minimal
props. At the time, there were no advanced tracking solutions available, so the camera movement had to be roughly synchronized with the environmental content.
Our second virtual production shoot took place on location, using LED panels. For Halloween, we transformed the windows of the private members’ club Annabel’s in Berkeley Square, London, with animated displays. These ranged from giant eyeballs giving the building a dollhouse-like appearance to scenes of crazed, dagger-wielding zombies. It was an unforgettable spectacle for passersby.
In 2020, we created a VP demo film at Rebellion Studios in collaboration with Universal Pixels, which became a catalyst for our deeper focus on virtual production technology. This journey led us to partnerships with Garden Studios and, subsequently,
Twickenham Film Studios, where we’ve achieved significant success, with hundreds of productions undertaken with our guidance and operation.
Your studio facilities are partially modular. Could you provide more details about this?
Operating primarily out of Twickenham Film Studios gives us access to multiple sound stages, some permanently set up for virtual production and others readily adaptable for bespoke solutions. Being partially modular allows us to customise the size and shape of the LED walls to meet specific production needs. This flexibility is crucial, as directors often require tailored setups to bring their creative visions to life and don’t want unnecessary restrictions or extra work in post-production.
“In the early days, there was often a misconception that projects needed to be entirely location-based or entirely virtual”
What are the main advantages for clients?
The modular setup provides significant advantages for them. With our designers working on-site, we can quickly implement content changes without the delays associated with remote uploads. Additionally, our existing technology infrastructure reduces the need for transporting equipment to remote locations, enabling faster testing and a more streamlined setup process.
That said, we are equally comfortable with building solutions off-site when required, as we prioritise the financial and logistical needs of our clients. For example, we’ve completed projects in both northern and southern England using local services. We also serve as virtual production consultants and content producers to some agencies in the UK and abroad.
“AI-generated content often lacks control and adaptability, meaning we still rely on traditional 2D visual effects software for subtle adjustments or artifact corrections”
Have you recently upgraded any of your studios or acquired new equipment to optimize your virtual productions?
We continuously review our hardware and take client feedback into account to ensure that our solutions remain both competitive and efficient. Currently, our Unreal render nodes are equipped with cutting-edge technology, including Intel i9-14900KS processors and RTX 6000 Ada Generation GPUs. This enables us to capture on our 8m x 4m virtual wall at up to 240fps without requiring additional cost uplifts to our standard rate card. The same is true for the in-house RED Komodo, 8K playback, and ATEM system.
Are you planning new investments for the near future?
Looking ahead, we have significant investments
planned for 2025, including the acquisition of new technology and major studio upgrades. While we are unable to share full details at this time, we can assure you that these advancements will be both fast and exciting.
Regarding the demand for this type of production, what trends have you observed in recent years?
It’s important to understand that virtual production is a tool, not a one-size-fits-all solution. In the early days, there was considerable hype surrounding the technology, and many productions rushed to explore its potential.
However, there was often a misconception that projects needed to be
entirely location-based or entirely virtual. This was compounded by the high costs of the technology at the time, which often reached seven figures.
As prices have decreased, the accessibility of virtual production has improved dramatically. Studios are now using the technology for the right reasons, focusing on scenes where it truly adds value. Lower-budget projects are increasingly adopting virtual production because of its efficiencies. For instance, LED screens can provide up to 70% of fill lighting, reducing the need for extensive lighting setups. Smaller crews are often sufficient, and the background content has become significantly more photorealistic, while the costs of creating it have dropped considerably.
“There is indeed a shortage of qualified professionals, primarily because the role requires a multi-disciplinary skill set”
Are you approaching new sectors, beyond your core focus on advertising?
Our founders have backgrounds in advertising and post-production, and we initially focused on offering cost-effective and sustainable solutions to the advertising industry. However, moving to one of the UK’s oldest film studios has sparked interest from movie line producers and directors, leading to a surge in enquiries from the film sector.
With the UK’s recently confirmed tax breaks for productions, along with recognition that content created for LED screens falls into post-production categories, 2025 is shaping up to be a significant year for our long-form projects. While advertising remains a core focus, we are expanding into other sectors as demand grows.
What can you tell us about the technical staff for these types of productions? Do you handle this task internally?
While we typically hire studio staff with strong 3D skills due to our use of Unreal Engine, internal training is essential to support live shoots effectively. Virtual production studios vary significantly, as they incorporate equipment from different manufacturers, each with unique software and setup requirements. For example, setting up LED walls, processors, camera tracking devices, media playback servers, and lighting controllers demands a deep understanding of many disciplines, from frame synchronisation down to network types and video and display cabling.
Our production crew are also highly experienced in camera setup and calibration. For clients bringing their own crew, we offer free technical recces, enabling them to experiment with both studio and personal equipment before the shoot. This hands-on preparation is invaluable for ensuring a smooth production process.
Do you perceive a shortage of qualified professionals in the current market?
There is indeed a shortage of qualified professionals, primarily because the role requires a multi-disciplinary skill set. However, the variety of tasks—ranging from designing assets for digital twin shoots to live content adjustments— makes working in virtual production both challenging and incredibly rewarding.
What can you tell us about Artificial Intelligence (AI) and how it is changing virtual production workflows?
AI is advancing at an extraordinary pace and has proven to be a valuable tool for creating photorealistic background environments under specific conditions. For instance, an AI-generated background can be created and approved in less than a day, compared to the three or more days required to build a comparable Unreal Engine scene.
“Advancements such as micro-LED technology and finer pixel pitch screens have significantly enhanced the realism of virtual environments”
However, there are limitations. AI-generated content often lacks control and adaptability, meaning we still rely on traditional 2D visual effects software for subtle adjustments or artifact corrections. Additionally, camera movement can be restrictive, as AI content often uses 2.5D layering to create a sense of depth and fails if pushed too far or held on for too long.
Could you share an example illustrating the disruptive potential of this AI?
Earlier this year, we utilised AI to create 21 unique environments for a five-day moving image and stills shoot, spanning locations from mountain tops to deserts. This approach was critical for meeting the project’s budget, timeline, and sustainability goals. By pixel-mapping lighting to physical setups, we also achieved far greater believability than traditional green-screen compositing, and the edit was completed much more quickly.
What direction do you believe this technology will take in the future? Do you foresee significant changes in this area in the coming years?
Looking back to 2020, the quality of Unreal Engine environments was just passable for real-time production, and LED screen resolutions were far from optimal. Directors of photography often relied on shallow depth of field to obscure background imperfections.
Today, advancements such as micro-LED technology and finer pixel pitch screens have significantly enhanced the realism of virtual environments. Unreal Engine has also improved, with innovations like Nanite for high-detail meshes and Lumen for enhanced lighting and shadow quality, resulting in sharper and more lifelike content.
In the near future, rendering quality will continue to improve, driven by faster GPUs and better connectivity standards like SMPTE ST 2110. This will enable low-latency, synchronized, HDR, and high-framerate streaming, reducing technical challenges on large LED walls.
We also anticipate growing interest from post-production companies through tools like Chaos Arena, which streamline the integration of V-Ray pipelines with LED walls without the need for preprocessing. These advancements will further enhance the efficiency and accessibility of virtual production workflows.
“Moving to one of the UK’s oldest film studios has sparked interest from movie line producers and directors, leading to a surge in enquiries from the film sector”
Tell us about a recent project that you are particularly proud of due to the technical challenges it presented
While every project we’ve undertaken has unique and interesting elements, a recent campaign for Nissan with City Studios and The Gate Films stands out. Over 60% of the TV commercial was shot virtually in the studio, featuring Manchester City’s manager Pep Guardiola. Despite his limited availability—just 90 minutes on set—he appeared in 20 shots.
The biggest challenge was the tight timeline, leaving no room for errors. To ensure success, we captured all the necessary plates a day prior to the VP shoot, creating a rough animatic for everyone to visualise the car’s position, camera placements, and background locations for continuity. Thanks to meticulous planning, an excellent director, DOP and experienced crew, the project was executed flawlessly, earning unanimous praise. See the project here.
DOCK10
Interview with Paul Clennell, CTO: “As well as the limitless creative possibilities virtual studios deliver, they also bring cost efficiencies, time savings, and environmental benefits”
Dock10 is a key reference in the United Kingdom for television content. Its strong focus on innovation has positioned it at the forefront of virtual studios, pushing the creative boundaries of the hundreds of programmes produced at its facilities using this technology. In this interview, its CTO, Paul Clennell, shares his insights into the company’s workflow, as well as the main technological developments from its R&D studio, dedicated to exploring new interactive, immersive and game-like forms of entertainment.
How would you describe the company’s development over the past few years?
In one aspect, dock10 has been pushing the creative boundaries of television in what our virtual studios do every day. At the forefront of pioneering this technology, dock10’s industry-leading 4K UHD-ready virtual studio capability has made hundreds of programmes across genres as diverse as sports, children’s, entertainment, corporate and comedy. As well as the limitless creative possibilities virtual studios deliver, they also bring cost efficiencies, time savings, and environmental benefits. All within a production space and process that is tried, tested and trusted.
As a pioneering television facility in the implementation of virtual production, what major advances have you achieved with this technology?
dock10’s next-generation virtual studio capability is a powerful creative toolset for delivering even
greater onscreen value - enabling the creation of even more content-rich sets. Our proven technology seamlessly combines physical, virtual and augmented reality in real time, together with live data. This enables our in-house designers to translate a production’s vision into spectacular photorealistic environments - importing sets from any 3D modelling package and enhancing them with easily sourced and adapted pre-made assets.
The results are entirely realistic, including the addition of shadows and reflections as well as the perfect integration of even single strands of hair. Cameras can be pointed in absolutely any direction across the whole of the studio to deliver a continuous on-screen set. Using the latest real-time rendering, you can see how each shot will look as it happens either live in the studio or via secure remote viewing.
“Using the latest real-time rendering, you can see how each shot will look as it happens either live in the studio or via secure remote viewing”
Last time we spoke with you, you mentioned that your approach to virtual production has been innovative, as you implemented this technology across all your studios to take on a wider range of projects. Could you remind our readers how you achieved this and if you have recently implemented any technological upgrades or acquired new equipment?
Compared to most studios, at dock10 we’ve taken a deliberately different approach: rather than build a single virtual studio we have installed virtual capabilities in every one of our ten television studios.
These use the latest real-time games-engine technology, Epic Games’ Unreal Engine with Zero Density, together with the Mo-Sys StarTracker camera tracking system. By centralising the systems in our central technical area (CTA) we give our customers the flexibility to choose the right size studio for their production – anything from 1,000sq ft to 12,500sq ft. But it’s about more than space and technology. Over the five years we’ve been using virtual studios, we have built up a brilliant team of highly experienced designers who are dedicated to creating fantastic virtual worlds, as well as a team of engineers and craft talent who really understand how to help productions get the most out of this amazing technology and deliver extraordinary on-screen results.
How do you train technical operators for this type of production? What challenges do you face?
dock10 established a dedicated Innovation Team which is now one of the UK’s leading R&D specialists for the TV production sector, actively exploring ways for the industry to adopt more interactive, immersive and game-like forms of entertainment at a time when traditional linear, 2D video is in decline, particularly amongst younger audiences.
Having delivered hundreds of hours of innovative content, the Innovation Team has earned a strong reputation for developing the bespoke tools and workflow required for cutting edge productions. It also plays an important role in educating and informing the sector about new ways to create content.
Headed up by Richard Wormwell, the full-time Innovation Team’s mission is to lead the future evolution of broadcast entertainment by developing innovative production techniques and leveraging emerging technologies. It has a deep knowledge of television production, real-time rendering, virtual production, AI and immersive technologies.
The team’s role is to help production companies create engaging content, whilst looking for ways to add levels of immersion and interactivity. By leveraging 3D assets and multi-camera virtual production, the Team is able to create content that is very different from traditional broadcast television.
“Compared to most studios, we’ve taken a different approach: rather than build a single virtual studio we have installed virtual capabilities in every one of our ten TV studios”
Could you share any recent project that you are particularly proud of due to the technical challenges it presented?
The Innovation Team is working to perfect the integration of technology that enables animated content to be rendered in real-time within a live, multicamera studio setting. This builds on dock10’s work on BBC Bitesize Daily, which made significant strides in animating motion-captured performers in live virtual studio pipelines. This technique was developed further in dock10’s work on Channel 5’s Dinosaur with
Stephen Fry, where pre-animated dinosaur actions were manipulated in the gallery to allow for new playback sequences to be created live in the studio.
Off the back of these successes the Innovation Team created a dedicated R&D TV studio, working closely with dock10’s engineering department and key external suppliers to incorporate numerous innovative technologies into one space. The UHD/HDR multi-camera studio features a three-wall infinity curve with multi-talent motion capture, face capture, optical talent tracking and video/audio recording capabilities.
The team’s first development within the R&D studio was DMX-VL, a real time software solution that allows for the control of both physical and virtual moving lighting from a single industry-standard lighting console. This helps to deliver infinite lighting possibilities for productions without additional kit, crew or cost.
“The team’s first development within the R&D studio was DMX-VL, a real time software solution that allows for the control of both physical and virtual moving lighting from a single console”
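The core idea behind a tool like DMX-VL, a single console output driving both physical and virtual moving lights, can be illustrated with a minimal sketch: the same DMX channel values are decoded once and mapped both to the raw 8-bit values a physical fixture expects and to engine-friendly units for a virtual light. The fixture profile, channel layout, and degree ranges below are illustrative assumptions, not dock10's actual implementation.

```python
def decode_moving_head(dmx_frame: bytes, address: int) -> dict:
    """Decode a hypothetical 3-channel moving-head fixture patched at
    `address` (1-based, per DMX512 convention): intensity, pan, tilt."""
    i = address - 1
    intensity, pan, tilt = dmx_frame[i], dmx_frame[i + 1], dmx_frame[i + 2]
    return {
        # The physical fixture consumes the raw 8-bit DMX values as-is
        "physical": {"intensity": intensity, "pan": pan, "tilt": tilt},
        # The virtual light gets engine-friendly units: 0..1 intensity,
        # pan/tilt mapped onto +/-270 and +/-135 degree travel ranges
        # (typical for moving heads; an assumption, not a standard)
        "virtual": {
            "intensity": intensity / 255.0,
            "pan_deg": (pan / 255.0) * 540.0 - 270.0,
            "tilt_deg": (tilt / 255.0) * 270.0 - 135.0,
        },
    }

# One 512-channel DMX universe; fixture patched at address 1
frame = bytearray(512)
frame[0:3] = bytes([255, 128, 0])   # full intensity, pan near centre, tilt at low stop
out = decode_moving_head(bytes(frame), address=1)
```

Because both targets are derived from the same frame, the operator never has to keep two lighting rigs in sync by hand, which is the efficiency the interview describes.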
What can you tell us about Artificial Intelligence (AI) and how it’s transforming virtual production workflows?
Following on from this development and utilising the DMX-VL technology, the team is now focused on real time ‘light transfer’ virtual production. Specifically, it is developing techniques for creating realistic shadows and reflections in multi-camera green screen virtual production. For years, this has proven to be a huge challenge for production. The solution is the development of a bespoke AI model. Called AI Composure, this proves that green screen virtual production can look better and be delivered more cheaply than LED volume stages; and it isn’t limited to the size of the LED wall. The original techniques developed in AI Composure are now being supported by Innovate UK as a £1.8 million R&D project that is looking to take the research into real time studio production workflows.
“Creating realistic shadows and reflections in virtual production has proven to be a huge challenge. The solution was the development of a bespoke AI model”
To know the present is to enjoy the future
Virtual production in broadcast
We are now witnessing one of the most important moments in the history of broadcast, film and television. Compared with the past, it is very possible that the technological changes the audiovisual sector is currently experiencing are as important as, or even more important than, the transitions from silent to sound, from black-and-white to color, or from analogue to digital creation, among other milestones.
By Carlos Medina, Audiovisual Technology Expert and Advisor
Technology has always set the pace in the field of leisure, entertainment, AV and shows. When I set out to write about Virtual Production (VP), drawing on my knowledge, my work experience and the research and documentation carried out for this subject, I realized that we are now going through a paradigm shift in broadcast/cinematographic AV production.
VP is not just a new machine or a set of equipment, innovative software, or an aesthetic trend, an art movement, and/or an executive business strategy. It is
clearly a turning point between before and after. My article entitled "Immersive LED Sets. A new way of audiovisual production" (AV Integración Audiovisual, October 22, 2024) is only the tip of a huge iceberg, one with an impact on the creative, technical and business aspects within the broadcast/cinematographic audiovisual sector.
VP is not just a new machine or a set of equipment, innovative software, or an aesthetic trend, an art movement, and/or an executive business strategy. It is clearly a turning point between before and after.
With the advent of VP, everything we previously knew, studied and developed in the working world is changing or is simply no longer valid: working times, executive and creative decisions, budgets, the technologies used, the profiles of the various professionals involved in the process, and even the way viewers and audiences consume content are all undergoing changes.
Virtual production is the whole process, from start to finish: from the development of an idea to the delivery of a commercialized audiovisual product (either finished by professionals or with the participation of a type of viewer that is becoming increasingly active and involved in the completion of the audiovisual product), through a continuously updated workflow.
Therefore, we will attempt to conceptualize what virtual production is, leaving open all the possibilities that will arise in the near future. Virtual production is a way of producing audiovisual material and content defined by just two factors: digital technology as a dynamic element in every phase of the process, and work carried out in real time.
This digital technology is present since the very origin of the audiovisual project and has an impact on what is done, how it is done, where it is done from, and how to enjoy the finished audiovisual material (fiction series, film, TV program...). It involves the full integration of the real environment and the virtual environment, thus facilitating the very evolution of the type of resulting film and/or TV program.
With regard to real-time work, the development achieved in engineering, robotics, computer science and communications has been decisive. In this sense, the capabilities of video game engines such as Unreal Engine, Unity, GameMaker, UbiArt Framework, CryEngine, Godot and others have enabled faster data processing and solutions, ranging from the most artistic or creative issues to the most complex problems in the field of programming.
The TV broadcast live streaming industry marks the beginning of virtual production, when real-time graphics are generated and broadcast live: for instance, in sports or election programs, where data are constantly changing and the graphics must be updated based on the course of the TV program's own content.
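The data-driven graphics described above reduce to a simple polling loop: on each update, the latest snapshot from a (here simulated) results feed is mapped onto the fields of an on-screen graphic. All names and the feed format are illustrative assumptions, not any broadcaster's actual API.

```python
from dataclasses import dataclass

@dataclass
class LowerThird:
    """State of an on-screen graphic, re-rendered on every data update."""
    title: str
    value: str

def render_state(feed: dict) -> LowerThird:
    """Map the latest data snapshot to the graphic's on-screen fields."""
    leader = max(feed, key=feed.get)           # party currently ahead
    return LowerThird(title="Election night",
                      value=f"{leader}: {feed[leader]}%")

def run_loop(snapshots) -> list:
    """Consume successive snapshots of a (simulated) live results feed.
    In a real gallery this would read from a socket or API and be paced
    to the programme's refresh rate."""
    return [render_state(feed) for feed in snapshots]

# Simulated feed: vote share changes as counts come in
snapshots = [
    {"Party A": 41.0, "Party B": 39.5},
    {"Party A": 40.2, "Party B": 41.1},
]
states = run_loop(snapshots)
```

The key property is that the graphic is a pure function of the latest data, so the on-air output can change the moment the feed does, with no re-rendering step in post.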
Virtual production is not exclusive, nor limited, nor does it have an expiry date. It includes, and will include, all the techniques and technologies that contribute to the creation of professional audiovisual content, thus allowing several advantages over traditional production:
> Nonlinear workflows (or pipelines). The work phases are fully integrated, with cross-functional team dynamics running in parallel. Teams can work from multiple locations, delivering immediate results of great technical and audiovisual quality. It's much more than just placing content on an immersive LED screen for a film shoot or TV show recording.
> Time and cost savings. Creation of stages, locations and visual scenographic resources without the need to travel, build and/or rent.
> A scalable technological solution for projects of different production scopes and with a wide range of narrative and creative possibilities, ideal for any client who wants to make a quality audiovisual piece.
The TV Broadcast live streaming industry marks the beginning of virtual production when real-time graphics are generated and broadcast live.
> A real-time process enabling ongoing viewing and experimentation. It makes evolution possible and changes can be made without major consequences, thus allowing the addition of new ideas on the fly. A more immediate, collaborative process, freer in regard to creative decisions.
> Integration of all possible resources with the aim of "seeing results", from script to marketing. Therefore, it is the combination of (real/virtual) techniques and various items of equipment used, such as capture devices (types of face-to-face/ remote/automated
cameras), image display devices (screens/LED modules), visualization and calibration systems (ACEScg, HDR, OpenColorIO...), visual content generation equipment (specialized hardware and software), chroma techniques and virtual scenography, 2D/3D tracking and monitoring solutions and systems (camera trackers using the FreeD protocol, such as Mo-Sys and Vive Mars, as well as Vicon, stYpe, OptiTrack, and EZtrack) and mapping, object motion capture systems (motion sensors), motion capture suits (MoCap), facial capture systems, and/or signal distribution and specific wiring.
> Immediate realism for the different areas of creation, especially in relation to the interaction with the cast, the atmosphere created by the director of photography, the type of content envisioned by the producers and, of course, the decisions of the filmmaker concerning the staging. ICVFX (In-Camera Visual Effects) is the term used in virtual production for letting the whole team see the final result and effects live, without having to wait for post-production: the result is shown in front of the camera itself, or on a computer at the same place and time as the recording/shooting.
> Changes, tests, and immediate artistic trials. A very high level of adaptive response to circumstances and decisions, in areas such as visual composition (framing, angles, perspectives...), colorimetry, lighting... Always in real time, without having to wait for post-production tweaks.
> Fake live, as a work method to obtain higher productivity ratios. Such "fake live" means generating valid material simultaneously with its creation, which can sometimes also be genuinely "live" when there is an audience enjoying it at the moment of generation, as in a TV program.
> A backward-compatible technological solution. Virtual production makes it possible to put all production models into operation, from face-to-face or local (on-site) production, the more traditional model known to all our readers, to off-site or remote production (REMI, Remote Integration Model), including online production and hybrid/mixed and automated/assisted production.
> High levels of return, despite the fact that virtual production involves a hefty economic investment. The first audiovisual works completed using virtual production are already rendering spectacular creative, technical and artistic results.
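Several of the camera trackers listed above (Mo-Sys, Vive Mars) transmit pose data using the FreeD protocol, a small fixed-size packet sent over UDP. As a hedged sketch, the encoder/decoder below assumes the commonly documented 29-byte "D1" layout: 24-bit big-endian fields, angles in 1/32768-degree units, positions in 1/64 mm, and a 0x40-based checksum. Treat the field scaling as an assumption and verify against your tracker's documentation before relying on it.

```python
def _s24(b: bytes) -> int:
    """Sign-extend a big-endian 24-bit integer."""
    v = int.from_bytes(b, "big")
    return v - (1 << 24) if v & (1 << 23) else v

def decode_freed_d1(pkt: bytes) -> dict:
    """Decode a 29-byte FreeD 'D1' tracking packet (assumed layout)."""
    assert len(pkt) == 29 and pkt[0] == 0xD1, "not a D1 packet"
    assert (0x40 - sum(pkt[:28])) & 0xFF == pkt[28], "bad checksum"
    return {
        "camera_id": pkt[1],
        "pan_deg":  _s24(pkt[2:5])  / 32768.0,   # angles: 1/32768-degree units
        "tilt_deg": _s24(pkt[5:8])  / 32768.0,
        "roll_deg": _s24(pkt[8:11]) / 32768.0,
        "x_mm": _s24(pkt[11:14]) / 64.0,          # positions: 1/64 mm units
        "y_mm": _s24(pkt[14:17]) / 64.0,
        "z_mm": _s24(pkt[17:20]) / 64.0,
        "zoom":  int.from_bytes(pkt[20:23], "big"),
        "focus": int.from_bytes(pkt[23:26], "big"),
    }

def encode_freed_d1(camera_id, pan, tilt, roll, x, y, z, zoom=0, focus=0):
    """Build a matching packet (useful for test rigs and simulators)."""
    def u24(v):
        return (v & 0xFFFFFF).to_bytes(3, "big")
    body = bytes([0xD1, camera_id])
    body += b"".join(u24(round(a * 32768)) for a in (pan, tilt, roll))
    body += b"".join(u24(round(p * 64)) for p in (x, y, z))
    body += u24(zoom) + u24(focus) + b"\x00\x00"   # two spare bytes
    return body + bytes([(0x40 - sum(body)) & 0xFF])

# Round-trip a synthetic camera pose through the assumed layout
pkt = encode_freed_d1(3, pan=-45.0, tilt=10.5, roll=0.0,
                      x=1000.0, y=-250.0, z=1800.0)
data = decode_freed_d1(pkt)
```

The graphics engine consumes this stream every frame, which is what lets the virtual background move in lockstep with the physical camera.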
VP has modified the four traditional phases of audiovisual production by incorporating new, highly specialized processes in each of them.
VP has modified the four traditional phases of audiovisual production by incorporating new, highly specialized processes in each of them (source: https://www.unrealengine.com/):
1. PREPRODUCTION.
VP basically allows the VISUALIZATION (VIS/VIZ) of content in the complex process of creative growth with the ability to display it in a more faithful, realistic and accurate way. This facilitates decision-making:
1.1. Pitchvis. A way of previewing content in order to secure the approval of a project when presenting (pitching) it to potential investors and/or financing companies.
1.2. Previs. Based on the script, the goal is to offer a visual reference of what should be shot live. It is the revamped version of the hand-drawn storyboard and the animatic, now built with computer-generated imagery (CGI). The advantage is that it allows filmmakers to experiment with different options for staging, the characters and their props, as well as art direction (lighting, camera placement and movement, stage direction and editing), without incurring the higher costs of real production. Virtual cameras (VCams) may be required.
In other environments, more typical of animation, the equivalent stage is known as Layout, which is responsible for staging, establishing and placing the action of each sequence (scouting).
1.3. Stuntvis (or action design). It combines the previs and techvis stages for action sequences and physical stunts. A high degree of precision is needed so that action designers can choreograph takes accurately. They can also enjoy greater creative involvement, and ensure a high level of safety for stunt doubles and stunt specialists. It is usually guided by the stunt coordinator or action choreographer.
1.4. Techvis. It allows the team to visualize and determine the exact technical equipment needed, in relation to the decisions made about real and virtual elements, before committing the equipment and material necessary for the shoot (cameras, cranes and motion-control rigs) or before the recording set is made available.
It is very useful for previewing the feasibility of shot designs within a specific location (e.g. where the crane will go, or even whether it will fit on the set), as well as blocking that involves virtual elements. It can also be used to establish how much physical scenery is needed instead of digital set extensions.
A novelty is the presence, from the project's outset, of new professional profiles dedicated to the generation of virtual content, such as the Virtual Art Department (VAD), as well as operators/creators/artists for pitchvis, previs and the like, and even riggers, shapers and lighters, among others.
2. PRODUCTION.
VP, always in real time, allows the creation of a workspace (the filming or recording set) where a real environment is combined with a virtual environment designed and adapted to the story being told. New work processes appear:
2.1. Live Action. It is the process where a camera is used to film/record the real image.
2.2. Live Compositing (live composition or simulcam). Thanks to this technique, directors can see CG elements composited with the live action during filming, which gives them a better idea of the type of content being captured.
2.3. Virtual Scouting (virtual exploration). It allows key members of the creative team (such as the project manager, cinematographer or production designer) to review a virtual location within virtual reality (VR), immersing themselves in the set to get a real sense of scale. It also facilitates the design of particularly difficult scenography and sets, and the choreographing of complex scenes: digital resources relate to the environment on a human scale.
2.4. Green Screen. The ever so popular green screen for chroma key.
2.5. Immersive LED Wall. This installation is a large screen made up of a multitude of contiguous LED modules, which makes it possible to generate backgrounds, images and immersive videos reaching 360º with a high level of realism (hyperrealism). The essence of recording/filming live the images shown on the LED screens is kept, together with other classic film/television resources such as the acting cast (star, group of actors/actresses, presenter), lighting (daytime effect, sunset effect...), camera equipment, props, costumes and sound capture, among many other things. This combination of LED technology and real scenery is also known as StageCraft. With ICVFX (In-Camera Visual Effects), real-time scenes are interactive and live, moving relative to the camera to create a realistic parallax effect so everything appears in the right perspective.
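The parallax effect mentioned above comes down to simple projective geometry: the wall must show each virtual object where the line from the tracked camera to that object crosses the screen plane. A minimal sketch (with a hypothetical flat wall on the plane z = 0 and hand-picked coordinates, not any engine's actual API) illustrates why the rendered background has to shift whenever the camera moves:

```python
# Toy model of ICVFX parallax: a flat LED wall on the plane z = 0,
# the camera in front of it (z > 0), virtual scenery "behind" it (z < 0).
# A virtual point looks correct on camera if it is drawn where the
# camera-to-point ray intersects the wall plane.

def project_to_wall(camera, point):
    """Intersect the ray from `camera` to `point` with the wall plane z = 0."""
    cx, cy, cz = camera
    px, py, pz = point
    t = cz / (cz - pz)          # ray parameter where z reaches 0
    return (cx + t * (px - cx), cy + t * (py - cy))

# A virtual tree 3 m behind the wall, camera 4 m in front of it.
tree = (0.0, 1.0, -3.0)
pos_a = project_to_wall((0.0, 1.6, 4.0), tree)
pos_b = project_to_wall((1.0, 1.6, 4.0), tree)  # camera moves 1 m to the right

# The tree's on-wall position shifts with the camera: that shift is parallax.
print(pos_a, pos_b)
```

Because the on-wall position depends on where the camera is, a static background would only look correct from a single viewpoint; re-rendering the imagery per tracked camera position is what makes the LED volume read as real depth in camera.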
2.6. AI Background Removal. A process and technique that removes the background from any video using Artificial Intelligence (AI) systems; Goodbye GreenScreen and Robust Video Matting are some of the technologies for background removal.
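Whether obtained optically from a green screen or predicted by an AI matting model, background removal ultimately produces a per-pixel mask that decides which pixels come from the foreground plate and which from a replacement background. The toy NumPy sketch below (a naive green-dominance threshold, far simpler than tools like Robust Video Matting) shows that basic masking-and-compositing step:

```python
# Toy chroma-key matte in plain NumPy: pixels whose green channel strongly
# dominates red and blue are treated as background and replaced.
# The threshold and test values are illustrative, not any vendor's algorithm.
import numpy as np

def chroma_key(frame, background, threshold=60):
    """Replace green-dominant pixels of `frame` with `background` pixels."""
    work = frame.astype(np.int16)               # avoid uint8 wrap-around
    r, g, b = work[..., 0], work[..., 1], work[..., 2]
    mask = (g - np.maximum(r, b)) > threshold   # True where "green screen"
    out = work.copy()
    out[mask] = background[mask]
    return out.astype(np.uint8)

# 2x2 test frame: two green-screen pixels, two foreground pixels.
frame = np.array([[[0, 255, 0], [200, 50, 50]],
                  [[10, 240, 20], [30, 30, 30]]], dtype=np.uint8)
background = np.full((2, 2, 3), 99, dtype=np.uint8)
composite = chroma_key(frame, background)
```

An AI matting model replaces the hard-coded threshold with a learned, per-pixel (often soft) alpha, which is why it can cope with shadows, hair and scenes shot without any green screen at all, but the final compositing step is the same idea.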
2.7. Postvis. It takes place after the physical shoot. It is used when visual effects are to be added to a take but are not yet finished (or, in some cases, their creation has not even started). Post-visualization or postvis can convey the director's vision to the visual effects team, as well as provide the editor with a more accurate version of any unfinished visual effects shot to assemble. In this phase, in addition to the usual professionals of a shoot or recording, a group of specialist technicians skilled in combining the virtual environment with the real one joins in.
The Virtual Production Supervisor, the Performance Capture Specialist, VFX Supervisor, Virtual Image Technician/Grip (VIT), among others, are essential.
3. POST-PRODUCTION.
It builds on the foundations of traditional post-production to finish the image, the effects and the soundtrack, but with many decisions already made through VP, and it entails more complex workflows given the expanded range of ways in which audiovisual works are now consumed by the viewer/audience.
4. MARKETING.
VP opens up a wide range of products to be consumed, and different ways for the public/viewers to enjoy them, striving for greater participation and interactivity between the audiovisual work and reality, and a more immersive experience.
4.1. VR (Virtual Reality)
4.2. AR (Augmented Reality)
4.3. MR (Mixed Reality)
Such is the number of changes and novelties that come with the speedy pace of virtual production (VP) that the name itself should be updated and be referred to as VPxR, a term that refers to virtual production using extended reality (XR) technologies, which integrate physical and digital elements in real time throughout the audiovisual production process.
XR is an umbrella term for technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR), which combine physical and digital environments to deliver immersive experiences to audiences/viewers.
Professionals who have dedicated themselves to visual animation (2D/3D), video game development and visual effects generation (VFX) are completely used to this terminology and the above-mentioned processes, but it is the rest of the professionals in the audiovisual sector (TV, broadcast and cinema) who have to accept that conventional production is vanishing.
In addition to the professional profiles known and involved in traditional production, there are a number of VP specialists such as: Environment Artist, Character Artists, Technical Artists, Programmer, Stage Operator, Previs & Production Designer...
In 2009, members of the ASC, ADG, PGA, ICG and VES formed the Virtual Production Committee. This committee shared case studies on film and television projects that make the most of virtual production and came up with many of its initial definitions.
In 2019, "The Virtual Production Field Guide" (two volumes) from Epic Games (creators of Unreal Engine) became the go-to publication for understanding what has been happening and why we are where we are in 2025: a consolidated way of producing audiovisual content that still has moments of uncertainty and learning ahead, especially when VP is combined with other technologies such as cloud computing, generative artificial intelligence (AI) and real-time communication (5G). These are times of great change in the audiovisual industry, which can only lead to success through continued innovation, permanent and specialized training and highly coordinated multidisciplinary teamwork. There is an obvious need for audiovisual training centers, focused on the reality of production, that prepare technicians for the most immediate present, given the great speed at which technology generates changes in the professional sector.
Classic production (the most traditional kind) and virtual production will coexist, as they already do at present; but professionals in the audiovisual sector, as the main agents of audiovisual content, need to understand these changes and embrace them in order to enjoy the future.
TM BROADCAST explores the gaming industry through one of its key players
maze: the wildest side of esports
with Felix Volkmann, Technical Director
What differentiates TV productions from esports ones?
This is one of the insights shared by Felix Volkmann, a renowned professional with over a decade of experience in linear television who has spent the last three years as Technical Director at maze GmbH, a company specialised in the gaming industry.
Based in Germany, maze GmbH offers a range of audiovisual services and has made a name for itself in this particular sector through its Esport Factory. This facility stands out as the first location in Europe that can accommodate guests overnight while also housing an extensive production department.
This interview gives us the chance to learn, directly from one of the industry’s key players, about the current technological landscape in the esports market and the trends shaping its future. Volkmann also details the equipment the company relies on to successfully execute its projects, discusses the challenges faced in one of them, and shares insights into the upgrades planned for the near future.
Text: Daniel Esparza
First of all, could you tell us what maze GmbH is and share your recent evolution as a company, so our readers can get to know you better?
maze GmbH is a merger of three companies that have existed since 2014. Since 2020, we have had a film and livestream production unit, an advertising agency and the event location “Esport Factory” under the name “maze”.
We are the first pure gaming location in Europe that can accommodate guests overnight and also provides an extensive production department. We are an owner-managed company without outside capital and now work for international companies and brands.
We feel at home in gaming and esports and focus on projects there. We advise companies that want to enter the world of gaming or, for example, want to take human resource measures there. We create concepts for brands and can also implement them in-house with our own staff. Graphic creation, programming, film or livestream production are just as much a part of our portfolio as the design of gaming locations beyond our own location for example.
We’d like to focus on your Esport Factory. What do these facilities include?
It all started with the idea of a private leisure room for gaming. It quickly became clear that the location could also be used commercially and was then gradually developed further. Additional rooms were developed and the possibility for the live production of smaller esports events was created. We now have 1200 square meters of
space in our facility. We receive national and international customers from the private and professional sectors who stay with us for several days to game in groups of up to 30 people. We provide customers with a high-speed fiber optic connection, the latest gaming hardware and also sleeping space. In addition, we hold events of all kinds in the Esport Factory. Be it a company event with a gaming experience or extensive esports tournaments with
multi-camera productions. In addition, content of any kind can be produced in our studios.
What recent technological innovations have you implemented in these studios? Have you acquired any new equipment?
In a fast-moving industry like broadcasting and gaming in particular, our customers always expect modern equipment. That’s why we are of course
constantly adapting our inventory. When it comes to PCs, our equipment probably changes the most, be it devices related to broadcasting or those that we make available to the guests who play games in our location.
Every esports title has individual, special requirements. Every broadcast is different. It is important to respond to these unique requirements each time and put the appropriate technical setup in place.
Do you have any plans to upgrade your studios in the near future?
As I mentioned in the previous question, requirements are changing very quickly and constantly. Our goal is to remain a solid player in the gaming and esports industry and to grow. That’s why updates to our facility are essential.
In fact, we are currently remodeling one of our studios and adapting it to our requirements. A suspended ceiling has been removed to make room for trusses. This gives us more flexibility and space in the studio itself. Until now, we have had to place the relevant technology (especially lighting) on tripods.
A new floor covering and an additional backdrop should expand the production options and give the studio a new look. New patch panels for RJ45, XLR and BNC round off the whole thing.
“A suspended ceiling has been removed to make room for trusses. This gives us more flexibility and space in the studio itself”
“We are planning to renovate our network infrastructure. New and more switches as well as additional fiber optic connections for higher data throughput”
We are also planning to renovate our network infrastructure: new and more switches, as well as additional fiber optic connections for higher data throughput. In this area, requirements are constantly increasing.
In general terms, could you tell us which manufacturers you rely on for the main equipment in your studios?
When it comes to cameras, we currently generally use Sony. Here we have a mixture of camcorders and models from the Alpha 7 IV series. The latter can be used very flexibly for photos and videos in the field of content creation. In terms of video technology, we have probably installed the most technology from Blackmagic Design. The cost-benefit factor is simply unbeatable in this segment. We control our permanently installed light using a grandMA from MA Lighting with the appropriate peripherals. When it comes to sound equipment, we mostly rely on Sennheiser or Behringer.
However, we recently entered into a partnership with Beyerdynamic. Here we swear by the classic “DT-770 pro” for studio headphones. We have since equipped our gaming stations with the high-end “MMX 300 pro” models. Our customers from the gaming sector are very happy with these.
Could you share with us a recent project you worked on that you feel particularly proud of due to the technical challenges it presented?
A major project that we recently implemented was the final event of the DFB (German Football Association) eFootball Cup. This event took place in a customer’s hall. As a general service provider, we designed the event together with the customer and implemented the content and technology. We are proud that the customer described the event as “their best eCup final of all time”. Our challenge was to generate secure live streams to various destinations in an
unfamiliar location with our own network technology and to guarantee the professional players optimal conditions. The aim was to entertain spectators on site and in the live streams. We coordinated a crew of around 40 people on site. In more than 10 hours of broadcasting, we reached more than 1.35 million unique viewers.
Regarding the evolution of esports, what technological advancements do you think have contributed the most to driving this industry forward in recent years?
I think that the affordability of technology is a big factor in this industry. Even smaller companies or private individuals who want to try out this segment can buy more or less professional technology. This simply accelerates innovation.
I myself worked in linear television for over ten years and at least at that time I found quite rigid structures. In esports
productions things are often a bit more spontaneous and wilder. That is both a blessing and a curse.
In esports productions we also often find environments that only use PCs and no longer standalone broadcast devices. Even if it then becomes less reliable, compared to highly professional broadcast technology, the possibilities are many times greater. This means you remain more flexible in some areas.
How are advancements in virtual reality and artificial intelligence impacting your workflows?
We have been offering virtual gaming in our location for several years. There is also good demand for this from customers. In the B2B business, we often rent out our VR gaming as a crowd puller at trade fairs.
“In esports productions things are often a bit more spontaneous and wilder. That is both a blessing and a curse”
An exciting development that we keep seeing is so-called “V-tubing”, where a virtual avatar of a streamer is created using hardware and software. Some streamers are no longer visible “in real life” at all, but only a virtual image that translates body and facial movements into virtual reality using face tracking.
Of course, we use AI tools in our company. The usual programs have made things a lot easier for us when creating graphics.
But to make sure it doesn’t look ordinary, the human individual has to know exactly how to feed the AI. There are also some exciting developments in post-production in content creation. Automatic speech optimization, for example.
In my opinion, we are still at the beginning with all of these things. Of course, it is amazing what is possible in this area. Personally, I’ve already had enough of images that were generated purely by AI.
I suspect that consumers will feel the same way. In general, I’m critical of the development of AI in the context of artistic copyright and its previously inadequate labeling requirements.
Do you have any experience with virtual productions?
At this moment, virtual production is not one of our business cases because we don’t have any requests from customers.
What direction do you think the esports industry might take in the future?
The gaming industry, of which professional esports is also a part, is continuing to grow. Personally, I don’t think there is an end in sight any time soon. The number of gamers is constantly increasing and the generations that have never played games are gradually dying out. Big brands that want to play
a role among young to middle-aged adults have to deal with the topic of gaming and esports.
On the other hand, many events in this industry are dependent on sponsors. In the current global economic situation, there is also a noticeable decline in some areas. In any case, it is a highly competitive market in which many young companies are currently active. These will become even more professional.
Is there anything else you would like to add?
Thank you for your interest in our company and our work! Best regards from the entire maze team! You are always invited when you are in Germany!
“I’m critical of the development of AI in the context of artistic copyright and its previously inadequate labeling requirements”
ISE brings the market back together for the main AV integration show of the year: these will be the key novelties
The event will take place in Barcelona from 4 to 7 February and is estimated to attract more than 70,000 visitors from 170 countries
If there is one event capable of bringing together all the most cutting-edge trends and technological innovations of the global audiovisual integration market in one space, it is ISE (Integrated Systems Europe). After more than 20 years, the fair has experienced significant growth in recent editions, in line with the strength the market is showing, and has become established as an annual meeting point of great impact for professionals in the sector.
Text: Daniel Esparza
ISE will take place at the Fira de Barcelona from 4 to 7 February. With more than 70,000 visitors expected from 170 countries and some 1,400 exhibitors covering an area of 82,000 square meters and eight halls, it promises to serve once again as a barometer of the current state of the market.
Among the elements that arouse the most interest among attendees are the various content and training cycles, which offer professionals the opportunity to get up to date on the latest market trends, an area the organizers say ISE has taken special care of this year. These contents have been designed by ISE itself, AVIXA (Audiovisual and Integrated Experience Association), joint owner of the show, and CEDIA, the association for smart home professionals. The program will span the four days of the show and will include a variety of interactive sessions, expert panels and live demonstrations.
Some references worth highlighting are Live Events Stage, which will explore new possibilities in production of live events; Esports Arena, which will delve into this booming industry; AVIXA Xchange Live, which will address key issues such as AI, cybersecurity or wellbeing; and CEDIA's Smart Home, focused on smart home innovation. The fair will also host two forums focused on business development and investment.
In this regard, it should be noted that the hosts of the show have introduced some improvements on this occasion concerning access options, such as the Content Day Pass, which allows access to ISE Summits and Track Sessions held on the same day at no additional cost.
Regarding the ISE Summits program, it will focus on the following theme areas: Smart building, AV Broadcast, Digital Signage, Smart Workplace, Control rooms and Education Technology. On the other hand, the ISE Tracks will offer a series of additional
sessions on five major trends that are driving the industry: artificial intelligence, audio, cybersecurity, retail and sustainability.
Exhibitor novelties
The main manufacturers in the audiovisual industry have made good use of the weeks prior to the event to offer previews to the media about the products and innovations that they will showcase at their booths. Based on the material received at the closing of this magazine, the TM Broadcast editorial team has prepared a selection with some of the main developments.
Screens and projectors
One of ISE's featured brands will be Panasonic. This manufacturer will offer everything from AVoIP solutions aimed at enhancing collaboration to products and software designed to optimize image quality. Among other products, it will exhibit its all-in-one LED screen featuring the new integrated Intel SDM technology and its 3-chip DLP projector, which reaches a brightness of 40,000 lumens.
Regarding screens, Philips also deserves a mention. PPDS, the exclusive global supplier of Philips Professional Displays, will bring a varied portfolio of indoor and outdoor dvLEDs (Direct View LEDs), digital signage and software solutions to its booth. In addition, it will organize, for the first time ever at ISE, a daily program of shows, which will also include a series of live signings with global partners from the world of sport, telecommunications and processing technology.
Also worth highlighting is the presence of Sony, which will showcase a range of solutions for the retail, corporate, educational, content and virtual production sectors, with a strong focus on sustainability. The BZ-L series of BRAVIA professional monitors, which use recycled materials and feature advanced components such as a System on Chip (SoC) and an Eco panel, will occupy an important space in Sony's booth. They also include an ambient light sensor that optimizes energy consumption.
As regards projectors, the participation of Epson is also worth highlighting. This manufacturer will present its latest advances in high-brightness 4K 3LCD projection technology with a laser light source. On this occasion, it will focus on the relationship between digital technology and the natural world: a key element of the exhibition will be a 6-metre-high ecosphere of mirrors onto which the models in Epson's EB-PQ2220B series will project.
Digital Projection, on the other hand, will showcase its new 1-chip DLP projectors: the E-Vision 16000i-WU and the E-Vision RGB 4K+. They are equipped with the novel High-Efficiency Pixel (HEP) DMD from Texas Instruments for increased color accuracy. The company will also introduce its Nexus electronic platform, which offers greater accuracy and control in projector installations.
Sharp NEC Display Solutions Europe will showcase its new range of professional audiovisual solutions under the Sharp brand aimed at the transport, meetings, control centers, museums or higher education segments. The common element of its professional displays and projectors offering will be a significant reduction in energy consumption, with its Sharp ePaper Display electronic paper screen leading the way. Sharp will also show its new generation of dvLED screens for large digital signage, designed to be integrated into different environments.
Brompton Technology will also be present to showcase its latest advances in LED video processing technology. At its booth, this manufacturer will offer innovations in terms of image quality and hardware, as well as the advanced tools of its Tessera software, aimed at virtual production, live or broadcast events.
Leyard Optoelectronics will take part in ISE through its European subsidiary, Leyard Europe. In addition to the space dedicated to LED products, the brand's booth will focus on LCD screens, DLP rear-projection cubes, controllers and video wall management software from its subsidiaries Planar and Eyevis. Likewise, the firm anticipates the announcement of new display products, to be communicated shortly.
Another relevant reference to follow will be ROE Visual, which will present LED solutions designed for corporate environments, live events, education, broadcast or virtual
production. The most outstanding products at its booth will include the Coral, Topaz Curved, Vanish ST, Black Marble BM2i and Jet LED panel series.
Systems for content production
Another of the show’s highlights will be the technological solutions aimed at producing content in real time. One of the brands to watch in this field is Clear-Com, which will take advantage of the event to launch a new product, along with a series of updates to its current solutions for live shows, educational institutions, corporate venues and museums, among other segments. It will also allocate significant space in its booth to Arcadia Central Station, which combines wired and wireless communication workflows in a single intercom system.
Manufacturer Grass Valley, on the other hand, will present to the market GV Media Universe and the AMPP platform, solutions
that are designed for the live events, sports, places of worship, corporate or education sectors.
Amino will launch a new user interface for its Orchestrate Remote Device Management platform and will showcase its proAV solutions, such as H200 media players, designed for environments where real-time video is key.
Another relevant firm will be LiveU, which will showcase at ISE its IP video portfolio, backed by its LRT (LiveU Reliable Transport) protocol. Among the novelties, it will exhibit for the first time its new LU-REQON1, a device designed as a tactical video encoder for public safety applications.
Also worth mentioning is Telemetrics, which will feature improvements in some of its robotic camera control products, aimed at improving the workflows of production studios and broadcasters. Its novelties include new functions in the control panel for its RCCP-2A robotic cameras and the latest version of its LP-S5 pan/tilt heads.
Another prominent brand is Vizrt, which will offer visitors its various AV Broadcast solutions for content creation. At the show, this manufacturer will focus on innovations aimed at improving corporate communication, enhancing the experience in auditoriums, promoting hybrid education,
promoting brand content or centralizing control rooms.
Blackmagic, to mention another key brand in this field, will focus on showcasing its IP workflow-related products.
Connectivity and interoperability
Another element of interest for the industry is equipment interoperability. Of note here is Matrox Video, which will seek with its solutions to strengthen a commitment towards open standards and the IPMX protocol, which promotes interoperability between devices from different manufacturers within the audiovisual industry. Among them, Avio 2, the first IP KVM extender that supports the IPMX and ST2110 standards.
Lightware Visual Engineering will run live demos of its devices for signal transmission over USB-C and AV-over-IP connections, which enable integration with brands such as Cisco, Poly, Netgear and Sennheiser. In this framework, Taurus TPN will be showcased, a solution designed for medium and large conference rooms.
On the other hand, Utelogy will show its advances in AV management through a series of Cloud products to monitor, control and automate connected infrastructures. Visitors will have the opportunity to see its Tech Tool, which allows device troubleshooting remotely from the Cloud.
Vivolink will also attend the show, offering visitors a range of accessories, connectivity and AV cabling. Among its novelties, the firm will present its new USB-C cables and extenders, its Flexi-lift wall mount and its universal wallboard system for HDMI and USB-C. Alongside this, it will also offer USB-C connectivity solutions for videoconferencing and meeting rooms, in addition to a catalog of hubs, switchers, arrays, extenders and professional AV cables.
Another relevant brand will be Black Box, which will focus on showcasing its solutions to improve real-time operability in control rooms. Emerald DESKVUE PE will play a prominent role, offering simultaneous access to 16 systems, customizable layouts and control through 4K and 5K screens. Visitors will also have the opportunity to explore Kiloview's complete AV-over-IP streaming solutions, powered by the unified Media Center Cradle RF02. This complete solution integrates Kiloview's full range of products and is designed to enhance how AV systems are managed, connected and optimized.
Blustream, for its part, will offer a wide range of new products and updates. The company will showcase its innovations in video walls, audio/ARC, Dante, power management, USB, as well as its wireless and IP solutions.
Professional audio
In regard to sound, there are several reference brands present at the show that should be highlighted. One of them is Calrec and its range of connected technologies, which allow broadcasters and content providers to scale their processing directly in the Cloud. The firm will showcase for the first time at ISE its new production tool, True Control 2.0, along with its IP-based mixing systems, the 24-fader Argo M and the 12-fader Type R. The manufacturer intends this solution to adapt to the changing requirements of content providers, from regional sports to large live events such as the US elections.
It is a must to mention Sennheiser, which will have its Pro Audio and Business Communication solutions in the same space for the first time at ISE. Through this combined presence, the manufacturer will showcase its full portfolio, with solutions for classrooms, meeting rooms, stages and studios.
Lawo will showcase its IP-based solutions. Visitors will be able to see the Crystal broadcast console, a mixing solution built on RAVENNA/AES67 networking standards. Fully compliant with SMPTE ST2110-30/-31 for audio and ST2022-7 for redundancy, this console supports a wide range of audio sources, including AES67, MADI, analog, AES3, and Dante.
Also present will be DiGiCo, which will showcase its live mixing consoles, designed for a wide range of events, from houses of worship to opera houses. Another reference worth keeping in mind is Klang and its immersive personal monitoring mixing system. At ISE, the company anticipates making a special announcement.
DirectOut, a provider of end-to-end audio solutions, will also showcase its most outstanding innovations at the show. Specifically, it will focus on the new Dual Audio Network cards for its Prodigy processor and its Maven platform. These interchangeable cards are aimed at improving the interoperability of both series.
Another prominent firm will be d&b audiotechnik, which will present its latest developments in professional audio systems. At the event, it will introduce its new CCL (Compact Cardioid Line Array) to the market, which will be demonstrated live.
This review of novelties includes the manufacturer Follow-Me, which offers artist-tracking solutions for lighting, immersive audio, video mapping and stage automation applications. Two new devices will be showcased at ISE: the Track-iT Weatherproof Anchor and the Track-iT Tag.
And Holoplot puts an end to this chapter of ISE's outstanding references. This manufacturer will showcase, among other products, its X1 system with the innovative Array Matrix technology, as well as the X2, a more compact version that raises the standards of speech intelligibility.
Sennheiser MKH 8030 & 8060
A winning team, part of a big family.
With the recent addition of the 8030, the latest member of the family, we bring to our pages the lab test of a couple of microphones featuring the latest in technology and quality for the capture of clear, clean and natural sound.
Text:
We would like to begin by thanking Sennheiser Europe for their cooperation and time granted to us with these two test units to enable us to carry out this lab test.
Audio is an absolutely critical part of any serious AV production, and while it is true that initially it may not be as striking as spectacular visual effects, absence of sound or defects found in it can certainly ruin the best production.
We could say that sound is the basis that creates and supports most of
the emotions that are generated in practically all productions. It is capable of reaching our brains directly, without us being able to prevent it. It is not possible to close our ears as we do with our eyes.
We do not mean that audio is better or worse than image. We simply want to highlight that there is a symbiosis, a synergy, between sound and image, in which one enhances and boosts the qualities of the other to levels that would be hard to achieve in any other way. But we must not forget that there are still many productions where
sound is the only element, while visual shows without associated audio are hardly conceived.
For the perfect capture of any type of sound we may want to achieve, whether to amplify it in a live show or to include it in our production, the best manufacturers have spent many years developing a myriad of microphone systems for multiple purposes, improving on each successive attempt results that once seemed impossible to achieve. As we will see, that is the case here.
As a result of this research and technological development, today we bring you two microphones from Sennheiser that, in addition to being excellent as standalone equipment, make up an even more outstanding set when working together. Furthermore, they are part of an even larger family that shares many of those key characteristics to successfully tackle almost any situation or need thanks to their multiple combinations.
The family in question is the MKH-8000 series, which offers a wide range of microphones whose main difference lies in their polar patterns. The complete range currently comprises seven models: the MKH-8020, an omnidirectional mic with no proximity effect; the recently added MKH-8030, bidirectional with a figure-of-eight pattern; the MKH-8040, a classic cardioid; the MKH-8050, a supercardioid; the MKH-8060, supercardioid/lobular; the MKH-8070, lobular; and the MKH-8090, a wide cardioid.
All of them are designed to meet the highest standards in broadcast,
cinematography and concert hall environments.
One of the features that we find quite remarkable is that the entire 8000 series shares the same timbre response, thus improving off-axis linearity. This makes it very easy to combine any models in the range without concerns about differences in coloration.
And although in all types of microphones it is normal that the attenuation is somewhat more noticeable for higher frequencies, the uniformity of coloration in the polar response has also been dealt with to avoid significant alterations as well in this regard.
But, wait a second: do we all know what linearity or coloration are? While these are fundamental, generic concepts in any audio environment and most of our readers will be acquainted with them, we are also aware that some may not be familiar with all of them, or may have come across explanations that were not sufficiently clear.
For this reason, and in view of the importance of these concepts, we have decided to structure this content somewhat differently than usual: instead of making brief clarifications for each term throughout the introduction, we will continue with the content without interruptions, so as not to distract our more experienced readers. Instead, we will list and define the concepts at the end of this piece, with deeper and more detailed explanations for readers who may be a little less familiar with the impact of each of these concepts on the final performance of the microphones.
Returning to our MKH-8000 family, it is especially important to have available a whole range of models in which not only a high uniformity in the intrinsic response of each microphone is maintained regardless of polar pattern (that is: neatness of coloration in the linearity of each microphone), but also that all the models in the range share the same timbre response (that is: there is also uniformity of coloration between different microphone models).
Similarly, there are other outstanding features that are common to all models in the range, so we will discuss them in general before delving into the specifics of the models we have tested.
The first very important concept to clarify relates to the RF capacitor capsule type. We should not confuse an RF microphone, which refers to the radio-frequency transmission used to reach the receiver in a wireless link, with the RF capacitor that makes up the capsule. The latter is one of the technical features enabling the wide frequency range and the neatness and quality of the sound obtained, which is then conveyed by cable.
These types of capsules provide significant advantages: more precise polar patterns, greater bandwidth, better linearity in response, lower internal noise levels, and better results and resilience in adverse environmental conditions of temperature and humidity. They also dispense with the usual compromise between all of these qualities found in classic designs.
The entire 8000 series shares the same timbre response, thus improving off-axis linearity
All these intrinsic qualities are housed, in every case, in solid, robust and compact units, 19 mm in diameter and with different body lengths, with a rough matte paint finish that allows them to blend into all types of environments without adding reflections or light bounces. In all of them, the end with the XLR connector is a separate module (MZX 8000 XLR), provided as standard and removable via a threaded connection.
This modularity provides enormous versatility of use,
by allowing a filter (MZF 8000) to be inserted directly into the microphone body; or to separate the capsule from the bulky XLR connector by means of a specific cable (MZL 8003 or MZL 8010) and then install it in the least intrusive way possible; and even replace it with a digital module (MZD 8000) so as to obtain the audio output directly in AES digital mode.
But going into the details of the specific models that we have had the opportunity to test and enjoy:
The MKH-8060 is a short-barrel model just 18 cm in total length, whose supercardioid/lobular polar pattern presents 10 dB of attenuation at 90º from the axis, and about 12 dB backwards (180º) with occlusion cones centered around 120-135º.
The first use that comes to mind is mounting it on camera, or on booms or in studios where greater directionality than that of a conventional cardioid is required without completely losing pickup from behind the axis.
And thanks to its design and build quality, it will also provide magnificent results in outdoor environments in a consistent and predictable way. According to the specifications, it features a wide frequency response from 50 to 25,000 Hz, a sensitivity of -24 dBV (63 mV/Pa), a maximum sound pressure level of 129 dB SPL, and an equivalent noise level of 11 dB(A), which in practice translates into a clean sound capture with an extremely natural result, even with faint sounds.
Unlike the abovementioned model, the MKH-8030 is a very small unit with a total length of 9 cm. This bidirectional or figure-of-8 model is characterized by mounting the capsule in a way that directs maximum sensitivity to the sides symmetrically, with the occlusion cones forward and backward. In this way, when mounted in parallel to a directional microphone, it is capable of capturing the signal that is required to recreate the stereo space in the mix.
It can be combined with any other model, as its natural partners can be a cardioid MKH-8040 or the MKH-8060 that we have tested. But let's continue with the technical part before going into the results of our tests and impressions.
Its specifications are impressive: frequency response from 30 to 50,000 Hz, sensitivity of -31 dBV (28 mV/Pa), maximum sound pressure level of 139 dB SPL and an equivalent noise level of 13 dB(A).
If the 8060 already gave us figures that exceed our physical capabilities, in this case even more so. Remember that the frequency range of the human ear falls between 20 and 20,000 Hz. But anyone who has toyed with a frequency generator knows that, long before reaching 20 Hz at one end or 20,000 Hz at the other, the sound is perceived as unmistakably attenuated. Few people can actually hear anything clearly at those thresholds.
Realistically speaking, this is where the real difficulties begin, because how do you describe the quality of sound that a microphone is capable of picking up? Graphs are great for understanding the concepts, but in the end listening is a subjective process, different for each person, and one that also depends on the equipment enabling us to "listen" to the microphone: mixing consoles, recorders, cameras, amplifiers, speakers...
So, the only option left has been to compare these microphones: not only with each other, but also against microphones whose response we are already accustomed to. One is a good-quality cardioid with good response across a wide range of frequencies; the other a short mid/high-range camera barrel that, although it has been in use for some time now, keeps yielding good results. Both are from first-class brands. We sought out different scenarios and situations, using different media, and finally reproduced the recordings in different environments.
We recorded human voices, some musical instruments, and different types of noises, from faint ambient sounds to even bangs and hits. We tested them outdoors on a street with traffic, inside a theater during rehearsals, inside a small and inadequately soundproofed studio, and in a bedroom of a house. We recorded on an independent recorder (WAV format, quality 96 kHz, 24 bits), and on the audio tracks of a video camera (linear PCM, 48 kHz, 16 bits).
Then we played back from the recorder itself, from the camera and from a computer, listening through studio headphones, through equipment with class A amplifiers, and through mere passive computer speakers connected to the standard sound card integrated in the motherboard.
In short, taking into account the qualities and caveats of each means of playback, the differences in result are noticeable, and very much so, even on the simplest playback device. Although directionality is the most striking difference at first, which is logical considering the different polar patterns, closer listening reveals the linearity of the response as the sound source deviates from the axis. This is especially true when comparing the results of the two short-barrel microphones, which share similar patterns.
In all cases, both Sennheisers have been capable of capturing more nuances, in a neater way, and with a more natural feel than our “good old microphones”.
It is true that ours are not laboratory reference equipment, but even in the case of equipment that provides steadily good results, we must recognize that our devices have been outperformed by the Sennheisers.
That neat and natural feel in the captured sound, which keeps proportion and balance in all situations, surely comes from many of those frequencies that, although not part of the directly audible spectrum, do end up being perceived somehow; above all when there are as few devices as possible between capture and playback.
Regarding the MKH-8030 microphone, it is the perfect tool to use in M-S (Mid-Side) recordings. This recording technique also uses two microphones to capture the stereo field, although unlike the conventional method that resorts to two equal microphones, it combines a directional microphone (“Mid”, pointing towards the sound source) with a bidirectional microphone (“Side”, capturing the sound from the sides).
Subsequently, the "Side" signal is recorded in two channels -one of them inverted- in order to create the stereo image. The great benefit of this M-S method is that it provides a more accurate control over the stereo field width in the final mix. It is so well thought out for this purpose that not only is its pickup axis perpendicular to the build axis, but it even comes with the clamps to couple with the "Mid" microphone on a single support. Having obtained good results with the MKH-8060, we would have loved to get the opportunity to compare it with the
MKH-8040 too, because of its conventional cardioid pattern that perhaps would have provided a bit more coherent common sound space.
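The Side-duplication step described above can be sketched in a few lines of Python. This is a generic illustration of M-S decoding, not taken from any Sennheiser documentation; the function name and the `width` parameter are our own:

```python
import numpy as np

def ms_decode(mid, side, width=1.0):
    """Decode a Mid-Side pair into left/right stereo.

    mid:   samples from the directional ("Mid") microphone
    side:  samples from the figure-of-eight ("Side") microphone
    width: stereo-width factor; 0 collapses to mono, larger values widen
    """
    left = mid + width * side    # Side added in phase
    right = mid - width * side   # Side with inverted polarity
    return left, right

# Toy example with a handful of values standing in for audio samples:
mid = np.array([0.5, 0.2, -0.1])
side = np.array([0.1, -0.05, 0.0])
left, right = ms_decode(mid, side)
```

Because the stereo image is reconstructed in the mix rather than fixed at capture, the `width` factor can be adjusted after the fact, which is precisely the control over the stereo field that makes the M-S technique attractive.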
To close, we should point out that both units are supplied with a stand mount and a windshield appropriate to their respective size and pattern. The 8060 comes with a very compact case to protect it in transport, for example in the sound engineer's bag or the camera bag, while the 8030 comes in a hard case that also houses the pair of additional clamps mentioned for the M-S configuration.
In short, even though we have only tested a couple of microphones, we have sufficient information on the qualities and results that can be expected from any other member of the family. As for the tested units, we especially welcome the new bidirectional model, a piece that will surely provide great satisfaction to those fortunate professionals who know how to get the most out of it.
Because, as we always say, no matter how good the tool is, it will always work better in good hands.
Regarding the MKH-8030 microphone, it is the perfect tool to use in M-S (Mid-Side) recordings.
As mentioned, please find below the list of technical terms in conceptual order, which should make this content easier to grasp.
Coloration: A key characteristic of microphones that refers to the difference in how different frequencies are captured, slightly altering the tonal features of the original sound and providing results that are, in general, "warmer" or "brighter". Each manufacturer's range features its own sound "style", and it is the end use, the desired result or even personal taste that will make us choose one model or another.
Axis: The direction in which a microphone is most sensitive to sound waves. Frequently, the axis of a microphone coincides with the main axis of its body, although there are a fair number of models in which this axis is perpendicular to the body. It is around this axis that the polar pattern develops.
Attenuation: A parameter related to the reduction of the magnitude of any signal for different reasons. For example, in the case of frequency, it would be the different responsiveness to different frequencies.
Polar pattern: A representation of the different levels of sensitivity attenuation of each microphone depending on the direction in which the sound originates in relation to its axis. The most common are: omnidirectional, cardioid, figure-of-8, lobular, etc.
Linearity: A quality of a microphone to reproduce the variations in sound pressure proportionally and faithfully to the sound it is capturing without altering either the tone or the amplitude of the original source.
dB (decibel): A submultiple of the bel, a logarithmic, relative and dimensionless unit of measurement that always expresses the relationship between two values. It is used for measuring sound signals as well as in electronics, signaling and communications, and is very useful when comparing magnitudes with huge variation ranges. We must always have a reference value: 0 dB is the reference itself, an increase of 3 dB practically doubles the magnitude (in power terms), an increase of 10 dB multiplies it by 10, and an increase of 20 dB multiplies it by 100. The reason for using a logarithmic scale is that it matches the natural response of the human senses to differences in the intensity of stimuli, such as light or sound.
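The ratios mentioned above are easy to verify with a couple of helper functions. This is a generic sketch for power quantities; the function names are our own:

```python
import math

def db_from_ratio(ratio):
    """Express a power ratio in decibels."""
    return 10 * math.log10(ratio)

def ratio_from_db(db):
    """Recover the power ratio expressed by a decibel value."""
    return 10 ** (db / 10)

# Doubling a magnitude is close to +3 dB:
print(round(db_from_ratio(2), 2))            # 3.01
# +10 dB multiplies by 10, +20 dB by 100:
print(ratio_from_db(10), ratio_from_db(20))  # 10.0 100.0
```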
Timbre: The feature of sound that allows us to distinguish voices or instruments from each other, even when they produce the same sound or the same musical note. It arises from the combination of each fundamental frequency with the harmonics of each source, its way of generating sound and its resonating body.
Frequency range: The interval of frequencies that can be accurately captured. That is, maintaining the actual level within the indicated attenuation margins. For example: 20 Hz to 20,000 Hz < 10 dB (with an attenuation of less than 10 dB)
RF Capacitor Capsule: In conventional capacitor capsules, one plate in the capacitor is fixed and the other is the membrane that vibrates due to the effect of sound in order to generate variations in the current passing through it. However, in an RF capacitor microphone the membrane vibrates freely between two fixed plates subjected to a radio frequency voltage that is the carrier. The movement of the diaphragm modulates the frequency of said carrier, and this modulated signal is sent as output. This construction technique provides multiple advantages.
Impedance: The opposition a circuit presents to the passage of alternating current. This is a parameter to take into account especially when connecting the microphone correctly to other equipment such as preamplifiers or mixing consoles.
Self-noise (internal noise): A measure of the electrical signal generated in the circuits when no acoustic signal is being captured. The lower the value, the better, as it allows for a cleaner and clearer recording.
Sensitivity: The ability of a microphone to convert sound pressure into an electrical signal. Although it is one of the values related to the minimum signal threshold that the microphone can pick up, this feature becomes more significant as an indicator of accuracy when capturing weak sounds.
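The sensitivity figures quoted in this review can actually be cross-checked: the dBV value is just the mV/Pa figure expressed relative to 1 V/Pa on a logarithmic scale. A quick sketch (the function name is our own):

```python
import math

def sensitivity_dbv(mv_per_pa):
    """Convert microphone sensitivity from mV/Pa to dBV (re 1 V/Pa)."""
    return 20 * math.log10(mv_per_pa / 1000)

# Figures quoted in the specifications above:
print(round(sensitivity_dbv(63)))  # MKH-8060: -24 dBV
print(round(sensitivity_dbv(28)))  # MKH-8030: -31 dBV
```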
Sound Pressure Level (SPL): An indicator of the maximum sound pressure that the microphone is able to convert into an electrical signal without distortion. It is expressed in dB relative to 20 micropascals, the reference for the human ear's hearing threshold. The higher the value, the better, because it indicates that louder sounds can be captured cleanly.
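As a worked example of the 20-micropascal reference (again a generic sketch, with names of our own), converting between pascals and dB SPL looks like this:

```python
import math

P_REF = 20e-6  # 20 micropascals, the hearing-threshold reference

def spl_to_pascal(db_spl):
    """Convert a level in dB SPL to sound pressure in pascals."""
    return P_REF * 10 ** (db_spl / 20)

def pascal_to_spl(pressure_pa):
    """Convert sound pressure in pascals to dB SPL."""
    return 20 * math.log10(pressure_pa / P_REF)

# The hearing threshold itself is 0 dB SPL by definition:
print(round(pascal_to_spl(20e-6)))  # 0
# The MKH-8030's 139 dB SPL maximum corresponds to roughly 178 Pa:
print(round(spl_to_pascal(139)))    # 178
```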
We hope that this text as a whole proves useful and pleasant to everyone: to readers who are more acquainted with the terms, as they will focus better on the main arguments; and to less familiar readers, to make it easier for them to delve deeper into this interesting and very important field. We look forward to receiving your feedback.