TM Broadcast International #134, October 2024



EDITORIAL

Welcome to the October issue of TM Broadcast International!

As always, we bring you the innovators and game-changers who are defining what is possible in broadcast and audiovisual production. In this issue, some of the industry’s leading lights share their experiences using next-generation technology to deliver results for broadcasters and audiences.

Among our cover stories, and within the world of public broadcasting, TM Broadcast International talks to Adde Granberg, CTO of Sweden’s SVT, about its recent adoption of the Agile Live OTT management platform and its plans for the future. Readers are also treated to a one-on-one with Lucien Peron, CTO of PhotoCineLive, about the company’s recent acquisition of an OB van, integrated by OBTech in partnership with Sony, to take OB to distances never before dreamed of.

Editor in chief

Javier de Martín editor@tmbroadcast.com

Key account manager

Patricia Pérez ppt@tmbroadcast.com

Editorial staff press@tmbroadcast.com

Keeping up with the latest developments, our sports broadcast feature looks at CAMB.AI’s groundbreaking real-time subtitling and translation platform.

Founder and CTO Akshat Prakash explains how the technology won over viewers when it was used by Eurovisionsports.tv for its coverage of the World Athletics Championships. Continuing the trend, the October issue features a close-up with AI technology expert Rubén Nicolás-Sans, who analyzes the role of artificial intelligence in sports production, focusing on the recent Olympic Games in Paris.

Finally, we met with Stuart Campbell, a commercial DoP at heart, who delivers high-impact work for successful series such as ‘Special Ops: Lioness’ and ‘Mayor of Kingstown’, as well as the recently awarded film ‘Drive Back Home’.

This issue looks into the heart of an ever-changing and evolving business to give our readers both the hottest current trends and a glimpse of where we’re all going next.

Creative Direction

Mercedes González mercedes.gonzalez@tmbroadcast.com

Administration

Laura de Diego administration@tmbroadcast.com

Published in Spain ISSN: 2659-5966

TM Broadcast International #134 October 2024

TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43

Sveriges Television (SVT), Sweden’s public broadcaster founded in 1956, has always pushed the boundaries of innovation without abandoning its public service vocation. At the forefront of this technological transformation is Adde Granberg, SVT’s Chief Technology Officer, who brings more than three decades of broadcast technology experience to his role.

OB | PHOTOCINELIVE

PhotoCineLive’s new OB: big capabilities in a compact package

PhotoCineLive, a leading production company known for its cinematic approach to live events, has recently unveiled its latest technological marvel: a compact 4K Ultra HD OB van.

AI | SPORTS PRODUCTION

From automated cameras to personalized content: How AI is transforming sports broadcasting

Artificial Intelligence is revolutionizing sports broadcasting, offering unprecedented opportunities for content creation, audience engagement, and operational efficiency.

In an era where global reach is paramount, CAMB.AI emerges as a game-changer in the world of sports broadcasting. Co-founded by Akshat Prakash and his father, this innovative company is pushing the boundaries of real-time, multilingual content delivery.

DOP | STUART CAMPBELL

Stuart Campbell

From ad man to award-winning cinematographer

Despite his youth, Stuart Campbell is already an accomplished Director of Photography who has made significant contributions to the world of cinematography through his work on notable projects such as Special Ops: Lioness, The Handmaid’s Tale, and Mayor of Kingstown.

Vizrt expands TriCaster possibilities with new flagship and entry-level solutions: TriCaster Vizion and TriCaster Mini S

Vizrt has introduced two new models to its renowned TriCaster family: the TriCaster Vizion and the TriCaster Mini S. According to the company’s statements, these launches cater to both advanced production teams and smaller content creators.

TriCaster Vizion: The future of live production

At the forefront of this release is the TriCaster Vizion, a new flagship model that combines IP connectivity, configurable SDI I/O, and powerful audio mixing with industry-leading graphics. Built with the future of broadcast and live event production in mind, TriCaster Vizion features advanced graphics powered by Viz Flowics and integrates AI-driven automation to simplify complex tasks, freeing production teams to focus on creative execution.

“TriCaster Vizion is the hub of a complete digital media production ecosystem,” said Ulrich Voigt, Global Head of Product Management at Vizrt. “It’s designed to meet the ever-evolving needs of modern media production, empowering users to push creative boundaries, streamline workflows, and deliver the content they envision.”

The flexibility of TriCaster Vizion extends to its licensing options, offering both perpetual and subscription models. Users can also choose between two hardware platforms supported by HP, making it adaptable for different production environments, from broadcast studios to enterprise media networks. With its combination of AI, IP connectivity, and configurable hardware, TriCaster Vizion is engineered for broadcasters, sports networks, and live event producers looking for high-end, versatile production tools.

TriCaster Mini S: Lowering the barriers to professional production

For smaller production teams and content creators, Vizrt has introduced the TriCaster Mini S, an entry-level, software-only solution that brings professional-grade production capabilities within reach. With support for IP connectivity, 4Kp60 streaming, and built-in TriCaster Graphics powered by Viz Flowics, the Mini S offers everything needed for top-tier video production in a more accessible package.

“TriCaster Mini S is equipped to bring you everything needed to start your live production journey,” Voigt added. “It puts the choice in your hands—select your hardware, and you’re ready to create, produce, and stream your stories, your way.”

The TriCaster Mini S eliminates barriers related to budget and experience, offering users the flexibility to choose their own hardware while benefiting from the reliability of the TriCaster brand. Whether it’s live switching, virtual sets, social media publishing, or web streaming, the Mini S ensures that creators at any level can achieve high-quality results with a simple software download.

Expanding the TriCaster legacy

Since its introduction in 2005, the TriCaster line has continuously evolved, providing users with more flexibility and options. The release of TriCaster Vizion and TriCaster Mini S marks a significant expansion of this legacy, offering both top-tier and entry-level solutions without compromising on quality. These models are designed to adapt to the fast-changing media landscape, accommodating both existing and emerging technologies.

Both the TriCaster Vizion and TriCaster Mini S are now available for order, offering production teams more choices and enhanced capabilities to meet the demands of today’s media environment. 

Krotos unveils AI Ambience Generator in Krotos Studio 2.0.3 for custom sound design

Krotos has launched Krotos Studio 2.0.3, introducing a groundbreaking AI Ambience Generator. This new tool offers content creators across film, TV, gaming, and beyond the ability to design custom soundscapes with a simple text-based prompt. By leveraging Krotos’ extensive library of professionally recorded sounds, users can now generate high-quality ambiences tailored specifically to their projects, providing enhanced flexibility and creative control.

AI-powered soundscapes

With the AI Ambience Generator, users of both Krotos Studio and Krotos Studio Pro can create immersive sound environments by merely describing a scene. Whether it’s the serene sounds of a “forest at dawn” or the bustling atmosphere of an “urban city night,” the tool taps into Krotos’ ever-expanding library of sounds, generating fully customizable presets in real-time. Unlike static audio files, the presets created by the AI Ambience Generator are fully editable, enabling users to fine-tune soundscapes according to the unique needs of their projects. The generated soundscapes can then be exported directly to popular DAWs and NLEs such as Adobe Premiere and DaVinci Resolve, ensuring seamless integration into any production workflow.
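
Krotos has not published the generator’s internals, but the described behavior (a text prompt that pulls matching recorded assets into an editable, layered preset) can be illustrated conceptually. In the sketch below, every name, mapping, and class is invented for illustration; this is not Krotos Studio’s API:

```python
# Hypothetical illustration of prompt-driven ambience assembly.
# The library contents, AmbiencePreset class, and default gains are all
# invented; they are not Krotos Studio's actual data or API.
from dataclasses import dataclass, field

# Toy "library": descriptive tags -> recorded source files (names invented).
LIBRARY = {
    "forest": ["birds_distant.wav", "leaves_rustle.wav"],
    "dawn":   ["dawn_chorus.wav"],
    "urban":  ["traffic_hum.wav", "crowd_walla.wav"],
    "night":  ["crickets.wav", "distant_siren.wav"],
}

@dataclass
class AmbiencePreset:
    """An editable preset: each layer keeps its own gain for fine-tuning."""
    layers: dict = field(default_factory=dict)  # file -> gain in dB

def generate_ambience(prompt: str) -> AmbiencePreset:
    """Match prompt words against library tags and stack the hits as layers."""
    preset = AmbiencePreset()
    for word in prompt.lower().split():
        for asset in LIBRARY.get(word, []):
            preset.layers[asset] = -12.0  # default layer gain, adjustable later
    return preset

preset = generate_ambience("forest at dawn")
preset.layers["dawn_chorus.wav"] = -6.0  # presets stay editable, unlike flat files
print(preset.layers)
```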

Streamlining sound design with AI

The introduction of the AI Ambience Generator drastically simplifies the traditionally time-consuming process of sound design. Rather than manually searching through vast sound libraries, creators can now describe their desired ambience and receive precise, high-fidelity results in seconds. This powerful tool allows sound designers, filmmakers, and game developers to focus on their creative vision rather than the technicalities of sound selection.

George Dean, Lead Software Developer at Krotos Studio, highlighted the significance of this new feature: “The AI Ambience Generator is proving to be one of our most popular features. It unlocks a seamless workflow where users can create high-quality, customizable soundscapes in seconds, using only their imagination and a description. Instead of spending time searching through libraries, they can focus on their creative vision and move faster through their projects.”

According to the developers, the new solution’s key benefits are:

› Precision: Users can go beyond traditional sound libraries to generate the exact background ambience they need in seconds.

› Customization: Easily adjust elements of the soundscape with a few simple controls, creating ambiences of any desired length.

› Quality: The tool uses pristine, high-fidelity audio, recorded by top-tier sound engineers, ensuring the final output is free from synthetic artifacts.

Workflow efficiency

This new AI-driven feature enhances the efficiency of the creative process, allowing users to remain fully immersed in their work without the need to comb through endless audio files. By offering both speed and precision, the AI Ambience Generator represents a major leap forward in sound design, empowering creators to deliver exceptional audio experiences more efficiently than ever before.

Krotos’ new AI Ambience Generator sets a new standard in customizable sound design, unlocking greater potential for content creators across the media landscape. 

Pixellot launches next-gen AI camera Air NXT, redefining multi-sport coverage and analytics

The portable Pixellot Air NXT introduces enhanced video quality, sound, and AI-powered features, tailored for soccer, basketball, hockey, lacrosse, and football

Pixellot has unveiled its latest innovation: the Pixellot Air NXT, a next-generation portable camera designed to revolutionize multi-sport broadcasting. Approved by FIBA, this camera is packed with features aimed at delivering superior video coverage for a wide range of sports, including soccer, basketball, ice hockey, football, lacrosse, and field hockey.

The Pixellot Air NXT sets new standards for AI-powered sports streaming with notable improvements in video clarity, sound, and integration into Pixellot’s comprehensive coaching, analytics, and OTT ecosystem. The camera now features digital stereo microphones, smooth NVIDIA-powered video processing, and up to four times the storage capacity, ensuring every moment of action is captured with precision.

Designed for all levels of sports, from grassroots to elite

The Pixellot Air NXT is versatile enough to cater to sports families, local teams, and elite organizations. Its unique Tournament Mode simplifies the live production of consecutive games, making it ideal for streaming large events or back-to-back matches. With a storage capacity of up to 512GB and a battery life of five hours, the Air NXT offers robust performance for long recording sessions. Moreover, the battery charges 33% faster than previous models, ensuring minimal downtime between events.

The camera’s dual-stream functionality is a standout feature. Users can choose between a TV-style dynamic video stream, which automatically zooms in on the ball and key action, or a full-field panoramic view, offering a bird’s-eye perspective of the entire game. This flexibility appeals to both fans and coaches, providing them with the option to watch the game from any angle.
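
Pixellot does not publish how the dynamic stream is produced, but the general virtual-camera technique behind this kind of dual output is to crop a moving TV-style window out of the full panoramic frame, centered on the tracked ball and clamped to the frame edges. A minimal sketch with invented dimensions (a real system would also smooth the window’s motion and scale the crop):

```python
import numpy as np

def virtual_camera(panorama: np.ndarray, ball_xy: tuple,
                   out_w: int = 1280, out_h: int = 720) -> np.ndarray:
    """Crop a TV-style window from a panoramic frame, centered on the ball.

    panorama is an H x W x 3 image; ball_xy is the tracked ball position.
    The window's corner is clamped so the crop never leaves the frame.
    """
    h, w = panorama.shape[:2]
    cx, cy = ball_xy
    x0 = min(max(cx - out_w // 2, 0), w - out_w)
    y0 = min(max(cy - out_h // 2, 0), h - out_h)
    return panorama[y0:y0 + out_h, x0:x0 + out_w]

# Example: a wide panoramic frame with the ball tracked near the left goal.
frame = np.zeros((2160, 7680, 3), dtype=np.uint8)
tv_view = virtual_camera(frame, ball_xy=(900, 1200))
print(tv_view.shape)  # (720, 1280, 3)
```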

Pixellot has also emphasized ease of integration with other platforms. Its open architecture allows seamless connectivity with third-party services such as analytics tools, scouting platforms, advanced graphics solutions, and remote commentary systems. In addition, sports organizations can stream directly to YouTube, social media, third-party OTT platforms, or use Pixellot’s white-label streaming solution tailored for professional and amateur sports leagues.

Next-Generation AI architecture: accuracy and automation

At the heart of the Pixellot Air NXT is a newly developed AI architecture that sets a new benchmark in sports broadcasting. This AI system is powered by deep learning algorithms trained on over five million games from various sports over the past eight years, offering unparalleled precision in ball tracking and game-state recognition.

A key innovation in the Air NXT is its automated highlight generation, capable of capturing both game and individual player highlights. Whether it’s a last-minute equalizer or a game-winning three-pointer, users can easily share clips on social media with just a few clicks. The system also supports manual highlight creation, offering greater flexibility for broadcasters and teams.

FIBA-approved technology

The Pixellot Air NXT has been officially approved by FIBA through its Equipment & Venue Centre’s Approved Software program, validating its suitability for professional basketball competitions. This endorsement underscores Pixellot’s commitment to delivering top-tier sports video solutions trusted by teams and venues worldwide. 

WDR enhanced UEFA Euro 2024 coverage with Riedel Backbone

Westdeutscher Rundfunk (WDR), the German regional public broadcaster, has partnered with Riedel Communications to bolster its communications and signal distribution infrastructure for the broadcast of UEFA Euro 2024. This collaboration, facilitated by Broadcast Solutions and Riedel’s Managed Technology Division, enabled WDR to conduct efficient remote production for linear TV, radio, online, and social media platforms from its Broadcast Center Cologne (BCC).

Opting for a centralized approach from Cologne, WDR successfully minimized travel and operational costs while ensuring comprehensive coverage of UEFA Euro 2024. The broadcast feeds were consolidated at WDR’s control room in Mainz, decoded, and transmitted via fiber to Cologne. Additionally, signals from up to six unilateral cameras with embedded audio were transmitted from each stadium to Cologne, where they were integrated into production workflows across various platforms.

Felix Demandt, Project Manager at Riedel Communications, highlighted the strategic advantages of this setup: “ARD aimed to combine on-location presence with centralized operations. Previously, large OB vans were required at stadiums, but now, with centralized control from Cologne supported by a lean on-site team, production efficiency has significantly improved. This setup allowed for agile responses to dynamic production demands throughout Euro 2024.”

To accommodate the influx of signals at BCC, WDR expanded its infrastructure with 17 MediorNet MicroN UHD nodes. Ten nodes featured the Standard App to synchronize UEFA feeds with the house clock and flexibly distribute video and audio signals, while the remaining seven nodes used the MultiViewer App for scalable multiviewing capabilities. Control and configuration of this infrastructure were managed via the hi human interface from Broadcast Solutions.

Humphrey Hoch, Product Manager at Broadcast Solutions, emphasized the seamless integration: “WDR, familiar with MediorNet and hi human interface from their Ü3 OB van, appreciated the user-friendly, scalable, and reliable interaction between these technologies. With 29 hardware panels and software licenses, WDR had flexible access to the hi system across their operational hubs.”

In addition to enhancing the existing intercom system, WDR deployed a rented Riedel Artist Node with MADI cards to connect commentator stations in stadiums, alongside additional intercom panels to accommodate expanded operations during Euro 2024.

The integration of Euro 2024 into WDR’s ongoing operations required meticulous planning to ensure uninterrupted regular broadcasts. For instance, the Euro 2024 control room was isolated from the main WDR control room to focus exclusively on championship content. Close collaboration from planning to execution ensured that the system met WDR’s stringent requirements and operated seamlessly throughout the event.

“WDR delivered outstanding coverage of Euro 2024 through advanced and efficient remote production,” concluded Demandt. “We are proud to have contributed to this success story with our cutting-edge services and technologies.” 

German regional public broadcaster Westdeutscher Rundfunk (WDR) has implemented a Riedel backbone for communications and signal distribution for the ARD broadcast of the UEFA European Football Championship 2024.

Slovak Telekom chooses MediaKind’s Aquila Broadcast for cutting-edge TV headend transformation

Slovak Telekom, a subsidiary of Deutsche Telekom, has partnered with MediaKind to overhaul its TV distribution headend with the deployment of MediaKind’s Aquila Broadcast solution, marking a major step towards a cloud-ready, next-generation infrastructure. This strategic transformation aims to enhance video quality and optimize operational efficiency, ensuring Slovak Telekom remains at the forefront of broadcast technology.

The multi-year agreement reinforces the strong partnership between Slovak Telekom and MediaKind, focusing on a seamless migration to modern technologies and the future-proofing of content delivery. The deployment will initially be on-premises, leveraging Slovak Telekom’s existing infrastructure, but with the flexibility to transition into hybrid cloud environments. MediaKind’s advanced AI-driven compression technology is central to the upgrade, offering significant improvements in both video quality and bandwidth efficiency.

Juraj Matejka, Technology Tribe Lead TV & Entertainment at Slovak Telekom and T-Mobile Czech Republic, highlighted the importance of this move: “Aquila Broadcast represents the next chapter in our longstanding partnership with MediaKind, ensuring that our viewers receive the highest quality content, delivered with the efficiency and reliability they expect. This migration not only preserves the investments we’ve made in our current headend but also positions us to meet the growing demands of our customers by embracing cloud-native solutions that are both scalable and adaptable.”

One of the key components of this deployment is MediaKind’s AI Compression Technology (ACT), which dynamically optimizes video quality in real-time by adjusting codecs based on the content being processed. This allows Slovak Telekom to deliver exceptional video experiences across various platforms while minimizing bandwidth usage—essential when supporting multiple codecs and resolutions, including MPEG-2, MPEG-4 AVC, and HEVC.
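
MediaKind has not disclosed how ACT makes its decisions; in general, though, content-adaptive encoding works by measuring each segment’s complexity and choosing a codec and bitrate to match. The sketch below illustrates that idea only; the complexity metric, weights, and base rates are all invented:

```python
# Illustrative content-adaptive encoding decision; the metric, weights,
# and bitrates are assumptions for the sketch, not MediaKind's logic.
def pick_encoding(motion_score: float, detail_score: float,
                  hevc_capable: bool) -> tuple:
    """Return (codec, bitrate_kbps) for one content segment.

    motion_score / detail_score are normalized 0..1 complexity measures,
    e.g. from motion vectors and spatial variance of a pre-analysis pass.
    """
    complexity = 0.6 * motion_score + 0.4 * detail_score
    codec = "HEVC" if hevc_capable else "MPEG-4 AVC"
    base = 3000 if codec == "HEVC" else 5000  # assumed 1080p base, kbps
    # Simple content gets fewer bits for the same perceived quality.
    bitrate = int(base * (0.5 + complexity))  # scales 50%..150% of base
    return codec, bitrate

print(pick_encoding(motion_score=0.9, detail_score=0.7, hevc_capable=True))
# ('HEVC', 3960) -- a fast-moving sports segment gets a higher allocation
```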

According to Boris Felts, Chief Product Officer at MediaKind, this project exemplifies how MediaKind’s technology can bridge the gap between traditional broadcast systems and modern cloud-enabled infrastructures:

“Our partnership with Slovak Telekom is a prime example of how MediaKind’s solutions can provide a smooth transition from legacy systems to modern, cloud-ready architectures. Aquila Broadcast enhances operational efficiency and offers a clear path for future upgrades, enabling Slovak Telekom to tailor the viewing experience to meet the demands of tomorrow’s viewers.”

This deployment showcases the scalability and adaptability of MediaKind’s solutions, allowing telecom operators like Slovak Telekom to navigate the rapidly evolving media landscape while maintaining service continuity and high-quality content delivery. With AI-powered compression and the ability to extend into hybrid cloud environments, Slovak Telekom is well-positioned to meet future broadcast challenges and expectations.

BCE expands control room with Custom Consoles’ Module-R desks and MediaWall at Luxembourg NOC

Broadcasting Center Europe (BCE), a key media services provider supporting over 400 clients worldwide, has completed a significant expansion of its Network Operation Centre (NOC) at its Luxembourg headquarters. The project involved extending the control desk and monitor wall setup, all sourced from Custom Consoles, which has been BCE’s partner since the construction of RTL City in 2017.

Rosa Lopez Sanchez, Project Engineer at BCE, explains the decision to continue working with Custom Consoles: “We initially chose Custom Consoles’ Module-R and EditOne desks, along with MediaWall and MediaPost monitor display supports, for their flexibility and unified visual style. Sourcing all the desks and display supports from a single supplier ensured consistency in equipment installation and wiring, and it made future maintenance easier.”

The recent upgrade took full advantage of the modular nature of the Module-R system, which was designed for easy expansion and reconfiguration. BCE added eight new 19-inch sections, equipped with 3U equipment pods and monitor support arms, to the existing control desk.

The MediaWall was also expanded, growing from three sections supporting 12 large display screens to seven sections accommodating a total of 28 displays. “The ease of extending both the Module-R desk and MediaWall was a major benefit. The new components fit seamlessly, and we’re extremely satisfied with the outcome,” added Lopez Sanchez.

The expanded control desk retains its wrap-around, inverted-U design, with increased width to accommodate three or more operators simultaneously. The new layout consists of a 14-bay central section, flanked by two-bay-width corner sections and three-bay-width wings on both sides. Each bay is outfitted with 3U-high angled equipment pods, with additional pods placed in the corner sections. Desk-mounted 24-inch monitors are positioned along the central, corner, and wing sections, providing an optimal working environment for operators.

Custom Consoles’ Module-R system is built for flexibility, allowing users to configure their desks in various sizes and layouts to suit specific operational needs. Unlike fixed furniture, Module-R desks can be expanded or reconfigured throughout their lifecycle. The design supports broadcast industry-standard 19-inch rack-mounting equipment, which can be added at the time of installation or later. Furthermore, individual desk components such as desktops or pods can be replaced or modified to meet evolving requirements.

The MediaWall system, a key feature of BCE’s expanded NOC, enables the construction of large flat-screen monitor displays with its customizable horizontal and vertical support elements. MediaWall can be either fully self-supporting or attached to the studio wall, and it allows screens to be positioned edge-to-edge for a continuous display. All cabling is concealed within the structure, ensuring a clean and efficient installation. MediaWall is available in a variety of configurations, tailored to the specific needs of the control room.

This latest expansion at BCE highlights the importance of adaptable and scalable solutions in broadcast environments. The ability to easily upgrade both the desk and monitor systems allows for future growth and ensures that BCE remains at the forefront of media operations, supporting its global client base with cutting-edge technology and efficient control room setups.

Zattoo partners with Sky Switzerland as early adopter of new stream API product suite

Zattoo, a provider of TV streaming and TV-as-a-Service solutions in Europe, has announced that Sky Switzerland, an OTT pay-TV and VOD provider in the Swiss market, is now an early adopter of Zattoo’s newly launched Stream API product suite. This partnership highlights Sky Switzerland’s role as a lighthouse customer, integrating Zattoo’s advanced streaming services into its IPTV platform, powering applications, frontends, and devices.

The Stream API suite, which debuted at the IBC 2024 show in Amsterdam, is designed to offer operators cutting-edge tools for next-generation streaming. Among its standout features are the Advertising API, Playback SDK, and Playback Telemetry, which are crafted to meet the complex needs of today’s streaming environment. These tools are built on Zattoo’s carrier-grade, fully managed head-end video platform, supported by one of Europe’s largest and most reliable content distribution networks. With over 20 years of technological innovation, Zattoo’s platform is the largest and fastest-growing unicast content delivery solution in Europe, ensuring superior streaming quality.

EVS closes acquisition of MOG Technologies, expanding MediaCeption solution

EVS has officially completed its acquisition of MOG Technologies, marking a significant step in expanding its solutions portfolio and broadening its market reach. This move enhances EVS’s ability to deliver innovative end-to-end solutions tailored to the media and broadcast industry, integrating MOG’s expertise in software-defined video solutions.

Since the initial agreement in August, both companies have made strides in merging their operations, with teams from EVS and MOG working together to ensure a smooth transition. The process has been highly collaborative, and MOG’s team is now fully integrated into EVS.

Strategic synergy to drive innovation

Serge Van Herck, CEO of EVS, emphasized the importance of this acquisition, stating, “The completion of this acquisition is a pivotal moment for us. We are thrilled to welcome the talented team at MOG Technologies and to integrate their state-of-the-art software-defined video solutions into our portfolio. Together, we are better positioned to deliver innovative, end-to-end solutions for the media and broadcast industry, ensuring that our customers continue to receive the best technology and support available.”

This integration is set to bolster EVS’s MediaCeption® solution by incorporating MOG’s ingest and transcoding technologies, enhancing the company’s live production ecosystem. The alignment of the two companies’ products was showcased at the IBC trade show in Amsterdam, where the integration was met with positive customer feedback. A major customer project stemming from this partnership is expected to be announced soon, underscoring the benefits of the acquisition.

Leadership and vision for the future

Luis Miguel Sampaio, former CEO of MOG Technologies, will continue to play a key role in guiding the integration process. He expressed his optimism about the future, stating, “Since the signing of the agreement, we have seen tremendous collaboration between our teams, and the integration is progressing smoothly. The potential of our combined expertise is already becoming evident, and I am excited about the future we are building together.”

This partnership unlocks new avenues for innovation, particularly in the realm of software-defined ingest solutions and cloud-based content management. With MOG now part of EVS, the combined companies are poised to offer an even broader range of solutions designed to streamline workflows from ingest to distribution.

Strengthened innovation and customer support

The acquisition brings with it key strategic benefits, including the expansion of EVS’s R&D capabilities. With the addition of a new R&D center from MOG, EVS now operates six centers globally, accelerating the company’s innovation pipeline and attracting new talent. The collaboration will also enhance customer support by leveraging the expertise and combined portfolios of both companies, ensuring clients receive best-in-class service and solutions.

According to the statements of both companies, as MOG’s team officially joins EVS, the acquisition marks a pivotal moment for both companies, strengthening their commitment to technological excellence and customer success. 

ARRI Showcases Alexa 35 Live – Multicam System

At the NAB Show, the company will have two systems on display at:

> Booth #321 The Studio-B&H

> Booth #421 FUJINON Lenses

ARRI's Francois Gauthier and Peter Crithary will be on hand to answer any questions and guide you through the groundbreaking innovations that the ALEXA 35 Live can bring to your next live production, including the outstanding cinematic ARRI Look.

Blackmagic PYXIS 6K

Blackmagic PYXIS 6K is a high-end digital film camera that produces precise skin tones and rich organic colors. It features a versatile design that lets you build the perfect camera rig for your production! You get a full-frame 36 x 24mm 6K sensor with wide dynamic range and a built-in optical low pass filter that’s been designed to perfectly match the sensor. Plus, there are three models available in EF, PL or L-Mount.

With multiple mounting points and accessory side plates, it’s easy to configure Blackmagic PYXIS into the camera you need it to be! PYXIS’ compact body is made from precision CNC machined aerospace aluminum, which means it is lightweight yet very strong. You can easily mount it on a range of camera rigs such as cranes, gimbals or drones! In addition to the multiple 1/4″ and 3/8″ thread mounts on the top and bottom of the body, Blackmagic PYXIS has a range of side plates that further extend your ability to mount accessories such as handles, microphones or even SSDs. All this means you can build the perfect camera for any production that’s both rugged and reliable.

Brainstorm to showcase advanced virtual production solutions

Visitors can expect to see innovations in Virtual Production, Real-Time 3D Graphics, Newsroom Workflows, and Immersive Presentations that are poised to transform the industry.

Partnering with ATX Event Systems and collaborating with Ikan, ProCyc, and Zeiss, Brainstorm will showcase Suite 6.1, featuring the most recent updates to InfinitySet, Aston, and eStudio. These tools are designed to revolutionize broadcast workflows, virtual production, and XR/AR experiences, delivering high-quality virtual content and elevating the art of storytelling.

Key Features of Suite 6.1

Seamless Unreal Engine Integration:

Suite 6.1 boasts deep integration with Unreal Engine 5.4, allowing InfinitySet to control Unreal Engine (UE5) directly from its interface. This tight integration empowers users to manage and edit UE blueprints, objects, and properties, blending Unreal Engine’s rendering power with Brainstorm’s proprietary eStudio render engine. Users can easily decide which engine renders specific parts of the scene, and the enhanced Unreal Compositor simplifies the blending of real video feeds, such as keyed characters, within UE scenes. This innovation combines the flexibility of Unreal Engine with Brainstorm’s unique TrackFree technology, expanding creative possibilities.

High-Performance Virtual Production on One Workstation:

Powered by the new NVIDIA RTX 6000 Ada Generation GPU, Suite 6 enables high-performance virtual production from a single workstation, reinforcing Brainstorm’s commitment to sustainability. This efficiency reduces hardware demands, which helps clients minimize power consumption, lower costs, and reduce their carbon footprint.

Dual-GPU Support for InfinitySet:

InfinitySet now supports Dual-GPU workstations, splitting rendering tasks across two GPUs to improve performance and visual quality, particularly in XR environments. The inclusion of tools like XR Config and CalibMate simplifies the calibration process with mathematical precision, ensuring a faster setup and reducing potential human error.

Aston’s Next-Level Motion Graphics

For broadcasters, Aston delivers enhanced motion graphics capabilities with updates designed to streamline collaboration. The new Graphic Instances feature simplifies the workflow for design teams by allowing lead designers to create templates that can be modified by other team members without compromising consistency or design integrity. Aston’s StormLogic also introduces advanced transitions and interactions to amplify the impact of on-screen graphics, while new tools for touchscreen interactivity enable designers to create rich, dynamic experiences using multi-touch gestures.

Edison Ecosystem Expands with New Offerings

Brainstorm will also showcase exciting updates to its Edison Ecosystem, which caters to the corporate and educational sectors. New additions include:

> EdisonGO: A mobile capture solution utilizing an iPad for video and tracking, perfect for on-the-go content creation.

> Edison OnDemand: A browser-based tool for delivering immersive presentations, ideal for remote learning or corporate training.

> Edison OnCloud: A cloud-based subscription solution that brings Edison’s powerful features to users wherever they are, without requiring significant hardware investment. This flexible platform is perfect for large institutions with diverse needs, offering tailored functionality based on user roles.

Industry Expertise

“As virtual production becomes increasingly prevalent in both broadcast and film, we’re thrilled to demonstrate how Brainstorm’s solutions can empower creators of all sizes to produce top-tier virtual content,” says Ruben Ruiz, Brainstorm’s EVP and Country Manager for the US and Canada.

“With our decades of experience, we’re not only pushing the boundaries in traditional media but also expanding our reach into the Corporate and Education markets.”

Join Us at NAB New York 2024

Don’t miss the opportunity to witness Brainstorm’s latest innovations firsthand. Visit booth #801 at NAB New York 2024 to discover how we continue to drive the future of real-time 3D graphics and virtual production.

Canon U.S.A.

Canon U.S.A. announces its participation in the 2024 NAB Conference, where it will showcase its latest developments and solutions, including a new next-gen broadcast lens. Throughout its expanded booth (#C3825), Canon will offer attendees new opportunities to explore the future of production and the latest Canon gear, including its new broadcast lens, the CJ27ex7.3B, Canon’s first 2/3” portable lens with a class-leading 27x optical zoom (among portable lenses for 2/3-inch 4K cameras with ENG-style design). The theme of Canon’s NAB booth this year is “Connect, Create, Collaborate.”

“Canon's exhibit at NAB 2024 showcases our cutting-edge technology and forward-thinking innovations,” said Brian Mahar, senior vice president and general manager, Imaging Technologies & Communications Group.

“We welcome any and all visitors to see all that Canon has to offer in the industry.”

The booth will feature Canon’s line of outstanding RF cinema prime lenses and flex zoom lenses, as well as the versatile RF24-105mm F2.8 L IS USM Z lens for hybrid shooters. Canon will also showcase the improved PTZ tracking system, the EOS VR System featuring the Canon RF 5.2mm f/2.8 L Dual Fisheye lens, and the enhanced functionalities of the EOS C500 Mark II, which now supports three new Cinema RAW Light formats through a new firmware update. These displays demonstrate Canon's continuous innovation across broadcast, house of worship, cinema, VR, and social capture solutions.

An LED Virtual Production set will be at the front of the booth, and toward the back Canon will have a display of its low-light SPAD (Single Photon Avalanche Diode) Sensor technology. Education is also fundamental to Canon’s mission, evident in interactive engagements at every counter and all product demos. Professionals across all levels can explore Canon’s complete product lineup and gain insights into the company’s latest advancements in imaging technology through the lens displays and PTZ control room area. Canon will also showcase how its volumetric video technology, which is currently being used to capture live sports, can also be used in a virtual production studio. This technology can help unlock fresh realms of creative and technical exploration for professional production companies, advertising agencies, filmmakers, and beyond.

Clear-Com to Showcase Arcadia and Eclipse Frame

Clear-Com will spotlight the latest version of its EHX configuration software, EHX v14, which now includes fully integrated SIP support—a unique feature in the market. With this capability, Clear-Com is the only intercom provider offering direct SIP integration into the system, making it easier for users to connect with VoIP systems and external communication networks without the need for additional interfaces or third-party hardware. EHX v14 enhances system scalability and flexibility, allowing seamless communication across traditional and IP-based infrastructures. The latest update ensures that production teams benefit from the most advanced communication capabilities available today, making it ideal for broadcasters, live event producers, and other complex operations.

Visitors to the Clear-Com booths will have the opportunity to connect with product experts and explore how Clear-Com’s solutions can meet specific audio and broadcast challenges. Clear-Com emphasizes a professional and engaging booth environment, offering attendees meaningful insights into the latest communication technologies.

DTV Innovations will unveil its new OX Platform

The platform is designed to be very robust while still maintaining the compact form factor of the previous OX-1 platform. Combined with DTV Innovations’ current modular software, it will serve as a base for multiple new products that DTV Innovations will launch soon.

The OX Platform has more than enough built-in capability to support advanced IP workflows, featuring the latest processor with an internal GPU for video encoding/decoding and a wide range of hardware options to support baseband video, compressed-domain video, and multi-standard terrestrial and satellite modulation/demodulation.

Key uses of the platform include, but are not limited to: secure video streaming over the internet, baseband-to-IP video encoding/decoding, PSIP and EPG generation/rebranding, terrestrial edge transcoding, secure video backhauls for confidence monitoring, ATSC 3.0 receiving/translating/capture, and ATSC 3.0 lab signal generation. It can also serve as an integrated satellite receiver-decoder complete with support for multiple IP streaming and terrestrial RF inputs. The OX Platform’s adaptability allows for the development of affordable and easily implementable solutions tailored to specific broadcast needs, making it an ideal choice for broadcasters seeking to enhance and streamline their operations.

“We’ve been working on this platform for a while. We wanted a compact yet versatile platform which can be used in various applications,” said Benitius Handjojo, Chief Executive Officer of DTV Innovations. “The new OX Platform is designed to be very flexible and can be used for various applications. Not only is it very compact, I believe it will come at a very attractive price point. Combined with other offerings from DTV Innovations, we can provide the customer with a complete broadcast workflow.”

Farmerswife to showcase scheduling & management platforms

Attendees will have a chance to explore the latest advancements to the Cirkus scheduling and project management platform (booth 647), which is designed to enhance collaboration, optimize resource scheduling and automate project management. Key updates include streamlined workflows that prevent missed communications and ensure deadlines are met; automation tools that free up teams to focus on more meaningful work by handling repetitive tasks; and a new public requests feature that simplifies project and booking requests, even for users without a Cirkus login.

“Cirkus is evolving to meet the diverse needs of media professionals,” explains Stephen Elliott, CEO of Farmerswife. “We’re excited to unveil these new feature updates to our valued customers and industry colleagues at NAB New York. These updates reflect our ongoing commitment to improving collaboration and workflow efficiency in the media sector.”

In addition to the Cirkus updates, Farmerswife also plans to showcase the newly-released version of its software, Farmerswife 7.1. Released just last month, this version introduces over 100 new features, including advanced equipment tracking, scheduling enhancements and improved budgeting capabilities. The company will also reveal its AI-enhanced product plan, anticipating the growing role of AI in the media industry. Additionally, Farmerswife is developing new features to help clients monitor carbon emissions, aligning with sustainability goals.

"Continuing our legacy of success from previous years, where we welcomed esteemed clients like PBS Fort Wayne, Boutwell Studios and more, we remain committed to delivering cutting-edge solutions that streamline workflows and drive our customers towards success,” notes Jodi Clifford, managing director, USA, at Farmerswife. “With the introduction of Farmerswife 7.1 and the latest updates to Cirkus, we are confident in our ability to provide unparalleled efficiency and productivity for media operations. In line with our continued efforts, we will be unveiling our newest AI enhancement product plans."

Lawo releases Tenth HOME App: The HOME DSK Multi-Layer Downstream Keyer

At this year’s NAB NY (Booth 1013), Lawo will unveil HOME Downstream Keyer, a sophisticated and powerful HOME App for video. Its feature set is closer to what a vision mixer provides than to what operators expect from a downstream keying function of the past.

“HOME Downstream Keyer allows operators to simultaneously and independently transition up to three key layers over an A/B background mix,” comments John Carter, Lawo’s Senior Product Manager, Media Infrastructure. “As Lawo’s first mixing processing app, it comes with eight ST2110-20 receivers and two ST2110-20 transmitters. Each of the three Keyers can perform Luma, Linear or Self keying to provide the desired processing for high-end broadcast graphics designed with transparency and drop shadows.”
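
For readers unfamiliar with the modes Carter names: a linear key carries a separate alpha (key) signal alongside the fill, while luma and self keys derive that alpha from the luminance of a key source or of the fill itself. The per-pixel math is sketched below, assuming float RGB images in 0..1; this illustrates the general compositing technique, not Lawo’s implementation:

```python
import numpy as np

def linear_key(bg, fill, alpha):
    """out = fill*alpha + bg*(1 - alpha): the separate alpha channel lets
    soft transparency and drop shadows in the graphics survive the mix."""
    return fill * alpha + bg * (1.0 - alpha)

def luma_key(bg, fill, key_src, clip=0.1, gain=5.0):
    """Derive alpha from a luminance signal; with key_src = fill this is a
    self key. clip/gain shape the soft edge (values here are arbitrary)."""
    luma = (0.2126 * key_src[..., 0] + 0.7152 * key_src[..., 1]
            + 0.0722 * key_src[..., 2])
    alpha = np.clip((luma - clip) * gain, 0.0, 1.0)[..., None]
    return linear_key(bg, fill, alpha)

bg = np.full((1080, 1920, 3), 0.3)                    # A/B background mix
fill = np.zeros((1080, 1920, 3)); fill[..., 0] = 1.0  # red graphic fill
alpha = np.full((1080, 1920, 1), 0.5)                 # 50% transparent key
print(linear_key(bg, fill, alpha)[0, 0])              # [0.65 0.15 0.15]
```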

The HOME Downstream Keyer app provides comprehensive keying and mixing capabilities to meet production needs and enable additional channel branding within a state-of-the-art, agile infrastructure, based on triggers issued via the VSM control system where desired.

As part of the HOME Apps suite, HOME Downstream Keyer can be deployed in both SDR and HDR workflows. It supports SMPTE ST2110 (incl. JPEG XS), NDI, SRT, and Dante AV* transport as required. Like all HOME Apps, and a growing number of other Lawo solutions, HOME Downstream Keyer can either be licensed perpetually or solicited on an ad-hoc basis through Lawo Flex Subscription credits.

“With HOME Downstream Keyer, Lawo’s HOME Apps have reached a new level of sophistication,” adds Jeremy Courtney, Senior Director, CTO Office. “Architecting the container-based HOME Apps platform from scratch for bare-metal compute has allowed Lawo’s software developers to avoid the pitfalls and limitations of lifting and shifting existing technology to a different platform. The advantages of this clean-slate microservice approach are now becoming apparent.”

(*) Available in a future release of HOME Apps, licensable.

Marshall showcases latest line of PTZ Cameras

Marshall Electronics will highlight a range of its cutting-edge cameras at NAB New York 2024 (Booth 1352). Marshall offers a wide selection of PTZ and POV cameras, including three new PTZs, the CV612 camera, CV620 camera and CV625 camera.

The CV612 PTZ camera, available in black (CV612-TBI) and white (CV612-TWI) models, can automatically track, follow and frame presenters using AI facial learning for accurate and smooth self-adjusting maneuvers. With advanced AI tracking, the PTZ camera “learns” who the prime subject is and won’t “lose” the presenter when other persons or objects enter the shot. Equipped with 12X optical and 15X digital zoom, the CV612 offers a 4.1mm-49.2mm (6.6-70.3 degree) field of view. It is built around a professional-grade 2-Megapixel 1/2.8-inch, high-quality HD CMOS sensor, which provides format resolutions from 1920x1080 and 1280x720 down to 640x480. With a maximum range of 18m (almost 60 feet), it is an ideal solution for smaller venues, interviews, web production, live presentations, classrooms and houses of worship, to name a few.
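
Those zoom and field-of-view figures follow from standard lens geometry: horizontal FOV = 2·atan(sensor width / (2·focal length)), and the optical zoom ratio is the long focal length divided by the short. A quick check, assuming an active sensor width of roughly 5.8 mm for a 1/2.8-inch chip (an approximation, which is why the results land near rather than exactly on the quoted 6.6-70.3 degrees):

```python
import math

def hfov_deg(focal_mm: float, sensor_w_mm: float) -> float:
    """Horizontal field of view from focal length and sensor width."""
    return math.degrees(2 * math.atan(sensor_w_mm / (2 * focal_mm)))

SENSOR_W = 5.8  # mm, approximate 1/2.8-inch active width (assumption)
print(round(hfov_deg(4.1, SENSOR_W), 1))   # ~70.5 deg at the wide end
print(round(hfov_deg(49.2, SENSOR_W), 1))  # ~6.7 deg at full telephoto
print(round(49.2 / 4.1, 1))                # 12.0 -- the optical zoom ratio
```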

Marshall will also showcase its new CV620 20x camera, which is the perfect fit for a range of mid-sized rooms, including corporate communications and teleconferencing, as well as traditional broadcast applications. With 3G-SDI, HDMI, USB 3.0 and IP outputs and 20x zoom, the new camera has a 57-degree field of view and supports up to 1080p/60. The CV620 features a high-quality Sony sensor and is also available in black (CV620-B12) and white models (CV620-W12) to fit varying aesthetics. It also features Mic and Line-in audio options as well as a range of IP protocols, including RTSP, RTMP, RTMPS, SRT and MPEG-TS.

The CV625 camera will also be on display at NAB NY and features a 25x zoom that is ideal for large venue auto tracking, broadcast events, show stages, educational lecture halls and large format houses of worship. Available in black (CV625-TB) and white models (CV625-TW), the camera features a high performance 8-Megapixel 1/1.8-inch sensor to capture crisp high-quality UHD video. The camera delivers up to 3840 x 2160 resolution at 60fps with an 83-degree horizontal angle-of-view. The CV625 provides versatility for various applications with its range of outputs including HDMI, 3G-SDI, Ethernet, RTSP streaming and USB 3.0.

Mediaproxy to spotlight A3SA capability

Mediaproxy is to highlight its recently acquired A3SA certification on booth 1243. The company will demonstrate its full range of compliance monitoring and multiviewer systems, which now provide decryption for ATSC 3.0-based broadcasts being used in the US for NextGen TV.

Mediaproxy announced during August that it is now certified by the ATSC (Advanced Television Systems Committee) 3.0 Security Authority (A3SA) to decrypt ATSC 3.0 streams as part of the LogServer and Monwall product ranges. This gives broadcasters a comprehensive set of software-based tools to monitor and analyze ATSC 3.0 streams without the need for bespoke hardware devices.

The introduction of A3SA security protocols into the LogServer logging and monitoring engine provides broadcasters with a cost-effective option for monitoring both encrypted to-air and off-air signals. For to-air and hand-off applications, an on-premises LogServer system can simply take the encrypted STLTP (Studio to Transmitter Link Transport Protocol) output of the packager directly from the stream. This guarantees confidence in what is sent to the transmitter straight from the local IP network.
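
STLTP is typically carried as RTP over UDP multicast on the studio network, which is what makes this tap-off straightforward: a monitoring probe simply subscribes to the same multicast group the packager feeds toward the transmitter. A generic illustration of that subscription step (group address and port are placeholders; this is standard socket code, not LogServer’s internals):

```python
import socket
import struct

GROUP, PORT = "239.0.0.1", 30000  # placeholder multicast group and port

# Standard IPv4 multicast receive: bind the port, then join the group.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, addr = sock.recvfrom(2048)
# STLTP wraps its payload in RTP; the low 7 bits of byte 1 carry the RTP
# payload type, which distinguishes the tunneled inner streams.
print(f"{len(packet)} bytes from {addr}, RTP payload type {packet[1] & 0x7F}")
```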

In off-air monitoring situations it is possible to use inexpensive and familiar integrated receiver/decoders (IRDs) that do not provide decryption but do have outputs of the encrypted IP streams through DASH/ROUTE (Dynamic Adaptive Streaming over HTTP/Real-time Object delivery over Unidirectional Transport) or the recently approved ALP (A/330, Link-Layer Protocol). This enables ATSC 3.0 compliance on IRDs for streaming platforms such as HDHomeRun, with LogServer handling the DRM aspects for all channel sources.

The new ATSC 3.0 security feature is also available on Mediaproxy's ever-expanding Monwall multiviewer, which accommodates low-latency monitoring of both encrypted outgoing and return signals. Alongside the features on Monwall and LogServer, Mediaproxy has developed an extended toolset for advanced IP packet and table analysis of live broadcast streams or PCAP (packet) captures, which can be accessed via easy-to-use user interfaces.

"We are excited to be debuting LogServer and Monwall with the recently acquired A3SA certification at NAB Show New York," comments Mediaproxy chief executive Erik Otto. "Recent figures show that NextGen TV is now reaching 75 percent of US television households so broadcasters need to be aware of ways they can efficiently and cost-effectively monitor and decrypt ATSC 3.0 streams. This is an important advance for Mediaproxy and NAB Show New York is the ideal place to showcase it."

Panasonic Connect launches auto framing capabilities to further strengthen production workflows

The livestreaming market is projected to grow 23 percent from 2024 to 2030. Production teams need flexible and scalable AV tools to streamline operations in fast-paced and challenging environments to keep up with this growth. At NAB New York (booth #611), Panasonic Connect North America will showcase its latest AV technology to create powerful workflows for the broadcast and live event markets. This includes its new auto framing capabilities, available as a built-in Auto Framing Feature for the AW-UE160 PTZ Camera and an Advanced Auto Framing plug-in for Panasonic’s Media Production Suite. These new capabilities will help operators create on-site efficiencies and streamline the delivery of high-quality content.

Automating the production workflow

Panasonic’s new auto framing feature incorporates auto tracking, image recognition and auto framing technologies to deliver precise, user-defined camera framing for high-quality, natural content. Framing presets can be combined, including multi-subject group shots, while advanced human body detection allows consistent subject headroom.
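
Panasonic has not published the algorithm, but the core step in any headroom-aware auto framing is converting a detected subject box into a crop that keeps the gap above the head at a fixed fraction of the frame. A simplified sketch, with all ratios and names invented for illustration:

```python
def frame_subject(face_box, frame_w, frame_h,
                  headroom=0.12, face_frac=0.22, aspect=16 / 9):
    """Compute a crop (x, y, w, h) with consistent headroom: the gap above
    the head is `headroom` of crop height, and the face spans `face_frac`
    of crop height. Both ratios are invented for this sketch."""
    x0, y0, x1, y1 = face_box            # detected face box in pixels
    crop_h = (y1 - y0) / face_frac       # size the crop around the face
    crop_w = crop_h * aspect
    crop_y = y0 - headroom * crop_h      # leave headroom above the head
    crop_x = (x0 + x1) / 2 - crop_w / 2  # center horizontally on the face
    # Clamp so the crop stays within the captured frame.
    crop_x = min(max(crop_x, 0), frame_w - crop_w)
    crop_y = min(max(crop_y, 0), frame_h - crop_h)
    return int(crop_x), int(crop_y), int(crop_w), int(crop_h)

# Face detected at (900, 300)-(1020, 460) in a 3840x2160 frame:
print(frame_subject((900, 300, 1020, 460), 3840, 2160))  # (313, 212, 1292, 727)
```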

The built-in Auto Framing Feature for Panasonic’s AW-UE160 PTZ camera will be available from CY2025 Q1 via firmware updates, and the Advanced Auto Framing plug-in for Panasonic’s Media Production Suite software platform will be available from CY2025 Q2. The Advanced Auto Framing plug-in will enable auto framing for various PTZs across the Panasonic lineup and will support multi-camera setups for auto framing to enhance subject-detection accuracy and operability, while facial recognition technology enables optimal framing for specified individuals.

Capturing cinema-quality, versatile points of view

Panasonic’s new AW-UB50 and AW-UB10 box-style 4K multi-purpose cameras bring cinema-quality shooting to a compact, versatile form factor ideal for remote scenarios across live events, corporate, and higher education. The AW-UB50 and AW-UB10, available in early 2025, are equipped with large-format sensors and deliver exceptional image quality with expressive detail. The AW-UB50 uses a full-size MOS sensor with 24.2M effective pixels while the AW-UB10 features a 4/3 type 10.3M effective pixel MOS sensor to give production teams the ability to document multiple subjects in a single frame. With Panasonic’s remote system, these cameras can remotely facilitate video production workflows.

Similarly, the recently launched AW-UE30 PTZ camera boasts a compact design and quiet operation to ensure that in-person experiences across corporate and higher education environments go uninterrupted by technology. Its enhanced video streaming with 4K/30p images makes remote participants feel as if they are in the room, while its horizontal wide-angle coverage of up to 74.1° and a maximum 20x optical zoom allow teams to capture more points of view - even in challenging room layouts.

Navigating complex environments with easy-to-use solutions

LED lights and display walls bring shifting lighting conditions and color accuracy challenges to live event venues. The AK-UCX100 4K Studio Camera addresses these challenges with a new imager featuring color science that accounts for the specific colors generated by LED lighting and display walls, as well as increased resolution and dynamic range that easily adapts to different environments. The new imager is less prone to moiré and contains an optical filter wheel setting to assist in managing these artifacts. Venues can create the same visual experiences for viewers regardless of location. Creators can operate the AK-UCX100 with other Panasonic studio cameras utilizing the same accessories, including the UCU700 Camera Control Unit, or they can operate the AK-UCX100 as a standalone device.

The new KAIROS Emulator lets creators pre-program a show without requiring a physical hardware connection, ensuring production teams always have access to adjust shows. Creators can also use it as a training tool to familiarize themselves with the KAIROS feature set. The new smart routing feature maximizes input capacity by switching sources between active and idle states, enabling more inputs than the KAIROS Core can handle alone. Additionally, built-in cinematic effects such as film look and glow enhance a production's creativity, eliminating the need for external processing.

QuickLink

QuickLink is launching the new QuickLink StudioPro™ Controller control panel at NAB New York (Booth 1145). Also at NAB New York, QuickLink will be demonstrating StudioEdge™, QuickLink’s one-stop video conferencing solution, fresh off its triumphant success at IBC 2024.

Complementing the award-winning QuickLink StudioPro multicamera production suite, named Best of Show at IBC 2024, the StudioPro Controller is the ultimate control panel for small, medium and large productions, boasting 134 fully customizable LCD buttons for scene switching, project configuration, macros, audio control, and more.

“The StudioPro Controller revolutionizes video production switching by placing LCD screens underneath all 134 tactile push buttons on the device,” said Richard Rees, CEO of QuickLink. “These keys can display virtually anything users choose including text, images, animation, and video. Users can display motion graphics, video sources – even remote guests whose images appear beneath the button – making it easier to find and switch to the exact item needed without having to memorize placement.”

StudioPro Controller can natively control over 600 devices, including DMX lights, PTZ cameras, and many other studio elements, offering fingertip control of every aspect of a production. A single StudioPro Controller unit can control a production program and four additional mix outputs with complete flexibility and endless possibilities for productions in conjunction with StudioPro.

Rees continued, “StudioPro Controller is built from the ground up with forward-thinking simplicity in mind, aimed at eliminating the complexity and antiquation of legacy systems. For all of its incredibly powerful and advanced features, StudioPro Controller is easy for anyone to use, including absolute beginners to multicamera production, while still providing creative power on par with far more expensive and complex systems.”

Built into StudioPro, or offered as a standalone solution, StudioEdge is a simple, elegant, and easy to use solution for introducing high-quality remote guests from virtually anywhere. StudioEdge incorporates every major video conferencing platform into production starting with 4K broadcast-quality QuickLink StudioCall and also including Microsoft Teams, Zoom, Skype, and more.

TAG to bring technologies that maximize data utilization, simplify troubleshooting capabilities and improve efficiency

TAG Video Systems is presenting its Realtime Media Performance platform at NAB Show New York 2024, enhanced with numerous award-winning technologies. The platform will be demonstrated with capabilities designed to increase user confidence, maximize data utilization, simplify troubleshooting, and improve efficiency. These highlights and more can be seen at TAG’s Booth #659.

QC Elements

NAB Show New York marks the introduction of TAG’s QC Elements (Quality Control Elements) to the North American broadcast community. Designed to enrich visualization and simplify troubleshooting, QC Elements enables a vast array of elements to be arranged on a multiviewer tile for a comprehensive view of critical technical data, bringing engineers a built-in quality control station within the display. Engineers can tailor each tile with details on codecs, frame rates, resolutions, closed captions, audio channels, transport streams, and more. This powerful tool expedites the identification and analysis of technical issues, helping users achieve a high level of IP workflow stability, ensuring seamless quality of service at scale and a flawless viewer experience.

Operator Console

TAG will also present its award-winning Operator Console, an intuitive touch panel console. Operator Console provides a live, interactive view of the multiviewer output on any touch-controlled device, bringing a simplified, focused, and intuitive approach to managing TAG multiviewers at any scale. Operators can effortlessly switch between layouts, modify tile content, and isolate audio sources for verification. Additionally, they gain instant access to pre-defined and customizable configurations that simplify complex workflows even in the most demanding operational environments, allowing operators to focus on creativity and workflow integrity.

Language Detection

Language Detection, another multi-award-winning feature, is a revolutionary innovation that transforms how operators ensure quality and compliance across large-scale operations with multiple closed captions and language subtitles. The technology not only identifies the subtitles’ language but also calculates and analyzes quality based on language-specific dictionaries. Language Detection’s automation eliminates the manual labor traditionally associated with language identification, minimizing the risk of human error and freeing up operators to focus on more strategic tasks.
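
A minimal sketch of the dictionary-based scoring idea, with made-up word lists; this illustrates the concept only, not TAG's actual algorithm:

```python
# A minimal sketch of dictionary-based subtitle language scoring, assuming
# per-language word lists; illustrative only, not TAG's algorithm.
DICTIONARIES = {
    "en": {"the", "and", "goal", "with", "from"},
    "es": {"el", "la", "y", "gol", "con", "desde"},
    "sv": {"och", "att", "med", "mål", "från"},
}

def score_language(subtitle_text: str) -> dict[str, float]:
    """Return the fraction of words found in each language's dictionary."""
    words = [w.strip(".,!?").lower() for w in subtitle_text.split()]
    return {
        lang: sum(w in vocab for w in words) / max(len(words), 1)
        for lang, vocab in DICTIONARIES.items()
    }

scores = score_language("El gol llegó con la segunda parte")
best = max(scores, key=scores.get)  # -> "es"
```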

SSIM and QoS Tools

TAG will show its Realtime Media Performance platform integrated with the Structural Similarity Index Measure (SSIM) and advanced Quality of Service (QoS) tools to provide a comprehensive, real-time video quality assessment. This award-winning combination ensures that objective quality assessments and technical performance metrics are continuously monitored, guaranteeing an exceptional viewer experience. By leveraging TAG’s Content Matching technology, SSIM and QoS features work together to detect, alarm, and report quality issues in real time, ensuring seamless video delivery.
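
For reference, SSIM compares a reference image $x$ and a processed image $y$ over local windows using their means, variances, and covariance, with small stabilizing constants $C_1$ and $C_2$; the standard definition is:

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)}$$

A score of 1 indicates the delivered output is structurally identical to the reference.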

Advanced User Management

TAG’s new advanced user management (RBAC – Role-Based Access Control) functionality defines and assigns roles within the organization for strengthened security and reduced human error. Knowing that access can be limited to specific outputs, sources, or networks brings an additional measure of confidence to those managing multiple networks under the same TAG Media Control System (MCS).
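
Conceptually, RBAC reduces to a mapping from roles to permitted actions. A minimal sketch follows; the roles and permissions are illustrative assumptions, not TAG's actual MCS model:

```python
# A minimal role-based access control sketch; roles and permissions are
# illustrative assumptions, not TAG's actual MCS model.
ROLES = {
    "viewer":   {"view_mosaic"},
    "operator": {"view_mosaic", "switch_layout", "isolate_audio"},
    "engineer": {"view_mosaic", "switch_layout", "isolate_audio",
                 "edit_sources", "configure_probes"},
}

def is_allowed(role: str, permission: str) -> bool:
    return permission in ROLES.get(role, set())

assert is_allowed("operator", "switch_layout")
assert not is_allowed("viewer", "edit_sources")
```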

Group Lens

New York broadcasters will get a preview of Group Lens, a functionality designed to simplify the monitoring and troubleshooting of massive media networks. Group Lens is an interactive visual tool that provides an at-a-glance overview of the entire network's status on a single screen. The intuitive tile-based display, combined with color coding that instantly highlights potential issues, enables quick root-cause analysis and efficient troubleshooting, preventing minor glitches from escalating into major disruptions.

Telos Alliance® emphasizes Media Solutions Initiative

“In today’s media environment, settling for ‘good enough’ audio processing is definitely not good enough to retain your audience,” says Telos Alliance M&E Strategy Advisor Costa Nikols. “The audio requirement for today’s TV broadcaster isn’t just meeting standards, it’s about surpassing viewers’ expectations to deliver exciting, compelling audio with every program.”

“Telos Alliance is focused on helping broadcasters deliver the most engaging content possible. Today’s viewers are demanding better loudness control, better language options, and immersive audio,” Nikols notes. “We’re highlighting our new Media Solutions Initiative at NAB New York to let broadcasters know that the tools they need to achieve these goals are available from a single, trusted source.”

Visitors to the Telos Alliance booth at NAB New York will be able to meet Costa Nikols, Sharon Quigley, and other key members of the Media Solutions team, and see how automated file-based Minnetonka Audio® AudioTools® Server software seamlessly integrates into a larger Next Generation Audio (NGA) processing workflow.

Booth visitors will also experience the seamless accessibility of the new Telos Infinity® VIP Virtual Commentator Panel (VCP), which replicates physical commentator panels in the virtual domain, making full-featured commentary and intercom features available using any computer, tablet, or smartphone.

Telos Alliance will also be introducing the new family of Linear Acoustic® AERO-series DTV audio processors, the new Axia® StudioCore console engine, the StudioEdge high-density I/O endpoint device, the Telos VX® Duo broadcast VoIP telephone system, Telos® Zephyr Connect multi-codec gateway software, Omnia® Forza FM audio processing software, and Axia Quasar AoIP mixing consoles.

ARRI Solutions, VCI, and ROE Visual deliver flagship virtual production facility in Gran Canaria

› Gran Canaria Studios supports the island’s initiative to become one of Europe’s audiovisual production hotspots

› The project comprised lead design, structural engineering, and LED wall installation, all designed and delivered by ARRI Solutions, Video Cine Import (VCI), and ROE Visual

› Flexible facility design maximizes stage utilization and efficiency

The initial phase of Gran Canaria Studios, one of Europe’s largest virtual production stages to date, has been completed, offering a cutting-edge environment for international content producers looking to leverage Gran Canaria’s film location incentives.

Located on the 1,200 sqm stage 1 of the Gran Canaria Platos studio lot, the virtual production infrastructure has been designed and delivered by ARRI Solutions, integrator Video Cine Import (VCI), and ROE Visual, a manufacturer of premium LED products.

The Economic Promotion Agency of Gran Canaria (SPEGC) commissioned the studio to accommodate both independent and high-end national and international productions. It will strengthen the island’s reputation as a thriving film production hotspot.

“The film industry is one of the region’s economic pillars; we are very committed to the development of this industry on the island, and Gran Canaria is spearheading efforts to become a leading hub for TV and film,” states Cosme García Falcón, Managing Director of the SPEGC. “Creating this virtual stage offers new services to content companies from across the world, with the best state-of-the-art studio and premium technical and operational services created by companies of recognized prestige such as ARRI, VCI, and ROE Visual. Gran Canaria Studios will help production companies develop their stories and position the island of Gran Canaria, the region, and Spain as one of the best prepared and most profitable audiovisual production hubs in the market.”

In close cooperation with ROE Visual and VCI’s specialist teams, ARRI Solutions’ experts delivered comprehensive consultancy, system design, supervision, and commissioning for the project. The team prescribed a unique fixed curved space with highly flexible, adjustable elements to accommodate a wide range of productions, maximizing stage utilization.

The volume comprises a fixed 40 m x 8 m main horseshoe-shaped wall and two adjustable ceilings (90 sqm and 45 sqm) suspended from variable-speed motors, allowing for height and tilt changes. To optimize the studio space, the whole volume is suspended from an innovative internal steel structure designed to hold the volume and support lighting.

Further configurations can be delivered through two supplementary 3 m x 5 m mobile LED walls or “side wings,” providing continuity with the main wall or as part of the physical scene set.
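
Taken together, and assuming the quoted dimensions all describe active LED surface, the volume offers on the order of

$$40 \times 8 + 90 + 45 + 2\,(3 \times 5) = 320 + 135 + 30 = 485\ \text{m}^2$$

of configurable LED canvas.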

The whole volume uses ROE Visual Black Pearl BP2V2 LED panels, known for their exceptional image quality and reliability. These panels provide filmmakers with high-contrast, detailed in-camera visuals, along with the flexibility and precision needed for cutting-edge virtual production and XR applications.

“We are very pleased that ARRI Solutions had the chance to lead the technical realization of this flagship project. We would like to thank every partner involved for the excellent collaboration and the SPEGC for the extremely positive feedback on the result,” emphasizes Kevin Schwutke, Senior Vice President Business Unit Solutions ARRI Group. “Gran Canaria already has a rich heritage of attracting major features, commercials, and independent productions. With this new studio, production companies now have access to next-generation technologies within a flexible, efficient facility. We are sure that the Gran Canaria Studios will become a key asset in enhancing the production infrastructure in the region.”

“We are thrilled to have contributed to the development of the Gran Canaria Studios,” says Olaf Sperwer, Business Development Manager for ROE Visual. “This project underscores our commitment to delivering top-tier LED solutions that meet the highest industry standards. Collaboration with key partners has been instrumental. Leading the integration, ARRI Solutions delivered a seamless and efficient implementation. ROE Visual is proud to have completed this advanced virtual production studio, providing the European and broader international film and TV production communities with the best virtual production technology available. Raising the bar in virtual production will help put locations like the Gran Canaria Studios on the map for premium international film productions. We thank the SPEGC and our partners VCI and ARRI for the successful teamwork in this unique project.”

“We are excited to announce the launch of the largest, most advanced virtual production studio in Spain, which is also among Europe’s most significant, cutting-edge virtual production studios. This landmark project, coordinated by VCI, was made possible through our collaboration with ARRI’s cutting-edge engineering and ROE’s state-of-the-art LED wall technology. We’re pleased to have worked with SPEGC, whose commitment and foresight were essential in making the project a reality. This studio sets a new benchmark for virtual production, advancing the development of a comprehensive digital ecosystem, including skilled professionals in this emerging field,” comments Fernando Baselga, CEO of Video Cine Import.

https://youtu.be/Ougpl7_a0Ag?si=Gs7ifFNgeLytYNJh

Cloud, AI, and Viewer-First: Inside SVT’s Technological Transformation

Sveriges Television (SVT), Sweden’s public broadcaster founded in 1956, has always pushed the boundaries of innovation without abandoning its public service vocation. At the forefront of this technological transformation is Adde Granberg, SVT’s Chief Technology Officer, who brings more than three decades of broadcast technology experience to his role.

Adde Granberg, SVT’s Chief Technology Officer

In a recent interview with TM Broadcast International, Granberg shared insights into SVT’s ongoing journey towards a more efficient, flexible and viewer-centric broadcasting model. From implementing cloud-based production solutions to developing in-house content delivery networks, SVT is embracing cutting-edge technologies to improve its operations and viewer experience.

Granberg’s vision for SVT’s future is clear: moving from hardware-centric to software-based solutions, prioritizing end-user quality, and leveraging artificial intelligence to democratize content creation.

This conversation provides insight into the challenges and opportunities facing public broadcasters in an era of rapid technological change, and how SVT is positioning itself to remain relevant and influential in the evolving media landscape.

You have wide experience in broadcast technology, including outside broadcast. Can you tell us a little about the changes you’ve lived through during your professional life?

It’s not that simple, I guess. My journey in broadcast technology has been quite extensive. It began in 1992 when we launched YouthTunnel in Sweden, which was intended to be the Swedish equivalent of MTV. I started as a sound engineer and later transitioned to video editing. Over the years, I’ve been involved with various production companies, including commercial ventures, event organizations, and OB truck operations where I was instrumental in building outside broadcast vehicles.

Throughout this period, spanning from 1992 to the present, the industry has largely maintained its traditional methods of operation. However, we’re now at a critical juncture. The advent of smartphones in the early 2000s initially didn’t significantly impact our industry, but that’s rapidly changing.

After more than three decades in this field, I believe we’re on the cusp of a monumental shift. The democratization of content creation, fueled by AI and smartphone technology, has revolutionized what’s possible. Tasks that were once the exclusive domain of professionals can now be accomplished by anyone with a smartphone. These devices have become all-in-one production studios, capable of filming, editing, graphics creation, translation, publishing, and even broadcasting from home.

This technological leap represents both a challenge and an opportunity for our industry. As professionals, we must acknowledge and adapt to this new reality. From my perspective, the broadcast production industry needs to evolve rapidly to remain relevant. We must embrace these changes and find ways to leverage new technologies to enhance our offerings. The alternative is to risk becoming obsolete in an increasingly dynamic and democratized media landscape.

SVT recently partnered with Agile Content to develop Agile Live. How do you see this cloud-based production solution changing SVT’s workflow in the coming years?

It’s a journey we’ve just started. Our partnership with Agile Content to implement Agile Live marks the beginning of a significant transformation in our production workflow. The primary focus of this cloud-based solution is to enhance the quality of experience for our end users. We believe that our viewers should be the ones setting the standard for production quality, not our internal production teams.

This shift represents a fundamental change in our approach. We’re moving away from a hardware-centric industry towards a software-based one. This transition allows us, for the first time, to truly prioritize the end user’s needs and expectations.

The beauty of this software-based approach is that it eliminates the need for specialized hardware to process video data. Instead, we can now use software running on standard hardware to achieve the same results, if not better. This not only provides us with more flexibility but also opens up new possibilities for innovation and cost-efficiency.

This cloud-based solution also offers substantial benefits in terms of environmental impact and operational costs. Our initial deployment for the Royal Rally of Scandinavia demonstrated significant reductions in both production costs and carbon footprint.

I really anticipate that this technology will revolutionize our entire production process. It will enable us to be more agile, responsive to viewer preferences, and environmentally responsible. We’re at the dawn of a new era in media production, and SVT is committed to leading this change, always keeping our audience at the forefront of our innovations.

The Royal Rally of Scandinavia broadcast using Agile Live showed significant cost and environmental savings. Can you elaborate on the specific challenges and successes you encountered during this implementation?

The implementation of Agile Live for the Royal Rally of Scandinavia broadcast was a significant milestone in our ongoing journey towards more efficient and sustainable production methods. We’ve been pioneering remote and home production since 2012, and this latest project represents a culmination of our efforts and learnings.

The success we achieved with Agile Live builds upon our long-standing commitment to remote production. This approach has allowed us to significantly reduce our environmental impact, optimize our investments, and streamline our workflows. As a public broadcaster, these cost savings translate directly into our ability to produce more content for our audience, which is crucial to our mission.

One of the key challenges we faced was adapting our existing infrastructure and workflows to this new cloud-based system. However, the flexibility it offers is remarkable. For instance, during the pandemic, we realized the full potential of this technology. If a host or technician fell ill, we could seamlessly bring in replacements from different locations without any travel requirements.

The Royal Rally broadcast demonstrated that we could take this concept even further. With a stable network connection, we can now potentially conduct productions from anywhere - even from our homes. This level of flexibility and redundancy is a game-changer for our industry.

Moreover, this shift towards software-based production aligns perfectly with the future we envision for broadcasting. It’s not just about cost savings or environmental benefits - though these are significant. It’s about reimagining how we create and deliver content in a world where technology is rapidly evolving and audience expectations are constantly changing.

The success of this implementation has reinforced our belief that cloud-based, software-driven production is the way forward. It’s allowing us to be more agile, more responsive to our audience’s needs, and more sustainable in our operations. As we continue to refine and expand this approach, we’re excited about the possibilities it opens up for SVT and the broader broadcasting industry.

How does SVT plan to balance the adoption of innovative technologies like Agile Live with maintaining traditional broadcast infrastructure?

Balancing the adoption of innovative technologies like Agile Live with our traditional broadcast infrastructure is a complex but necessary process. The good news is that the new infrastructure is generally more cost-effective, which allows us to gradually build it up and transition over time.

However, this transition presents significant challenges. It requires a complete shift in mindset, new support structures, and an entirely different business model. Moving from hardware- to software-based solutions introduces a new set of challenges in terms of system stability, upgrade processes, and overall management.

This transition is particularly tricky because we need to maintain our 24/7, 365-day operations while simultaneously undertaking this transformative journey. Based on our current projections, I estimate it will take about three to five years to complete this transformation. This timeline accounts for not only the technical implementation but also the crucial aspects of staff education and adaptation to the new systems.

A significant part of this process involves training our crew and allowing them time to become comfortable with the new technologies. We also need to ensure the new systems are stable and reliable before fully integrating them into our critical operations.

I don’t see any alternative to this path forward. Embracing these changes is essential for SVT to remain competitive and efficient in the rapidly evolving media landscape. If I were to resist this transition, I’d be acting like an outdated, resistant-to-change CTO, which is certainly not my approach. Our industry is at a pivotal point, and it’s our responsibility to lead the way in adopting these innovative technologies while ensuring a smooth transition for our team and our viewers.

With the development of the Elastic Frame Protocol (EFP), how do you envision this open-source technology impacting the broader broadcast industry?

The development of the Elastic Frame Protocol (EFP) represents a significant shift in our industry’s approach to content delivery. While I can’t definitively claim it’s the future protocol, I believe its underlying principles are crucial for the evolution of broadcasting.

The key advantage of EFP lies in its focus on end-user quality and efficiency. Traditional protocols often introduce unnecessary overhead, but EFP is designed to minimize waste in the system. This aligns with our goal at SVT to prioritize the viewer experience above all else.
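
The core framing idea - fragmenting a media frame into numbered chunks that can be reassembled after transport - can be sketched in a few lines. The following is a simplified illustration of the concept, not EFP’s actual wire format:

```python
# A simplified sketch of the elastic-framing idea behind protocols like EFP:
# a media frame is split into numbered fragments for transport and
# reassembled on arrival. Illustrative only; not EFP's actual wire format.
from dataclasses import dataclass

MTU_PAYLOAD = 1400  # bytes of media data per packet (assumption)

@dataclass
class Fragment:
    frame_id: int
    index: int
    total: int
    payload: bytes

def fragment(frame_id: int, frame: bytes) -> list[Fragment]:
    chunks = [frame[i:i + MTU_PAYLOAD] for i in range(0, len(frame), MTU_PAYLOAD)]
    return [Fragment(frame_id, i, len(chunks), c) for i, c in enumerate(chunks)]

def reassemble(fragments: list[Fragment]) -> bytes:
    frags = sorted(fragments, key=lambda f: f.index)
    if len(frags) != frags[0].total:
        raise ValueError("incomplete frame")  # real protocols handle loss here
    return b"".join(f.payload for f in frags)

frame = bytes(5000)
assert reassemble(fragment(7, frame)) == frame
```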

Furthermore, EFP embodies a broader trend in our industry - the move from hardware-centric to software-based solutions. This transition allows for greater flexibility and scalability, which is essential in today’s rapidly changing media landscape. However, it’s important to note that adopting new technologies like EFP isn’t just about the technical aspects. It requires a fundamental shift in mindset throughout the organization. We need to ensure that our teams, especially those managing these technologies, are equipped to handle this new approach.

Protocols like EFP are tools that enable us to better serve our audience. They allow us to focus on what truly matters - delivering high-quality content to our viewers in the most efficient way possible. As we continue to explore and implement these technologies, we always keep the end-user experience at the forefront of our decisions.

SVT has been transitioning to IP-based infrastructure. Can you update us on the progress of this transition and any unexpected challenges or benefits you’ve encountered?

Our transition to IP-based infrastructure at SVT is progressing, but it’s important to clarify that we’re focusing on IP in general, not specifically on SMPTE 2110. This distinction is crucial because SMPTE 2110 is a production industry standard that isn’t necessarily optimized for end-user needs, which is a key priority for us.

One of the main challenges we’re facing is the shift from hardware-centric to software-based production. This requires a significant transformation in our workforce and infrastructure. Our network and hosting personnel need to develop a deep understanding of broadcast demands, which is a new requirement for many of them. We’re also building a support organization capable of managing software-based systems rather than traditional hardware.

The biggest hurdle in this transition is the mindset change required at all levels of our organization. For instance, we’re reimagining user interfaces for production tools. Traditional vision mixers have numerous buttons and functions that aren’t always necessary for modern production workflows. We’re working on creating more intuitive, user-friendly interfaces that anyone can use, not just specialized engineers.

This simplification is challenging because we have many skilled professionals who are experts with traditional equipment. Now, we need to develop interfaces that are accessible to a broader range of users, potentially including myself. The goal is to make production tools as user-friendly as smartphones - you shouldn’t need to be an engineer to operate them effectively.

Ultimately, we’re striving to bring the ease-of-use principles of consumer technology into our broadcast industry. This is a difficult but necessary evolution to ensure we remain efficient and relevant in the changing media landscape.

With the move towards cloud-based solutions, how is SVT addressing potential concerns about data security and sovereignty?

At SVT, we’re taking a multi-faceted approach to address data security and sovereignty concerns as we move towards cloud-based solutions. First and foremost, we’re developing our own on-premises cloud infrastructure. This allows us to maintain direct control over our most sensitive data and critical operations, ensuring we can manage security in the best possible manner.

Simultaneously, we’re investing significantly in building a robust IT security department. This is a new and crucial initiative for us, as we recognize the evolving nature of digital threats in the cloud environment. While it’s certainly a challenge, we’re not starting from scratch. We’re looking to industries that have already navigated these waters successfully, such as banking and automotive, to learn from their experiences and best practices.

It’s important to note that while we’re seeing cost savings in our transition from hardware- to software-based solutions, we’re reallocating a substantial portion of these savings into strengthening our hosting capabilities, network infrastructure, and IT security measures. This reallocation of resources is essential to ensure we can fully leverage the benefits of cloud technology while maintaining the highest standards of data protection and sovereignty.

To summarize, we’re taking a balanced approach - embracing the advantages of cloud solutions while remaining acutely aware of the responsibilities and challenges they bring. Our goal is to create a secure, efficient, and future-proof infrastructure that serves both our operational needs and our commitment to protecting our viewers’ data.

SVT Play has been a focus area for technological improvement. What recent enhancements have you made to improve user experience and content delivery on this platform?

SVT Play has indeed been a key focus area for our technological improvements, and we’re continuously working to enhance it. The most significant change we’re implementing is shifting our perspective on distribution quality. We’re aiming to make SVT Play’s distribution quality the benchmark for our entire production process.

This shift requires a fundamental change in how we operate. Historically, our distribution and production teams have worked somewhat independently, each with its own culture and processes. We’re now working to merge these teams and structures, recognizing that the distribution aspect of SVT Play will be crucial in shaping how we produce content.

This integration is not just about technology; it’s about fostering closer cooperation between our distribution and production teams. We’re breaking down silos and encouraging these previously separate units to work as one cohesive entity. This approach ensures that distribution considerations are factored in from the very beginning of the production process, rather than being an afterthought.

We’ve been investing significant effort internally to facilitate this cultural and operational shift. It’s a challenging process as we’re essentially merging two distinct cultures into one unified approach. However, we believe this integration is essential for improving the user experience and content delivery on SVT Play. By aligning our production methods with the distribution capabilities and requirements of SVT Play, we aim to create a more seamless and high-quality viewing experience for our audience. This approach will allow us to optimize our content for digital platforms from the outset, ensuring better performance and user satisfaction on SVT Play.

While we’ve made progress, this is an ongoing process. We’re committed to continuing this transformation to keep SVT Play at the forefront of digital content delivery in Sweden.

How is SVT leveraging AI and machine learning in its production and distribution processes? And what developments in this area have you seen in the past year?

We’re actively integrating AI and machine learning into various aspects of our production and distribution processes. The most visible application for our audience is in real-time transcription and subtitling. Right now, we’re able to provide AI-generated subtitles for 50 to 60% of our live transmissions across our broadcast, linear channels, and OTT platforms. This has significantly improved accessibility for our viewers.

Beyond this, we’re exploring AI’s potential in several other areas. We’re investigating its use in content recommendation systems to enhance the viewer experience on our platforms. In postproduction, we’re testing AI-assisted editing tools, which could streamline our workflow considerably. We’re also leveraging AI for more efficient archival searches, allowing us to better utilize our vast content library.

Additionally, we’re experimenting with AI for content optimization, such as article summarization and spell-checking. However, it’s crucial to emphasize that all our AI implementations are overseen by human professionals. We maintain a ‘human-in-the-loop’ approach, ensuring that AI outputs are reviewed and refined by our team before being released to the public.
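
Conceptually, a ‘human-in-the-loop’ pipeline inserts a review stage between the model and the audience. A bare-bones sketch follows; the function names are illustrative assumptions, not SVT’s actual pipeline:

```python
# A minimal human-in-the-loop sketch: AI output is queued for review and
# only published after approval. Illustrative only; not SVT's pipeline.
from queue import Queue

review_queue: Queue[dict] = Queue()

def ai_generate_subtitle(audio_chunk: bytes) -> str:
    # Placeholder for a speech-to-text model call.
    return "transcribed line of dialogue"

def submit_for_review(item: dict) -> None:
    review_queue.put(item)

def editor_review(item: dict) -> dict:
    # A human editor corrects names, punctuation, and mishearings here.
    item["text"] = item["text"].strip()
    item["approved"] = True
    return item

def publish(item: dict) -> None:
    if item.get("approved"):
        print(f"[{item['t']:.1f}s] {item['text']}")

chunk = b"\x00" * 3200
submit_for_review({"t": 12.0, "text": ai_generate_subtitle(chunk)})
publish(editor_review(review_queue.get()))
```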

This past year has seen rapid advancements in AI capabilities, and we’re continually evaluating new technologies that could benefit our operations and, ultimately, our audience. However, we’re mindful of maintaining the high standards of quality and accuracy that our viewers expect from SVT. Our goal is to harness AI as a tool to enhance our capabilities, not to replace human creativity and judgment in content creation and delivery.

How is SVT preparing for the potential widespread adoption of 8K broadcasting, and what timeline do you foresee for this technology?

When it comes to 8K broadcasting, we need to approach this from the perspective of our end users. The key question isn’t when we’ll be ready to implement 8K technology, but rather when our audience will truly benefit from it.

At SVT, we’re constantly monitoring technological advancements and their potential impact on viewer experience. However, we’re also mindful of the practical realities of widespread adoption. The timeline for 8K broadcasting will largely depend on factors such as consumer demand, the availability of 8K-capable devices in homes, and the infrastructure needed to support such high-resolution content delivery.

Currently, our focus is on optimizing our existing HD and 4K offerings, ensuring we deliver the best possible quality within these formats. We’re investing in flexible, software-based production systems that can adapt to future resolutions, including 8K, when the time is right.

That said, we don’t see 8K becoming mainstream in the near future. The jump from 4K to 8K isn’t as noticeable to the average viewer as previous resolution upgrades were, especially on typical home screen sizes. Additionally, the bandwidth requirements for 8K are substantial, which presents challenges for both broadcast and streaming delivery.

Our approach is to remain technologically prepared while letting consumer needs and market trends guide our implementation timeline. We’ll continue to experiment with 8K in controlled environments, but widespread adoption will likely be a gradual process over the next 5-10 years, depending on how quickly the ecosystem evolves.

At the end of the day, our goal at SVT is to provide the best viewing experience possible to our audience. If and when 8K becomes truly beneficial and feasible for our viewers, we’ll be ready to embrace it.

SVT has been working on its own CDN. Can you provide an update on this project and how it’s impacting content delivery?

Our investment in developing our own Content Delivery Network (CDN) has been a significant success for SVT. Currently, we’re delivering approximately 80% of our online content through our proprietary CDN. This initiative has been crucial for us on multiple fronts.

Firstly, it gives us greater control over our distribution costs. In the rapidly evolving digital landscape, managing expenses is vital for a public broadcaster like SVT. Our in-house CDN allows us to optimize these costs effectively.

Secondly, and perhaps more importantly, it provides us with enhanced quality control. By managing our own CDN, we can ensure a higher and more consistent quality of service to our viewers. This aligns with our commitment to delivering the best possible user experience across all our digital platforms.

The decision to handle our own CDN has proven to be a strategic advantage. It not only improves our operational efficiency but also allows us to be more responsive to our audience’s needs. As we continue to expand our digital offerings, having this level of control over our content delivery infrastructure will be increasingly valuable.

Looking ahead, we see our CDN as a key component in our broader strategy to adapt to the changing media consumption habits of our audience while maintaining the high standards expected of SVT as Sweden’s public broadcaster.

Looking ahead, what emerging technologies do you believe will have the biggest impact on broadcasting and content production at SVT in the next 5 years?

I believe artificial intelligence will have the most significant impact on broadcasting and content production at SVT in the next five years. AI is driving a democratization of technology that will fundamentally change our industry.

Many of the specialized tasks we’ve traditionally performed within our broadcast facilities will become more accessible and easier for a wider range of people to accomplish. This democratization means that the tools and capabilities once exclusive to professional broadcasters will be available to a much broader audience.

We need to recognize and adapt to this shift. The work that has been our privilege and expertise for decades is now becoming accessible to many. This doesn’t mean our role becomes obsolete, but it does mean we need to evolve.

At SVT, we’re focusing on how to leverage this democratization to enhance our content creation and delivery. We’re exploring ways to use AI to improve our production efficiency, personalize content for our viewers, and create new, innovative formats that engage our audience.

The challenge and opportunity for us lie in how we can use our professional experience and judgment to elevate the content we produce, even as the technical barriers to entry in our field continue to lower. We need to focus on adding value through our storytelling expertise, our understanding of our audience, and our commitment to quality and ethical journalism.

This technological shift will require us to be more agile, innovative, and open to new ways of working. It’s an exciting time that presents both challenges and opportunities for SVT and the broader broadcasting industry.

PhotoCineLive's new OB: big capabilities in a compact package

PhotoCineLive, a leading production company known for its cinematic approach to live events, has recently unveiled its latest technological marvel: a compact 4K Ultra HD OB van. This cutting-edge mobile production unit, designed in collaboration with Sony and OBTruck.tv, represents a significant leap forward in the company's capabilities, combining versatility with high-end performance.

At the heart of this new OB van is Sony's XVS-G1 switcher, a powerhouse that packs 24 inputs into a mere 4U of rack space. This compact yet robust setup allows PhotoCineLive to handle up to 20 cameras simultaneously, making it ideal for fashion shows and concerts in urban settings where space is at a premium. The van's ability to fit into standard delivery truck parking spaces gives the company an unprecedented level of flexibility in its choice of locations.

Perhaps most intriguing is PhotoCineLive's innovative workflow, which incorporates live grading from log signals.

This approach allows for the recording of full-resolution log footage in-camera while simultaneously providing a lighter record of isolated feeds with or without applied LUTs. Combined with an EDL of the live switching, this system offers clients the ability to export high-resolution edits of the program within minutes of the show's conclusion, setting a new standard for rapid turnaround in live production.
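
As a rough illustration of that conform step, matching live-switch events against the full-resolution log recordings could be sketched as follows; the file names and simplified EDL structure are our own assumptions (real EDLs such as CMX3600 carry far more metadata):

```python
# Simplified conform sketch: cut full-resolution ISO recordings according
# to the live switch's edit decision list. File names and the EDL format
# are illustrative assumptions, not PhotoCineLive's actual workflow.
from dataclasses import dataclass

@dataclass
class Event:
    source: str      # which ISO feed was on the program bus
    t_in: float      # seconds from show start
    t_out: float

edl = [
    Event("cam01_log.mxf", 0.0, 12.5),
    Event("cam07_log.mxf", 12.5, 31.0),
    Event("cam03_log.mxf", 31.0, 45.2),
]

def conform(edl: list[Event]) -> list[str]:
    """Emit one cut instruction per event against the full-res log media."""
    return [f"cut {e.source} {e.t_in:.2f}-{e.t_out:.2f} -> program" for e in edl]

for line in conform(edl):
    print(line)
```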

What specifications did PhotoCineLive give to Sony and OBTruck.tv for the completion of the 4K Ultra HD OB van?

For our new 4K Ultra HD OB van, we had very specific requirements that aligned with our unique approach to live production. We asked OBTruck.tv to design a compact vehicle that could navigate urban environments easily while still providing a comfortable and professional working space. The key was to create a non-expandable truck that could fit into standard delivery parking spots, giving us unprecedented flexibility in location choices. On the technical side, we specified a system capable of handling up to 20 cameras simultaneously, which is crucial for our high-end fashion shows and concert productions.

We approached Sony with these requirements, seeking a switcher that could meet our needs for both compactness and performance. Sony recommended the XVS-G1, which perfectly fit our specifications. This powerful switcher packs 24 inputs into just 4U of rack space, making it ideal for our space-constrained setup.

Additionally, we requested integration of our preferred VENICE cameras and Sony reference monitors to maintain the cinematic quality we're known for in live productions.

The overall goal was to create a versatile, efficient system that could be set up quickly while still providing the premium feel and technical capabilities our clients expect from PhotoCineLive.

Can you walk us through the key features of PhotoCineLive's new 4K Ultra HD OB van and how it differs from your previous setups?

Our new 4K Ultra HD OB van represents a significant leap forward in terms of versatility and efficiency. The key features that set it apart are its compact design, advanced technical capabilities, and streamlined workflow.

Firstly, the van's compact size allows us to access urban locations that were previously challenging for larger OB trucks. This non-expandable design fits into standard delivery parking spots, giving us unprecedented flexibility in location choices. Technically, we've upgraded to handle up to 20 cameras simultaneously, powered by Sony's XVS-G1 switcher. This powerful unit provides 24 inputs in just 4U of rack space, significantly enhancing our routing capabilities and multiviewer options.

To support our signature cinematic look in live productions, we've also integrated Sony VENICE 2 cameras. These full-frame cameras output log signals in UHD, allowing us to maintain high image quality throughout our workflow.

Another crucial feature is our innovative approach to live grading from log signals. This allows us to record full-resolution log footage in-camera while simultaneously providing lighter isolated feeds with or without applied LUTs. Combined with an EDL of the live switching, we can deliver high-resolution edits to clients within minutes of a show's conclusion. While our core workflow remains similar to our previous setups, these upgrades significantly enhance our capabilities and let us set up faster. The integration of tactical multi-fiber cables has also streamlined our setup process, reducing cable runs and improving efficiency.

What were the main considerations in choosing Sony's XVS-G1 switcher for this OB van, and how does it enhance your production capabilities?

Our choice of the Sony XVS-G1 switcher was driven by several key factors that align perfectly with our production needs and the unique constraints of our new OB van.

First and foremost, we required a Sony switcher to maintain consistency with our existing workflow and equipment. Given the limited space in our compact OB van, which has only two racks, we needed a solution that could deliver high performance in a small footprint. The XVS-G1 met these requirements exceptionally well. It offers full 12G-SDI capability and provides 24 inputs in just 4U of rack space, making it an ideal fit for our space-constrained environment. This compact design allows us to maximize our production capabilities without compromising on available workspace within the van.

Additionally, the XVS-G1 provides a wide range of creative functions that can quickly adapt to our specific needs. For instance, it can easily handle loads of FX to transition to a 9:16 image format, which is increasingly important in today's content landscape. The switcher's reliability and production comfort, without technical compromises, allow us to maintain our signature high-quality, cinematic approach to live productions. This is particularly important when working with our VENICE cameras, as the XVS-G1 can fully leverage their capabilities in live environments.

How do the VENICE 2 cameras contribute to achieving the cinematic look PhotoCineLive is known for in live productions?

The Sony VENICE 2 cameras are instrumental in achieving the cinematic look that has become PhotoCineLive's signature in live productions. These full-frame cameras offer several key features that contribute significantly to our visual aesthetic. For starters, the VENICE 2's ability to output log signals in UHD resolution is crucial for our workflow. This allows us to capture a wide dynamic range and preserve maximum flexibility for color grading, which is essential for achieving that cinematic quality in a live environment.

The camera's dual base ISO of 800/3200 provides exceptional low-light performance, allowing us to maintain image quality even in challenging lighting conditions often encountered in live events. This feature, combined with the camera's 16 stops of exposure latitude, gives us the flexibility to capture both bright highlights and deep shadows in a single frame, contributing to that film-like look.
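
To put those 16 stops in perspective: each stop doubles the captured light range, so the quoted latitude corresponds to a contrast ratio of roughly

$$2^{16} = 65{,}536 : 1$$

between the brightest and darkest values recoverable in a single frame.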

Additionally, this camera’s color science is particularly well-suited for accurately rendering skin tones, which is critical in our fashion show productions. The natural, lifelike color reproduction helps us maintain the high-end, cinematic feel our clients expect.

We've been working with this series for quite some time now, and in our experience, we haven't found another camera system that can match its efficiency and quality for our specific needs in live production, with the exception of the ARRI Alexa 35, which we also use regularly; recently, we produced the Balenciaga Show at Milan Fashion Week with eight of them using our ERECA "universal CCUs" in a flypack.

Could you elaborate on the decision to prioritize a compact design over an expandable one for this OB van?

Our decision to prioritize a compact design over an expandable one for our new OB van was driven by the specific nature of our work and the challenges we often face in urban environments.

PhotoCineLive specializes in high-end productions, primarily fashion shows and concerts, which frequently take place in city centers. These locations typically have limited parking options that aren't designed to accommodate full-size OB trucks. By opting for a compact design, we've significantly increased our ability to access these venues efficiently.

The non-expandable design allows us to fit into standard delivery truck parking spots, which are much more common in urban areas. This flexibility is crucial for our operations, as it enables us to position our OB van closer to the event location, reducing cable runs and setup time.

Moreover, this compact design doesn't compromise on capabilities. Despite its size, our new OB van can handle up to 20 cameras and offers 12 workspaces for technicians and creative staff. We've managed to maintain a high level of comfort and quality within the vehicle, creating an atmosphere similar to a fixed control room.

While an expandable design might offer more space, we found that the benefits of improved urban accessibility and faster deployment outweighed the potential advantages of extra room. This approach aligns with our goal of providing top-tier production services with maximum efficiency, even in challenging urban environments.

How does the integration of Sony's PVM-X2400 and X3200 reference monitors impact your color grading and quality control processes?

The integration of Sony's PVM-X2400 and X3200 reference monitors has significantly enhanced our color grading and quality control processes. These monitors are specifically designed for color grading and offer exceptional calibration capabilities, which is crucial for our workflow.

The PVM-X Series monitors provide consistent color matching with Sony's high-end BVM-HX Series 4K HDR Master Monitors, ensuring that we maintain color accuracy throughout our production chain. This consistency is vital when we're working across multiple environments, from on-set to in-truck monitoring.

One of the key features that impacts our color grading process is the monitors' ability to display a wide color gamut accurately throughout the entire luminance range. This allows our colorists to work with precision, especially in the darker areas of the image where subtle details are critical.

The monitors also offer various Black Detail Modes, which are particularly useful when we're working on fashion shows or concerts with challenging lighting conditions. These modes allow us to inspect shadow details without affecting the overall gamma, ensuring we maintain correct colors and grayscale.

In short, these Sony reference monitors have become an integral part of our workflow, allowing us to color grade with maximum precision and quality. Their accuracy and consistency ensure that what we see in our OB van translates faithfully to the final output, maintaining the high standards our clients expect from PhotoCineLive.

What challenges did you face during the implementation process with OBTruck.tv, and how were they overcome?

The implementation process with OBTech.tv presented a few challenges, but their open communication style greatly facilitated our collaboration. The primary hurdle we faced was the geographical distance between our team in Paris and OBTech.tv's location in Denmark. To overcome this, we relied heavily on video conferencing technology, which allowed us to maintain regular and detailed communication throughout the project.

Another significant challenge was the need to maximize capabilities within the compact space of our non-expandable OB van. This required meticulous planning and careful equipment selection. We worked closely with OBTech.tv to design a layout that could accommodate up to 20 cameras and 12 workspaces while maintaining a comfortable working environment.

To address this space constraint, we opted for highly efficient equipment like the Sony XVS-G1 switcher, which offers 24 inputs in just 4U of rack space. This choice exemplifies our approach to equipment selection - prioritizing powerful, compact solutions that don't compromise on quality or functionality.

Throughout the process, OBTech.tv's flexibility and willingness to adapt to our specific needs were crucial. Their expertise in custom OB van design complemented our vision for a compact yet highperformance mobile production unit.

By leveraging technology for communication, carefully selecting space-efficient equipment, and maintaining open dialogue with OBTech.tv, we were able to overcome the challenges and create an OB van that perfectly suits our unique production requirements.

How does this new OB van facilitate remote production workflows, and what benefits have you seen so far?

While our new OB van is equipped with cutting-edge technology that could potentially support remote production workflows, we haven't yet fully explored this capability. Currently, our focus remains on our core expertise of multi-camera "live-to-tape" productions, particularly for fashion shows and concerts.

That said, the compact and versatile nature of our new OB van does offer increased flexibility in terms of location accessibility, which indirectly supports a more distributed production model.

The van's ability to fit into standard delivery truck parking spots allows us to get closer to event venues, potentially reducing the need for extensive on-site crew.

Our innovative workflow, which includes live grading from log signals and the ability to make fast, high-resolution edits, aligns with some aspects of the remote production philosophy that emphasizes efficiency and quick turnaround. This capability allows us to deliver high-quality content to clients almost immediately, which could be beneficial in a remote or distributed production scenario.

Looking ahead, we're open to exploring remote production workflows as the industry evolves. For now, however, our primary focus is on maximizing the capabilities of our new OB van for our current production model, which emphasizes high-end, cinematic quality for live events.

Can you discuss any specific features or customizations in this OB van that cater to fashion shows or theater productions?

One of the key customizations in our new OB van is our innovative workflow that incorporates live grading from log signals. This feature is particularly beneficial for fashion shows and theater productions, where visual quality and quick turnaround are paramount.

Our system allows us to record full-resolution log footage directly in the camera while simultaneously providing a lighter record of isolated feeds. These isolated feeds can be delivered with or without applied LUTs (Look-Up Tables), giving us flexibility in postproduction. Besides, we generate an EDL (Edit Decision List) of the live switching. This combination of high-quality log footage, isolated feeds, and live switching data enables us to offer a unique service to our clients: the ability to export high-resolution edits of the program within minutes of the show's conclusion.

This workflow is especially valuable in the fast-paced world of fashion shows, where designers and brands often need immediate access to high-quality footage for social media and press releases. For theater productions, it allows for quick review and potential re-edits of performances.

The system leverages the capabilities of our Sony VENICE 2 cameras, which output log signals in UHD, ensuring we maintain the highest possible image quality throughout the process. This approach allows us to preserve the cinematic look we're known for, even in a live production environment.

By combining this custom workflow with our compact, urban-friendly OB van design, we're able to deliver high-end, cinema-quality productions in challenging locations, meeting the unique demands of fashion and theater events.

With the capability to handle up to 20 cameras, how do you manage and optimize the workflow for large-scale productions?

Our new OB van has been designed from the ground up to efficiently manage productions with up to 20 cameras, so handling 10 or 15 cameras is well within our comfort zone.

The key to optimizing our workflow for these large-scale productions lies in our strategic use of technology and our streamlined setup process.

One of the most significant upgrades in our workflow is the implementation of tactical multi-fiber cables. This approach allows us to drastically reduce the number of cable runs required for our setup. Instead of running multiple individual cables for each camera, we can now run a single multi-fiber cable that carries signals for multiple cameras. This not only simplifies our setup process but also reduces potential points of failure and makes troubleshooting more efficient. The use of these tactical multi-fiber cables means we only need to run the longest cable paths once or twice, rather than for each individual camera. This is particularly beneficial in complex venue layouts or when we need to cover large distances.

Additionally, our compact yet powerful Sony XVS-G1 switcher, with its 24 inputs in just 4U of rack space, allows us to manage all these camera feeds efficiently within our limited space. The switcher's advanced routing capabilities and multiviewer options help our team monitor and control all camera feeds effectively.

Our workflow is further optimized by our innovative approach to live grading from log signals, which allows us to maintain high image quality while providing quick turnaround times for our clients.

Looking ahead, what emerging technologies are you most excited about for future integration into your OB vans?

While we're very satisfied with our current compact OB van solution, we're always looking ahead to future possibilities that could enhance our production capabilities.

One concept we're particularly excited about is the potential to apply our current technological approach to a slightly larger, expanding semi-trailer format.

This evolution would allow us to maintain the core benefits of our current setup - such as our innovative workflow with live grading from log signals and our efficient use of space - while providing additional room for emerging technologies and expanded production capabilities. An expanding semi-trailer could offer us the best of both worlds: the ability to access urban locations that full-size OB trucks can't reach, combined with the extra space of an expandable unit when needed. This could potentially allow us to incorporate more advanced audio solutions, expanded graphics capabilities, or even integrate emerging technologies like augmented reality or AI-assisted production tools.

However, it's important to note that any future developments will still prioritize our commitment to delivering cinematic quality in live productions. We're not looking to expand just for the sake of size, but rather to find ways to enhance our unique approach to live event coverage.

As always, we'll be closely monitoring industry trends and client needs to ensure that any future upgrades or expansions align with our mission of providing high-end, efficient, and flexible production services.

How do you balance the need for cutting-edge technology with the practicalities of day-to-day operations in various shooting environments?

Our approach to balancing cutting-edge technology with practical operations is centered on prioritizing quality, particularly in our camera and lens choices. We've found that investing in high-end imaging equipment pays dividends in our day-to-day operations across various shooting environments.

In essence, we balance technology and practicality by investing in versatile, high-quality equipment that enhances our capabilities across various shooting environments. This strategy allows us to maintain our high standards while adapting to the diverse challenges presented by different production scenarios.

Based on your experience with this new OB van, what do you envision as the next big step in live production technology for the broadcast industry?

Based on our experience with our new 4K Ultra HD OB van and the evolving demands of the industry, I believe the next significant step in live production technology will be the widespread adoption of higher-quality streaming capabilities.

Ultimately, the goal is to bridge the gap between the quality we can achieve in our OB van and what the end viewer experiences, whether they're watching a live stream or a broadcast.

As we continue to push the boundaries of what's possible in live production, I believe better quality streaming will be key to sharing these advancements with a wider audience.

How AI is transforming sports broadcasting

Artificial Intelligence is revolutionizing sports broadcasting, offering unprecedented opportunities for content creation, audience engagement, and operational efficiency. At the Paris 2024 Olympic Games, AI technologies played a pivotal role in shaping the viewing experience for millions of fans worldwide.

To gain insights into the latest AI innovations in sports production, we spoke with Rubén Nicolás-Sans, a leading expert in artificial intelligence applications for the broadcast industry.

Our AI expert guest shared his perspectives on the most groundbreaking AI technologies deployed at the Olympics, from automated content creation to personalized viewer experiences.

With this exclusive interview, TM Broadcast International explores the transformative impact of AI on sports broadcasting, delving into topics such as real-time data analysis, adaptive fixed cameras, and the ethical considerations surrounding AI implementation. Nicolás-Sans also offers valuable advice for audiovisual professionals looking to stay ahead of the curve in this rapidly evolving landscape.

As an expert in AI, what do you consider to be the most innovative AI applications used in the audiovisual production of the Paris 2024 Olympic Games?

The most innovative AI applications in the audiovisual production of the Paris 2024 Olympic Games include several revolutionary aspects. First, the automation of content creation has become essential, using algorithms that analyze images and data in real time to generate visual narratives without direct operator intervention. In addition, the personalization of the viewer experience is another highlight, where AI adapts the content offered to individual user preferences and optimizes their interaction with the events. The implementation of augmented and virtual reality has improved immersion, allowing viewers to experience events in a more dynamic and engaging way. Adaptive cameras that automatically follow athletes are also being used to improve coverage by accurately capturing key moments without the need for a traditional production crew.

The use of sport-specific adaptable fixed cameras with operator-free streaming has been mentioned. What advantages and challenges does this technology present for Olympic coverage?

Adaptable fixed cameras offer several significant advantages for Olympic coverage. These cameras use advanced tracking and recognition technology to autonomously follow the movements of athletes. This not only reduces operational costs by eliminating the need for a human operator for each camera, but also ensures consistent and continuous image quality because these cameras are designed to automatically adapt to different sports and situations. However, they also present significant challenges. The technology depends on the accuracy of the software, which must be able to track the athletes adequately without losing focus. Any technical failure could affect the quality of the real-time broadcast, which is critical for an event as important as the Olympics. In addition, the integration of these cameras into different sports may require specific adjustments to optimize their performance.
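To make the tracking idea concrete, here is a minimal Python sketch of the kind of follow-the-athlete loop such cameras run, using OpenCV's off-the-shelf CSRT tracker. It is an illustration only: the feed file and the pan_tilt_toward() camera-control function are hypothetical stand-ins, not any vendor's actual API.

```python
import cv2

def pan_tilt_toward(cx, cy, frame_w, frame_h):
    # Hypothetical PTZ control: turn the offset from frame centre
    # into normalized pan/tilt corrections for the camera head.
    pan_error = (cx - frame_w / 2) / frame_w
    tilt_error = (cy - frame_h / 2) / frame_h
    print(f"pan {pan_error:+.2f}, tilt {tilt_error:+.2f}")

cap = cv2.VideoCapture("athletics_feed.mp4")   # assumed sample feed
ok, frame = cap.read()
bbox = cv2.selectROI("init", frame)            # seed with the athlete's box
tracker = cv2.TrackerCSRT_create()             # needs opencv-contrib-python
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, (x, y, w, h) = tracker.update(frame)
    if found:
        pan_tilt_toward(x + w / 2, y + h / 2, frame.shape[1], frame.shape[0])
    else:
        # Tracking loss is the failure mode discussed above; a production
        # system would fall back to a re-detection pass here.
        print("target lost; re-detect")
```

The interesting engineering sits in that fallback branch: as Nicolás-Sans notes, a dropped track during a live broadcast is the costliest failure.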

How is AI transforming the process of creating highlights and summaries of sports events in real-time?

Artificial intelligence is radically changing the way highlights and recaps of sports events are produced in real time. Using video analytics algorithms, AI can automatically identify the most exciting and relevant moments of an event, such as goals, records or dramatic moments, in a matter of seconds. This not only speeds up the content production process, but also allows viewers to receive instant and personalized summaries, which is especially valuable in an environment where audience attention is limited. Moreover, this real-time analytics capability enables media organizations to deliver relevant content to viewers at the right time, enhancing the overall viewer experience and keeping them engaged in the event.
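As a toy illustration of that selection step, the sketch below scores fixed-length windows of a match by an "excitement" signal and keeps the top clips. The signals and weights are invented for the example; a real system would learn them from data.

```python
from dataclasses import dataclass

@dataclass
class Window:
    start_s: float
    end_s: float
    crowd_energy: float   # assumed 0..1 signal from an audio model
    event_detected: bool  # assumed flag from a vision model (goal, record...)

def excitement(w: Window) -> float:
    # Naive fusion rule, purely for illustration.
    return w.crowd_energy + (1.0 if w.event_detected else 0.0)

def top_highlights(windows: list[Window], k: int = 3) -> list[Window]:
    # Keep the k highest-scoring windows as highlight candidates.
    return sorted(windows, key=excitement, reverse=True)[:k]

feed = [Window(0, 30, 0.2, False), Window(30, 60, 0.9, True),
        Window(60, 90, 0.4, False), Window(90, 120, 0.8, True)]
for clip in top_highlights(feed, k=2):
    print(f"highlight: {clip.start_s:.0f}s-{clip.end_s:.0f}s")
```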

Intel has announced AI solutions to create immersive and interactive experiences for viewers. How do these technologies work, what is the role of AI in these applications, and how do they impact the user experience?

The AI solutions presented aim to create immersive and interactive experiences for viewers during the Olympic Games. These technologies work by using advanced algorithms that process data in real time, allowing users to interact with the event in new ways. For example, viewers can receive real-time information about athletes, statistics and performance data, allowing them to gain a deeper understanding of the event. AI also plays a critical role in augmented reality, where data and visual information can be overlaid on the live broadcast. This approach not only enriches the viewer’s experience, but also fosters a deeper connection to the event, as users can customize their interaction based on their interests and preferences, making each experience unique.

What role did AI play in improving accessibility for people with visual/hearing disabilities during the Olympic broadcast?

In terms of accessibility, artificial intelligence has had a significant positive impact on people with visual and hearing disabilities during the broadcast of the Olympic Games. AI enables the automatic generation of captions and audio descriptions that enrich the experience for those who rely on these tools. For example, speech recognition technology can transcribe and translate dialogue and commentary in real time, while computer vision systems can create detailed descriptions of what’s happening on screen. This not only improves inclusion for people with disabilities, but also sets a standard for future audiovisual productions where accessibility will be a central consideration in content design.
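The captioning side of this is easy to sketch. Below, a few lines of Python use the open-source Whisper model, one of several ASR options, to turn broadcast audio into timestamped caption lines; live systems work on a rolling buffer rather than a finished file, but the principle is the same. The input filename is a placeholder.

```python
import whisper  # openai-whisper, one of several open ASR options

model = whisper.load_model("base")
result = model.transcribe("broadcast_audio.wav")  # placeholder input file

# Each segment carries timings, ready to be formatted as captions.
for seg in result["segments"]:
    print(f"[{seg['start']:6.1f}s -> {seg['end']:6.1f}s] {seg['text'].strip()}")
```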

What do you think about the use of neural object cloning to transform video from Olympic collections into 3D digital artifacts? What are the implications for the preservation of the Olympic legacy?

Neural object cloning to transform videos from Olympic collections into 3D digital artifacts is a fascinating innovation with important implications for preserving the Olympic legacy. This process uses advanced AI techniques to recreate historic objects and scenes, making collections accessible in a new and immersive way. By creating high-quality 3D models, viewers can have an interactive experience where they can explore iconic moments and objects from Olympic history. This not only helps preserve the legacy, but also provides a way to educate future generations about the history of the Olympic Games, providing deeper and more meaningful access to sports culture and heritage.

How did AI help overcome language barriers in international coverage of the Olympics?

AI has been instrumental in breaking down language barriers for international coverage of the Olympic Games. Through automatic translation tools and natural language processing, it’s possible to provide content in multiple languages in real time, allowing a global audience to access information and enjoy the events regardless of their native language. This capability not only improves content accessibility, but also encourages greater participation from international audiences, ensuring that viewers from different regions of the world can enjoy the Olympic experience on their own terms.

In terms of automation in data processing, what significant advances have been made in sports audiovisual production thanks to AI?

When it comes to automating data processing, artificial intelligence has enabled significant advances in sports audiovisual production. Thanks to algorithms that analyse large amounts of data in real time, production teams can generate detailed statistics and informative visualizations that enrich the narrative of the event. This not only improves the quality of the content on offer, but also provides commentators and analysts with valuable information that can be used to enrich live discussions, making the broadcast more informative and engaging for viewers.
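As a small illustration of that data-to-graphics step, the sketch below turns a raw event log into per-athlete summary statistics of the kind a commentary overlay displays. The column names and figures are invented for the example.

```python
import pandas as pd

# Invented event log; a real feed would stream in timing and results data.
events = pd.DataFrame({
    "athlete": ["Duplantis", "Duplantis", "Kipyegon", "Kipyegon"],
    "metric":  ["height_m", "height_m", "lap_s", "lap_s"],
    "value":   [6.00, 6.10, 61.2, 60.4],
})

# Per-athlete aggregates, ready to hand to a graphics template.
# (Which aggregate counts as "best" depends on the metric.)
summary = events.groupby(["athlete", "metric"])["value"].agg(["min", "max", "mean"])
print(summary)
```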

What role did AI play in personalizing content for different audiences during the Olympic broadcast?

The personalization of content for different audiences during the broadcast of the Olympic Games was greatly facilitated by artificial intelligence. By analysing user data, AI can identify viewer preferences and behaviours, allowing broadcast platforms to deliver content tailored to each individual’s specific interests. This can include the selection of highlighted events, specific sports analysis, and the delivery of content in formats that better resonate with diverse audiences, thereby increasing viewer engagement and satisfaction.

How is AI changing the work of sports commentators and analysts during live broadcasts?

Artificial intelligence is transforming the work of sports commentators and analysts during live broadcasts by providing real-time data and predictive analytics. These tools enable commentators to offer deeper and more contextual insights into events as they unfold, enriching the viewer experience. AI can identify patterns and trends in athletes’ performance, providing a solid foundation for on-air discussion and transforming the way sports events are told.

What are the ethical challenges of using AI in sports audiovisual production, especially for an event the size of the Olympic Games?

The use of artificial intelligence in sports audiovisual production, especially in an event of the magnitude of the Olympic Games, raises several ethical challenges. One of the most important is the privacy and security of viewers’ personal data. As more data is collected and analyzed to personalize the experience, it’s critical to ensure that it’s handled responsibly. There’s also the risk of algorithmic bias, where AI systems could reinforce pre-existing stereotypes or preferences, which could affect the representation of certain athletes or sports. It’s also important to consider accountability and transparency in the use of AI technology, ensuring that viewers understand how this information is being used and how it will impact their experience.

How do you see the future of augmented reality and AI integration in sports broadcasts?

The future of augmented reality and AI integration in sports broadcasting is exciting. As these technologies continue to evolve, we’re likely to see richer and more interactive experiences that change the way viewers engage with events. Augmented reality can provide additional layers of information, while artificial intelligence will enable even deeper personalization, making each broadcast unique for each viewer. The combination of these two technologies has the potential to fundamentally change the sports experience, creating more immersive and engaging environments.

How is AI changing sponsorship and advertising strategies in the context of sports audiovisual production?

Artificial intelligence is transforming sponsorship and advertising strategies in sports audiovisual production by enabling more precise targeting and personalization of messages. Through data analytics, brands can identify audience behaviour patterns and preferences, allowing them to create advertising campaigns that better resonate with viewers. This not only increases campaign effectiveness, but also improves the relevance of ads during broadcasts, allowing sponsors to connect with their target audience in a more meaningful way.

Finally, what advice would you give to audiovisual professionals to keep up with the rapid advances in AI applied to sports production?

My advice to audiovisual professionals is to stay abreast of the rapid advances in artificial intelligence as it applies to sports production. This includes attending training and conferences on new technologies, collaborating with experts in AI and data analysis, and being open to experimenting with new tools and methods. It’s also important to consider ethics and responsibility in the use of AI, ensuring that innovations are used in ways that benefit both viewers and industry professionals. Remaining agile and adaptable in this ever-changing environment will be crucial to making the most of the opportunities that AI offers in sports.

Breaking barriers: a solution for multilingual sports commentary

In an era where global reach is paramount, CAMB.AI emerges as a game-changer in the world of sports broadcasting. Co-founded by Akshat Prakash and his father, this innovative company is pushing the boundaries of real-time, multilingual content delivery. With its proprietary AI models MARS and BOLI, CAMB.AI is transforming how sports events are experienced worldwide, breaking down language barriers and opening up new markets for broadcasters and leagues alike.

Akshat Prakash, the Chief Technology Officer of CAMB.AI, brings a wealth of experience to the table. A graduate of Carnegie Mellon University, Prakash’s journey includes stints at Apple working on Siri and founding a Techstars-backed startup. This background in AI and language models has been instrumental in developing CAMB.AI’s cutting-edge technology, which can translate and dub content into over 140 languages while preserving the original speaker’s voice and emotion.

The company’s recent collaboration with Eurovision Sport for the 2024 World Athletics U20 Championships marks a significant milestone in sports broadcasting. By providing real-time AI-generated commentary translation from and to almost any language, CAMB.AI is not just facilitating global viewership but also democratizing access to sports content. As the industry moves towards more inclusive and globally accessible broadcasting, TM Broadcast brings its readers this interview with the CTO of a company that stands at the industry forefront, ready to meet the growing demand for multilingual, simultaneous broadcasts.

Can you explain the core technology behind CAMB.AI’s real-time translation and dubbing system?

At its heart, CAMB.AI is a speech-to-speech translation system. This means we can take any audio or video input, whether it’s pre-recorded or live, in any language, and transform it into over 140 different languages.

The significance of this technology is immense, particularly for industries like sports broadcasting, movie dubbing, content creation, education, and healthcare. We’re essentially enabling people to communicate and distribute their content globally, breaking down language barriers. Our core technology is powered by two proprietary models that work in tandem. The first is our translation model called BOLI, which handles the linguistic conversion. The second is our speech model called MARS, which manages voice synthesis and preservation of original vocal characteristics. MARS is particularly innovative. It’s able to emulate voices with a high level of prosody and realism from just a few seconds of audio input. This allows us to capture the emotion, performance, and meaning in the original speech.

Together, these models allow us to take speech input in one language and output speech in a different language while maintaining the original speaker’s voice, tone, and emotion. This is crucial for applications like live sports commentary, where preserving the excitement and nuances of the original broadcast is key.

We’ve recently open-sourced the English language version of MARS5, our latest iteration, to encourage further development and research in this field. Our technology has already been used for groundbreaking projects, such as the first AI-dubbed movie and the first live-streamed multilingual broadcast of a Major League Soccer game.
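MARS and BOLI are proprietary, so the Python sketch below only illustrates the general speech-to-speech architecture Prakash describes: recognize, translate, then synthesize in the original voice. Every function body here is a hypothetical toy stand-in, not CAMB.AI code.

```python
def recognize(audio_chunk: bytes) -> str:
    # Hypothetical ASR stage; a live system would run a streaming model.
    return "what a finish in lane four"

def translate(text: str, target_lang: str) -> str:
    # Hypothetical translation stage (the role BOLI plays in the pipeline).
    return f"[{target_lang}] {text}"

def synthesize(text: str, voice_profile: bytes) -> bytes:
    # Hypothetical voice-cloned TTS (the role MARS plays): conditioning on
    # a few seconds of the original speaker is what preserves tone/emotion.
    return text.encode("utf-8")

def dub_chunk(audio_chunk: bytes, voice_profile: bytes, lang: str) -> bytes:
    # The full speech-to-speech path: speech -> text -> text -> speech.
    return synthesize(translate(recognize(audio_chunk), lang), voice_profile)

print(dub_chunk(b"raw-audio", b"speaker-sample", "pt-PT"))
```

The key design point the answer makes is in the last stage: synthesis is conditioned on the original speaker's voice, which is why excitement and nuance survive the language change.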

What specific AI models or techniques does CAMB.AI employ for live sports commentary translation?

At CAMB.AI, we utilize two proprietary AI models for live sports commentary translation: MARS and BOLI. MARS is our speech model, while BOLI handles translation. These models work in tandem to perform real-time speech-to-speech translation. We recently open-sourced the English version of MARS, which has gained significant traction in the AI community. It trended as the number five model on the Hugging Face leaderboard, one of the most popular platforms for open-source AI models.

Handling the rapid pace and unique vocabulary of sports commentary presents unique challenges. We’ve addressed this through extensive fine-tuning of our models on sports-specific data. This process has enabled our AI to effectively account for sports lingo, nuances, and the fast-paced nature of live commentary. Our approach to data training has been crucial. In cases where we’ve been granted permission, we’ve fine-tuned our models on actual sports commentary data. This has significantly enhanced our ability to capture the specific cadence and terminology used in sports broadcasting.

The combination of our proprietary technology and specialized training has allowed us to excel in the sports domain. We’re able to deliver high-quality, real-time translations that maintain the excitement and accuracy of live sports commentary, setting us apart from other companies in this space.

What were the main technical challenges in implementing real-time translations for the World Athletics U-20 Championship?

Implementing real-time translations for the World Athletics U-20 Championship didn’t present any specific technical challenges for us. We had already pioneered this technology earlier this year with Major League Soccer, where we performed the first-ever live stream of a sporting event with real-time AI translation. That happened in April, and it was truly groundbreaking. By the time we got to the World Athletics event, we had refined our process through private collaborations with various broadcasters and leagues. Eurovision Sport recognized the potential of our technology and decided to make it public, which was a great step forward for us.

Operationally, it’s a straightforward process. We familiarize ourselves with the client’s team, explain how our platform works, conduct a few trial runs, and then go live. It’s not particularly complex. We’ve actually streamlined this process even further with our new product, DubStream. It’s a unique interface that allows streaming companies or managers to simply input their streaming link and output it in multiple languages. It’s as user-friendly as using any online interface. So, while the technology behind it is sophisticated, the implementation is designed to be simple and efficient for our clients.

Is there any specific language that’s more difficult to work with? How did you ensure the accuracy and naturalness of the AI-generated commentary in every language?

One of the great advantages of building our technology from the ground up is our ability to handle a wide range of languages, including those with very limited resources. We’re not just focusing on the most widely spoken languages - we’re working with languages that may only have 50 to 100 speakers. For example, we recently partnered with Seeing Red Media to work on indigenous languages in North America, specifically in Canada.

Our approach is flexible. As long as stakeholders can provide data for a particular language they want to scale, our models can fine-tune and learn it. While many companies might focus on the top 20 or 30 most spoken languages, we cover close to 150 languages, including various locales. This attention to locale is crucial.

The ability to capture these subtle differences in language and accent is something very few companies in the world can do. It’s not just about translating Portuguese - it’s about getting the exact regional accent right. This capability is a testament to the sophistication of our AI models and our commitment to authentic, localized content.

What measures were taken to minimize latency in the live translation process?

Currently, our system’s latency ranges from a few seconds to about a minute, which is quite fast for real-time translation. We’ve also proven its robustness; for instance, our stream with Eurovision ran uninterrupted for 40 to 50 hours over a 4-5 day period.

The latency can be adjusted based on customer preferences. If they prioritize higher quality translations, it might take a bit longer. There’s always a trade-off between quality and speed. However, even with higher quality settings, we’re still operating in near real-time. We’ve designed our system to be flexible. Customers can choose the balance between latency and quality that best suits their needs. Some might prefer the fastest possible turnaround, while others might be willing to accept a slightly longer delay for more accurate translations.

It’s worth noting that even with the longer end of our latency range, the translation is still considered “live” by industry standards. We’re constantly working on optimizing our processes to reduce latency while maintaining high-quality output.

Our goal is to provide a solution that’s both fast and accurate, giving our clients the ability to broadcast multilingual content with minimal delay. As we continue to refine our technology, we expect to see further improvements in both speed and quality.
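The latency/quality dial described here can be pictured as a simple configuration object. The sketch below is only an assumption about how such a trade-off might be exposed; the field names and numbers are invented, not CAMB.AI settings.

```python
from dataclasses import dataclass

@dataclass
class StreamProfile:
    buffer_seconds: float  # audio context gathered before translating
    passes: int            # refinement passes per chunk

# Buffering more context generally buys quality at the cost of delay.
FASTEST = StreamProfile(buffer_seconds=2, passes=1)
BALANCED = StreamProfile(buffer_seconds=10, passes=2)
HIGHEST_QUALITY = StreamProfile(buffer_seconds=45, passes=3)  # still "live"
```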

As we can understand from your words, CAMB.AI doesn’t need to be integrated into the existing infrastructure, does it?

That’s correct, CAMB.AI doesn’t require integration into existing infrastructure. We’ve designed our technology to be highly flexible and user-friendly. Broadcasters and leagues can simply input their original stream into our interface and output it as an additional stream in the desired language.

The setup process is remarkably straightforward. We typically spend just a few hours showing clients how to use the system. They can then test it for a day or two to get comfortable with the interface. After that, they’re ready to go live.

This approach makes our solution incredibly scalable. We don’t need to spend time integrating with each new broadcaster or league individually. The flexibility of our technology allows for quick adoption across various platforms and systems.

Currently, we’re working with numerous sports leagues, teams, and broadcasters. They’ve found our system to be compatible with their existing setups, which has contributed to our rapid growth in the industry. The ease of use and minimal setup time have been key factors in why many organizations are choosing to work with us for their multilingual broadcasting needs.

What improvements or refinements are planned for future iterations of this technology?

Now, we’re streaming 8 to 10 hours of content daily with a major provider, and our technology is continuously improving. This enhancement comes not only from AI’s inherent ability to learn over time but also from our team’s growing expertise in sports broadcasting and live streaming operations.

What we’re seeing now is just a glimpse of the future. Our AI models are becoming more sophisticated, allowing us to handle more complex translation scenarios and improve the overall quality of our output.

We’re also refining our operational processes to make the integration of our technology even smoother for broadcasters.

I anticipate that within the next six months, we’ll see widespread adoption of AI translation solutions like CAMB.AI among sports leagues and broadcasters. The ability to offer viewership in multiple languages at scale is becoming increasingly crucial in the global sports market. As more organizations recognize the potential to expand their audience reach, we expect to see a significant shift towards multilingual broadcasting.

We’re also focusing on reducing latency further and expanding our language offerings. Our recent partnership with Eurovision Sport for the World Athletics U20 Championships is a testament to the growing interest in our technology. As we continue to refine our models and processes, we’re excited about the potential to transform how sports content is consumed globally.

So you expect growth in the near future.

Absolutely, we’re experiencing significant growth already, which is very encouraging.

At this point, the pace of adoption really depends on the industry. The sports broadcasters, leagues, and teams that have already partnered with us are gaining a substantial advantage over those still contemplating this technology.

I’d say that if you’re not already exploring or implementing AI translation solutions like ours, you’re likely falling behind competitors who are. This technology is rapidly becoming an industry standard in global broadcasting. It’s a crucial time to at least invest in understanding these advancements, even if you’re not ready for full implementation.

We’re seeing this trend across various sectors. For instance, we recently worked with Eurovision Sport to provide real-time AI commentary translation for the World Athletics U20 Championships. This kind of application is just the beginning. Our technology is not just about translation - it’s about breaking down language barriers and expanding global reach. Whether it’s live sports commentary, film dubbing, or content creation, the ability to instantly deliver content in multiple languages while preserving the original voice and emotion is transformative.

The industry is moving fast, and those who adopt early will have a significant edge in reaching global audiences. It’s an exciting time in broadcasting, and we’re thrilled to be at the forefront of this technological revolution.

Does this technology support multiple languages simultaneously?

We’re already providing simultaneous translations in multiple languages. Our current system can handle up to 10 languages in parallel without any issues. This capability is particularly valuable for live sports broadcasting and other real-time content where reaching a diverse, multilingual audience is crucial. For example, in our work with Major League Soccer, we’ve demonstrated the ability to provide live commentary in multiple languages simultaneously. This isn’t just about translating words - our AI models preserve the excitement and nuances of the original commentary across all languages.

We’re continuously working to expand this capability. While 10 languages is our current standard offering, we’re pushing the boundaries of what’s possible in terms of simultaneous translations. Our goal is to make content accessible to as many people as possible, breaking down language barriers in real-time broadcasting.
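Concurrency is the natural way to picture "up to 10 languages in parallel". The asyncio sketch below fans one source chunk out to several target languages at once; dub() is a hypothetical stand-in for a per-language dubbing pipeline, not CAMB.AI's implementation.

```python
import asyncio

async def dub(chunk: bytes, lang: str) -> tuple[str, bytes]:
    # Hypothetical per-language pipeline; the sleep stands in for inference.
    await asyncio.sleep(0.1)
    return lang, chunk  # a real pipeline would return dubbed audio

async def fan_out(chunk: bytes, langs: list[str]) -> None:
    # All languages are processed concurrently, not one after another.
    results = await asyncio.gather(*(dub(chunk, lang) for lang in langs))
    for lang, audio in results:
        print(f"{lang}: {len(audio)} bytes ready")

langs = ["es", "fr", "de", "pt-PT", "pt-BR", "ja", "hi", "ar", "it", "pl"]
asyncio.run(fan_out(b"source-audio-chunk", langs))
```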

Are you planning to scale this technology up in the future?

We’re constantly looking to expand and enhance our technology. Currently, our system is capable of handling more languages and streams than what’s typically requested by broadcasters or leagues. However, we’re always preparing for larger-scale events.

For instance, consider an event like the Olympic Games, where multiple sports are happening simultaneously, each potentially requiring commentary in different languages. Our technology is already capable of handling such complex, multi-language distributions. We’ve designed our system to be highly scalable, anticipating the needs of global, multifaceted sporting events.

In fact, we’ve already demonstrated this capability in various contexts. Our recent collaboration with Eurovision Sport for the World Athletics U20 Championships showcased our ability to provide real-time AI-generated commentary translation. We’re not just planning for the future - we’re actively building and implementing solutions that can meet the most demanding multilingual broadcasting needs of today and tomorrow.

Our goal is to stay ahead of the curve, ensuring that as the demand for multilingual, simultaneous broadcasts grows, our technology is ready to meet and exceed those requirements.

And what potential impact do you foresee this technology having on the future of sports broadcasting?

I believe our technology has the potential to significantly democratize access to sports content and inspiration worldwide. Sports play a crucial role in our culture, inspiring people to strive for excellence and be the best versions of themselves. In a world that sometimes seems to be growing complacent, sports continue to remind us of the importance of competition, hard work, and self-improvement.

With CAMB.AI’s technology, we can spread this powerful message of sports to a much wider audience, breaking down language barriers. Consider the impact of a sports icon like Serena Williams. Currently, her inspirational performances might only reach those who understand English or have access to high-quality broadcasts. But what about that young girl in a remote village, with just a basic phone, who doesn’t speak English? Our technology can bridge that gap, allowing that child to hear Serena’s words in her own language, potentially igniting a passion for sports or personal growth.

Unfortunately, in today’s world, those with more resources often get even more access to information and inspiration, while those with less continue to be left behind. At CAMB.AI, we’re working to level this playing field. By making sports content accessible in over 140 languages, we’re ensuring that inspiration and motivation from sports can reach anyone, anywhere, regardless of their linguistic background or economic status.

This aligns with our work with major sports organizations like Major League Soccer, Tennis Australia, and Eurovision Sport. We’re not just translating words; we’re transferring the emotion, excitement, and inspirational power of sports across language barriers. Our goal is to make sure that the next generation of sports stars can emerge from anywhere in the world, inspired by the greats of today, no matter what language they speak.

Can you share any metrics or feedback from viewers who experienced the AI-translated commentary?

The feedback we’ve received has been incredibly positive and validating. One particularly noteworthy example comes from our work with Eurovision Sport. During a recent stream, one of Portugal’s top sports commentators provided feedback on our AI-generated commentary. What struck him - and us - was the precision of the accent. He specifically noted that the AI voice had a Portuguese accent from Portugal, not a Brazilian Portuguese accent.

This level of nuance is exactly what we’re aiming for at CAMB.AI. It’s not just about translating words; it’s about capturing the subtle differences in dialect and accent that make language feel authentic and local. This kind of accuracy is crucial in sports broadcasting, where the energy and flavor of the commentary are so important to the viewer experience.

We’re particularly proud of this feedback because it demonstrates the sophistication of our MARS model. The ability to distinguish between different regional accents within the same language is a significant technical achievement. It’s this kind of precision that sets us apart in the field of AI-powered translation and dubbing.

While we’re continually gathering more quantitative metrics, this kind of qualitative feedback from industry professionals is invaluable. It shows that we’re not just meeting the basic requirements of translation, but we’re actually delivering a product that can stand up to the scrutiny of top professionals in the field.

Videos

Dubstream demo | https://vimeo.com/990426631/79807f066b?ts=0&share=copy

General promotional sizzler | https://vimeo.com/954554847?share=copy#t=0

Despite his youth, Stuart Campbell is already an accomplished Director of Photography who has made significant contributions to the world of cinematography through his work on notable projects such as Special Ops: Lioness, The Handmaid’s Tale, and Mayor of Kingstown. With a unique background in advertising, Campbell transitioned to cinematography after a decade-long career as an art director, bringing a fresh perspective to visual storytelling. His versatility and adaptability have allowed him to excel in various genres, from high-profile television series to award-winning documentary films like Caribou Legs.

In this exclusive interview with TM Broadcast International Magazine, Campbell shares insights into his creative process, technical preferences, and collaborative approach with directors. He discusses his go-to camera setups, lens choices, and lighting techniques, offering a glimpse into the mind of a modern cinematographer. Campbell also reflects on the evolving role of the DoP in today’s digital filmmaking landscape and provides valuable advice for aspiring cinematographers on balancing technical knowledge with creative vision.

How did you first get into cinematography, and what inspired you to pursue it as a career?

Cinematography was never something I ever considered doing growing up, even though I was around cameras a fair bit. It wasn’t until I decided to get out of advertising that I decided it was worth pursuing.

I worked as an art director in creative advertising agencies for about 10 years in Toronto and had a 4-year stint at one agency called John St., which was a very creative and commercial-heavy agency. I was constantly on set working with directors and cinematographers and just started to see some similarities in what a cinematographer did and what I was doing, minus the technical aspect. When I felt like I hit my ceiling in advertising, I decided to make a go at it, kind of knowing I had advertising to fall back on. But luckily, it worked out and I never had to go back.

You’ve worked on notable projects like Special Ops: Lioness and The Handmaid’s Tale, but your work also extends to documentary films such as ‘Caribou Legs’, which won the 2017 Robert Brooks and Canadian Society of Cinematographers awards. How do you approach the visual style for different genres and narratives?

I think an important part of the job is to be malleable and versatile, being able to work on different kinds of projects that may have different styles. I try to follow great ideas or concepts and those aren’t ever specific to a genre. At the start of a project, I’ll sit with the director, and we’ll talk about it and what they had in mind and how they see the style, visuals and vibe of the project. Normally there’s a lot of talk about visual references, feelings, and life experiences, and ultimately, we come to a look that we feel best represents the project. I remember trying to explain to my mother what I do, and after my long explanation she said “…. so, you’re an interpreter?” and I’ve always thought that was very accurate.

What’s your go-to camera setup for most productions, and how do you decide which camera system to use for a specific project?

I’m an Arri Alexa guy. There are some instances where I need a different tool, like the Sony Venice in Rialto mode if I really need a small camera, or a Phantom camera for something super high-speed. But for the majority of the time, I’m on an Alexa. And since the Alexa 35 came out, I’ve only used that.

Can you tell us about your preferred lens choices? Do you have any favourite vintage or modern lenses for achieving particular looks?

When I first started shooting, I would try every brand of lens I could. As soon as a new lens came out or was available, I would try to get my hands on it. Spherical, anamorphic, modern, vintage, gimmick lenses… I’d try everything. But in the last few years, I’ve stopped and basically use either Arri Master Primes or Panavision Panaspeeds. The way that I like to work, I want a versatile lens that isn’t restrictive, and then I create the look with filters, along with the other parts of production like lighting, production design, wardrobe, etc. I remember someone said, “People in the 60’s saw the world as sharp as we see the world today” and I agree with that. I prefer having a lens that’s optically “clean” and then work it to suit the situation. I just know some lenses can force you to work and frame a certain way and I just don’t always like being locked into that.

In what ways has virtual production technology impacted your work, especially in projects like The Handmaid’s Tale?

At this point in my career, it hasn’t really. I’ve done some testing with virtual production but so far, the choice has always been to shoot on location or build something on stage. And to be honest, right now, I kind of prefer it. I really do like going on location because you never know what you’ll find in terms of places to shoot, framing, light... I’m not against virtual production at all and it has its place, but I do like the freedom of being able to discover something in a practical space or location.

How do you approach lighting design for different scenes, and what are some of your favourite lighting techniques?

I try to approach lighting in a way that complements the scene. Lately, I’ve had to light in a very naturalistic way. A lot of the projects that I work on are meant to feel very real and grounded. I often describe the way I light as daylight dominant. I love “natural” light through windows so I’m often using big sources outside to fill a room or a space. If there’s a window, I’m going to use it. If there isn’t a window on a set and I can put one in, I’ll put one in. For interiors, I like a lot of soft light, diffusing and bouncing light. It just feels the most real to me.

What role does color grading play in your workflow, and do you have preferred color grading software?

Colour grading plays a HUGE role in my workflow. I’m lucky enough that I don’t have to colour grade my own projects and get to work with some pretty phenomenal artists. Colour grading is such an art, when it’s done well it can really transform a project. LUTs are an important part of my workflow as well and often I’ll ask the colourist I’m working with to help create a LUT that’s appropriate for the project. Also, colour grading does help me work faster. You really need to work at a pace these days that doesn’t always allow time to finesse things. Knowing what’s possible in post really can allow you to be more efficient on set.
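For readers curious what working with a show LUT looks like outside the grading suite, here is one minimal way to apply a .cube LUT to a frame in Python with the colour-science package. The file names are placeholders, and this is a sketch of the general idea rather than any colourist's actual pipeline.

```python
import colour  # the colour-science package

# Apply the project LUT to a still, e.g. for checking dailies on set.
frame = colour.read_image("still_frame.exr")      # placeholder plate
show_lut = colour.read_LUT("project_look.cube")   # LUT from the colourist
graded = show_lut.apply(frame)
colour.write_image(graded, "still_frame_graded.tif")
```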

What do you think about the use of filters? Do you usually work with them?

I do use filters often. I like being able to play and change the look of a lens without it being permanent. On my last project, Lioness, I created a filter by combining two styles of filters from Tiffen and blending them into one. I often will stack different filters to create a unique look, but sometimes using multiple filters at once creates some unwanted flares or double reflections. Blending those filters got rid of those issues and also allowed me to add another filter on top if I wanted to while using less glass.

Can you tell us about the challenges you faced during the “Drive Back Home” production?

“Drive Back Home” was a super challenging film. It seems quite contradictory to be both a low-budget film and a period winter road trip. It was a really, really tough shoot. We had a very barebones crew shooting at the height of winter in northern Ontario. We had a process trailer for only two days, a hero truck that broke down multiple times, and zero PAs. But it’s a really great example of what can get done when you have a great story and enough talented people who believe in it and are determined to see it through. I’m still blown away by what we were able to achieve every time I watch it…

How involved are you in the location scouting process, and how does it influence your cinematography choices?

I love location scouting because often you find a place that offers so much more than you’re looking for, and that can elevate the production value of a project. I remember there’s a scene in “Drive Back Home” where we were shooting a conversation between Welden and his brother in a church. There was a giant, round, textured window at the very top of the church that would create a beautiful backlight from the sun at a certain time of day where they were sitting. We didn’t have the means to light it any other way, so we built the day so we could take advantage of that. I’m constantly using Sun Seeker to make use of the sun whenever possible – it may be the most used app on my phone. It’s a great example of a location influencing the cinematography for sure. I’m sure we would have figured out a way to light that scene somehow if that window wasn’t there, but there’s no way it would have felt as grand.
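Building a shooting day around the sun, as Campbell describes, comes down to knowing its azimuth and elevation for a given place and time, which is what an app like Sun Seeker computes. A quick Python equivalent using the astral library is below; the coordinates and date are invented for the example.

```python
from datetime import datetime, timezone
from astral import Observer
from astral.sun import azimuth, elevation

# Invented example: roughly northern Ontario on a winter afternoon.
location = Observer(latitude=46.5, longitude=-81.0)
shoot_time = datetime(2024, 2, 10, 16, 0, tzinfo=timezone.utc)

# Where the sun sits: compass bearing and height above the horizon.
print(f"azimuth:   {azimuth(location, shoot_time):.1f} deg")
print(f"elevation: {elevation(location, shoot_time):.1f} deg")
```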


With the rise of 4K and 8K production, how has this affected your approach to capturing and delivering content?

It really hasn’t, I have to say. I think there’s so much more to an image than just the amount of resolution. And I don’t know how many people are watching content in 8K, or even 4K. So much content is watched on a phone, laptop, or iPad these days. Some theatres are still projecting at 2K. But I don’t remember a time when I heard someone complain about resolution. I think there’s a perception that you need a sharper image, so you need higher resolution, but for film and narrative projects, higher resolution doesn’t necessarily equal a better image. It’s hilarious how bad something can look once you remove the motion blur…

Can you share your experience working with drones for aerial cinematography? How has this technology enhanced your storytelling capabilities?

I mean, drones are used constantly these days as a quick and easy way to bring scale to a production. The cameras you can get on a drone these days are pretty ridiculous, so it seems like a no-brainer to use them whenever it’s appropriate. Not everyone has the means to have an 80 ft lift on set or a camera helicopter, so drones get used all the time, rightfully so. Again, using “Drive Back Home” as an example, having a drone really helped tell the story of the journey to and from Toronto across Eastern Canada. It was important to see and feel the landscape and we could not have captured that any other way.

What are your thoughts on the shift towards LED lighting in film production, and how has it affected your lighting setups?

On both Mayor of Kingstown and Lioness, we use at least 3 cameras shooting at the same time 90% of the time, so lighting can be really challenging. We’re often looking 360º, so I’m using LED lighting constantly. We’re always screwing LED tubes into door frames, or putting NYX bulbs into lamps, or replacing fluorescent tubes in rooms and hallways, or even throwing little lights on car dashes at night. It really makes my job easier but also more creative. It’s also a really great way to minimize cables lying around and have light sources that are completely battery powered and remote controlled. They also really help us work more efficiently.


How do you see the role of the DoP today, and how do you believe it will evolve?

Like I said earlier, I feel like my job as a cinematographer is to interpret. And along with interpreting, being able to communicate and execute a vision. I love being hands on and operating the camera but in some cases that’s not what’s needed. At the end of the day, you need to have a great understanding of visual language and storytelling and be able to communicate in a creative way to achieve that vision. I think it’s going to evolve in terms of cameras, gear, and lighting etc., but I believe the most important part is always going to be about being able to creatively come up with interesting visual solutions to tell a story.

I think as cinematographers, we’re going to have to keep up with and be open to the new, different kinds of content, but I think individual creativity through visual storytelling will always be the most important trait to have.

For aspiring cinematographers, what advice would you give about balancing technical knowledge with creative vision in today’s digital filmmaking landscape?

Having a creative vision AND being able to apply that vision to an idea or concept, to me, will always be the most important thing to work on and develop. Any technical knowledge you have can only help you be more creative, knowing how the tools work and how to manipulate them. I know some cinematographers who don’t operate or touch cameras or know much about light fixtures – they often have a background in film to some capacity, but what makes them successful is their creative vision and ability to execute that vision. An image with a strong creative point of view will always be more interesting than a technically perfect image with no idea behind it. It’s a two-part process. I’m always pushing folks to think that way, especially when you’re getting started. How do you make 2 people talking in a room interesting? What is the point of the scene? How do you make that part sing? Yes, a slow push on 100 feet of dolly track would look very cool, but does it make sense for what they’re talking about? That to me is what sets people apart.

