TM Broadcast International #104, April 2022


EDITORIAL

The global broadcast industry is at full capacity. A sign of this is the return of a face-to-face NAB Show and the success of new production models, such as virtual production, which have surely arrived to stay.

NAB Show will finally, after a complex period marked by international difficulties, return to the traditional model. The return to face-to-face contact is the return to normality. In such an important event, contact between manufacturers, technology providers and end users will only render maximum benefits by bringing all these agents together in one single venue. It is a fact. As a result, more than 900 companies have made it in this edition to the city of Las Vegas from April 23 to 27. We have analyzed the market and, through our work, we are bringing you the list of highlights on the most representative companies that will be attending the fair.

Through innovation and development, media companies, solutions from the gaming world, events and cinematography have come together to create a new, future-proof mode of production capable of delivering sustainability, efficiency and possibilities that are almost endless. How did it happen? What benefits does this new technology offer? How does it affect traditional means of production? We have analyzed the testimony of large and small companies such as ARRI or Garden Studios and partnerships such as SMPTE. Today we are providing you with the answers to these and more questions about the present and future of virtual production.

Editor in chief
Javier de Martín
editor@tmbroadcast.com

Creative Direction
Mercedes González
mercedes.gonzalez@tmbroadcast.com

Key account manager
Susana Sampedro
ssa@tmbroadcast.com

Administration
Laura de Diego
administration@tmbroadcast.com

Editorial staff
press@tmbroadcast.com

TM Broadcast International #104 April 2022

Published in Spain
ISSN: 2659-5966

TM Broadcast International is a magazine published by Daró Media Group SL, Centro Empresarial Tartessos, Calle Pollensa 2, oficina 14, 28290 Las Rozas (Madrid), Spain. Phone: +34 91 640 46 43


SUMMARY

6  NEWS

20  NAB Show 2022: All the highlights of an industry in full capacity
Finally, one of the two world fairs hosting the most important news in the broadcast industry will be held in the traditional and most appropriate way. After a complex period in which global circumstances had made it impossible, NAB Show will fill the halls with international exhibitors and professional visitors at the Las Vegas Convention Center from April 23-27, 2022.

40  ARRI Stage London
A conversation with ARRI's Business Development Director for Global Solutions, David Levy, who shared his experience, knowledge, insights, and perspectives on virtual production.

52  Garden Studios
This company has created a virtual production facility with all the necessary technology and qualified and essential personnel to make these techniques an affordable option for any type of company.

62  SMPTE RIS On-Set Virtual Production
We spoke to Kari Grubin, Project Leader at Rapid Industry Solution (RIS) On-Set Virtual Production.

70  How virtual production and decentralized workflows are enhancing the viewer experience, by Vizrt

74  5G – from utopia to reality and beyond, by LiveU

78  Specificity versus flexibility: are they really mutually exclusive?, by Bridge Technologies

82  "How to Mojo" by Ivo Burum (Part 3)


NEWS - SUCCESS STORIES

Seven Production and Gravity Media to work together in the lead-up to Qatar 2022 World Cup

Gravity Media and Seven Production will provide numerous locally-based Outside Broadcast facilities suitable for international rights and non-rights holders. Both companies offer 4K and HD truck-based facilities, including Gravity Media's six-camera HD OB vehicle with DSNG capability. Seven will be offering its renowned team of broadcasting technical personnel and a total of four medium and large sized HD/4K Outside Broadcast Trucks, the smallest being twelve-plus cameras. All trucks have SNG capability for live transmission, along with specialist ENG, 4G and remote cameras.

The Gravity Media team can also work closely on all projects via the Gravity office in Doha, the capital of Qatar, which was launched in 2007. This hub has allowed Gravity Media to work for numerous local clients including beIN Sports, Al Jazeera Media Network, Qatar Television, Aspire Zone, Qatar National Olympic Committee, and the Supreme Committee for Delivery & Legacy.

Seven Production is an industrial pioneer in the MENA region. The house began customizing the first Outside Broadcast Trucks to cater for local sports productions and, later, was the first to introduce 4K to the region. At present, Seven has the largest independent fleet of Outside Broadcast Trucks and the latest in 4K equipment within the MENA region, and the team specializes in the customization and integration of technology according to clients' specific requirements for each live event.

"Gravity Media looks forward to the collaboration with Seven Production. Working together makes us the most powerful and creative media broadcasting partnership, allowing us to offer cutting-edge production services across the region. Our state-of-the-art OB vehicles are readily available for the upcoming, eagerly anticipated World Cup," comments Eamonn Dowdall, Executive Director of Gravity Media.




NEWS - SUCCESS STORIES

Dutch broadcaster Omrop Fryslân becomes first in Europe to rely on LiveU's Air Control solution

Omrop Fryslân, a broadcaster based in the northern Netherlands, has become the first European broadcaster to deploy LiveU's Air Control solution. Omrop Fryslân first became a LiveU customer back in 2013 and has gradually expanded its range of LiveU-based solutions. As Martin Wijbenga, Head Programme Technician at Omrop Fryslân, said, "LiveU's IP bonding technology, and remote control capabilities, makes the company's hardware and software perfect for us." The entire supply over the years has been carried out by Heynen.

"At the start of the project – elections for city councils – we discussed the approach of combining the best of a TV and radio show. We wanted to have an attractive TV show with the speed and dynamism of radio, covering as much of the local city council elections as possible, with as many interviews as possible. To be able to do that without having a reporter at every location where there was a possible interviewee, we needed to provide a stable connection to them in a simple way. And that's where Air Control came in," says Wijbenga.

"Besides the 10 reporters in the field that connected via their LiveU bonded transmission units/app, we had one LiveU receiver especially for Air Control. A few hours before the show, we invited all the different city council spokespeople to the Air Control solution and we quickly tested the video/audio. During our live show, all the spokespeople could connect to Air Control as soon as they had preliminary results of the elections. Omrop Fryslân could talk to interviewees via a simple USB headset on a laptop and could then switch that feed to air, if desired, without having to use non-broadcast systems like Teams or Zoom," explains Wijbenga.

"We are very grateful to both LiveU and Heynen, providing us with not only the cloud solutions we need but also supplying the service required to get these up and running quickly. Air Control is easy to use, reliable and provides very good on-air quality," concludes Wijbenga.
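For readers new to the technique Wijbenga mentions: "IP bonding" splits one live stream across several cellular or other links and reassembles it at the receiver. The sketch below is our own minimal illustration of the idea in Python, not LiveU code; the link names and the simple round-robin scheduling are assumptions made for clarity, where real systems add per-link congestion feedback, forward error correction and retransmission.

```python
# Minimal sketch of cellular bonding: one encoded stream is split into
# numbered packets, scheduled across several links, and reordered at
# the receiver. Purely illustrative.

from dataclasses import dataclass

@dataclass
class Packet:
    seq: int       # sequence number used to reorder at the receiver
    link: str      # which link carried this packet
    payload: bytes

def schedule(stream: bytes, links: list[str], mtu: int = 1200) -> list[Packet]:
    """Round-robin packets over the available links (assumed equally healthy)."""
    chunks = [stream[i:i + mtu] for i in range(0, len(stream), mtu)]
    return [Packet(seq=i, link=links[i % len(links)], payload=c)
            for i, c in enumerate(chunks)]

def reassemble(received: list[Packet]) -> bytes:
    """Receiver side: packets may arrive out of order across links."""
    return b"".join(p.payload for p in sorted(received, key=lambda p: p.seq))

links = ["4g-modem-1", "4g-modem-2", "wifi"]   # hypothetical link names
packets = schedule(b"x" * 5000, links)
assert reassemble(list(reversed(packets))) == b"x" * 5000
```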




NEWS - SUCCESS STORIES

Orange Belgium relies on Viaccess-Orca’s content protection solutions for its OTT service

Orange Belgium has recently launched its OTT TV service relying on Viaccess-Orca's Secure Video Player (VO Player). The solution is deployed on TV decoders and mobile apps and assures quality of experience on every screen. The VO Player allows Orange Belgium to comply with content owners' security requirements. The VO Player is pre-integrated with an array of third-party encoders, packagers, analytics, advertising and app vendors, and CDNs, ensuring integration within the Orange Belgium OTT live TV ecosystem.

"We selected Viaccess-Orca as our strategic partner based on their expertise in OTT content protection while enabling superior viewing experiences," said Guillaume Ducellier, TV Domain Manager at Orange Belgium. "With the VO Player we can deliver an exceptional-quality TV experience to customers on TV decoders and OTT apps, as well as a slimmed-down Orange TV Lite service that includes live OTT channels and nPVR."

"Orange Belgium has been a valued customer for several years, and this latest deployment reconfirms their confidence in our solution," said Benoit Brieussel, Vice President of Product at Viaccess-Orca. "The VO Player is a critical component not only to deliver an outstanding quality of experience but also to fulfill evolving Belgian advertising regulations, including the recent ad-skipping policies. By porting the same code across devices, the VO Player ensures a unified experience."




NEWS - SUCCESS STORIES

GatesAir to celebrate 100 years of history at NAB Show 2022

The birth of a company like GatesAir began with a trip. Henry Gates went on a business trip with his son Parker to Pittsburgh in 1920. Months earlier, the city's AM radio station KDKA had opened. Young Parker was a radio enthusiast, and father and son visited the station. That was the beginning of a passion that they both turned into a business. This year, the company that was founded as Gates Radio Company turns 100 years old. GatesAir is celebrating its 100th anniversary and offering a look back at its history. GatesAir will formally recognize this historic milestone at the NAB 2022 show (April 24-27) with a series of giveaways and prizes, including crystal radio sets in recognition of Parker's early innovations.

Following the sale of its first AM radio transmitter in 1936 (to WJMS in Ironwood, Michigan), transmission gradually became the company's core focus over the ensuing decades. GatesAir, then a part of Harris Corporation, would add TV transmitters to its product lineup in 1972. The company expanded its brand of wireless content delivery with the acquisition of Intraplex in 1999, then a specialist in legacy STL systems and today a leader in Audio over IP networking. These three businesses serving over-the-air broadcasters – radio transmission, TV transmission and Intraplex networking – remain GatesAir's exclusive focus today, with new products and features across each business planned for the 2022 NAB Show. Some of them are the Maxiva IMTX-70 Intra-Mast, a modular, multi-tenant transmitter solution that can house up to eight low-power TV transmitters in one compact chassis; and Intraplex Ascent, a cloud transport solution that can move broadcast and media content at scale. The company will also unveil Ascent's benefits for moving large volumes of ATSC 3.0 TV content at this year's NAB show.

Gates Radio management, 1957.

"GatesAir has outlasted and outperformed its competition over the long haul because we have remained focused on a business that we do very well," said Joe Mack, Chief Revenue Officer, GatesAir. "That includes building the most reliable and efficient transmitters in the business, and it means moving heaven and earth when it comes to customer service and support. I think our partners and customers really value that."


ADVERTORIAL

See Next Generation Virtual Studio, AR and XR Graphics at Zero Density's NAB Booth

Are you interested in getting hands-on with the same virtual studio technologies used by broadcasters like Fox Sports, Warner Media, and The Weather Channel? How about seeing an end-to-end news workflow incorporating stunning AR graphics? Look no further. This year, Zero Density's NAB booth will give you a chance to try it all. Featuring three separate spaces, including an AR demo stage, a state-of-the-art virtual studio, and demo pods, the booth will give anyone a front-row seat to the real-time graphics technologies powering daily shows, live events and more from the world's biggest broadcasters. Booth visitors will get a chance to:

● See what it's like to control a professional virtual studio with the help of RealityHub 1.3, Zero Density's universal control UI for broadcasters that will let you manage everything from real-time graphics to virtual cameras from a web browser.

● Discover the hardware that powers AR and virtual studios with a demo of RealityEngine AMPERE, a workstation that doubles the rendering performance of real-time graphics.

● See Zero Density's Reality Engine 4.27 at work. The real-time compositor, which can key, comp and render photoreal 4K graphics in real time, has been built to take advantage of Unreal Engine features that will make virtual studio graphics more photorealistic.

● Explore an end-to-end news graphics workflow, featuring augmented reality, videowall and overlay graphics at the AR demo stage.

● Get hands-on with TRAXIS talentS, an AI-driven stereoscopic tracking system that can track your movement in 3D space, all without any markers or wearables required. TRAXIS is taking the visual fidelity of virtual studio productions to new heights.

● See how creative professionals from PopUp Media, CGLA Studios, oARo Studio, DreamWall and more take advantage of real-time graphics with exclusive presentations and virtual studio demos that showcase the highest level of photorealism and advanced keying methods. Don't forget to check the detailed schedule at zerodensity.tv.

● Ask Zero Density's technical experts all your questions about AR, virtual studios, real-time graphics, and more.

Interested in learning more about how real-time graphics can help you tell your story? Visit Zero Density’s booth at North Hall N2636. Feel free to use Zero Density’s guest passcode LV2597 for an Exhibit Pass.



NEWS - SUCCESS STORIES

Oman TV relies on Aviwest for the production and distribution of the Tour of Oman 2022

Aviwest has recently announced that the 2022 Tour of Oman cycling race was broadcast live using its PRO3 Series bonded cellular transmitters and StreamHub transceiver. The national broadcaster Oman TV deployed motorbikes, shoulder cameras, and drones with PRO3 transmitters. This equipment was distributed across more than 13 locations around the country to follow the 891-km race. In addition, the broadcaster relied on the StreamHub transceiver to distribute the footage to affiliates and other international broadcasters.

Each PRO3 Series transmitter delivered video over 4G/3G cellular networks to AVIWEST's StreamHub platform, located in Oman TV's control room. That platform then received and decoded the live feeds. StreamHub allowed the content to be freely distributed over virtually any IP network to CDNs, media servers, streaming platforms, multiple affiliates and other broadcast facilities. It supports multiple output formats and features the Safe Streams Transport (SST) protocol.

“AVIWEST solutions gave us more flexibility to cover the Tour of Oman live, allowing us to streamline our workflows and deliver greater coverage that captures different aspects of the competition,” said Tariq Al-Mawali, Satellite Broadcasting and Links Department at the Ministry of Information, Sultanate of Oman. “Our motorbikes, drones, and remote teams followed riders throughout the country’s rich and varied landscapes to capture the excitement of the race live over cellular networks.” 




NEWS - SUCCESS STORIES

Fédération Internationale de l'Automobile (FIA) establishes Riedel as main supplier of telecommunications

Riedel is now the official communications and signal distribution solutions provider for FIA motor sports events.

Riedel Communications has recently announced that the company has been named the official supplier of motor sports telecommunications for the Fédération Internationale de l'Automobile (FIA). Riedel has a two-decade relationship with the FIA, and from now on Riedel will supply hardware and software technologies to all FIA World Championship series.

Communications and signal distribution solutions provided by Riedel for Fédération Internationale de l'Automobile motor sports events include: MediorNet distributed video infrastructures, which combine signal transport, routing, processing, and conversion in a redundant real-time network; the Artist digital intercom network; the Bolero wireless intercom system; and a variety of headsets and handheld radios.

Peter Bayer, FIA secretary general for sport and FIA F1 executive director, said, "The FIA's decision to name Riedel as an official supplier of its motor sports communication is a result of a trusted relationship over two decades. It illustrates our shared commitment to use the most advanced technologies, with the safety and sustainability requirements that we have for all our championships."

"This partnership has been a long time in the making. Over the past 20 years, FIA and Riedel have been constantly inspiring one another to push the envelope of innovation even further," said Thomas Riedel, CEO and founder of Riedel Group. "We're thrilled to be officially joining the Fédération Internationale de l'Automobile's efforts to promote higher safety standards and the latest technology in the world of motor sports, making the lives of pilots, teams, officials, and the entire racing community both easier and safer."




NEWS - BUSINESS & PEOPLE

24i acquires data specialist The Filter to expand its end-to-end OTT services

24i, a company dedicated to creating end-to-end video streaming solutions, has recently acquired privately-held UK data specialist The Filter. The Filter's data science, analysis and machine learning technologies help consumers find and watch more of the video content they love. The Filter's managed service approach helps streaming providers increase consumer engagement by surfacing more content. The Filter's customers include Joyn in Germany, EPIX in the USA and BBC Studios-owned UKTV Play.

"24i's OTT and Pay TV customers know that increased consumer engagement is an essential element of their business success – it drives down churn for subscription services and increases the lifetime value per customer in ad-supported business models. The Filter's managed service is based on a 'test, learn and refine', AI-based methodology that sets it apart from other off-the-shelf recommendation engines and analytics packages. We're excited to bring this expertise in advanced, data-driven personalization and recommendations into 24i's modular offering," said 24i CEO Neale Foster.

The Filter's team of data scientists will continue to be based in Bath, UK and will work alongside 24i's existing data team to further develop the company's data offering, which already includes quality-of-service metrics, content usage and royalty reporting capabilities.

"The Filter has a history of significantly improving people's entertainment experience through using cutting-edge data science. We power everything from the 'recommended for you' rails in video streaming apps, to personalized recommendations in marketing and even entirely dynamic user interfaces that adapt to the individual consumer's viewing habits. Continuously improving personalization is enormously important to our streaming customers and our powerful insights often help to shape their strategic thinking. We are very excited to be joining the 24i family as this will help us to further develop our offering and bring this game-changing technology to a wider range of entertainment companies worldwide," said The Filter's CEO Damien Read, who now becomes 24i's SVP of Data Products.




NAB SHOW 2022

Back to Face-to-Face Business

Finally, one of the two world fairs hosting the most important news in the broadcast industry will be held in the traditional and most appropriate way. After a complex period in which global circumstances had made it impossible, NAB Show will fill the halls with international exhibitors and professional visitors at the Las Vegas Convention Center from April 23-27, 2022.

Such an event needs the extra factor that the live, in-person experience adds. Being immersed in the technology first-hand, enquiring about and seeing the innovations that the companies show at this meeting has little in common with the experience offered by the remote methods developed over recent times. The distance that still characterizes a virtual exhibition, no matter how customized it is, does not convey every specific detail that professional clients in this sector will be able to grasp when they see a product with their own eyes.

From remote to a transition to IP

The return to normality of a world fair like NAB Show is a sign of strength. The industry has adapted to the changes and is going to Las Vegas to show how it has evolved to overcome challenges. The industry is really looking forward to showing everyone the new capabilities it has developed to cope with novel, complex production methods: remote production, the transition to an IP infrastructure, the inclusion of signals from meeting and video conferencing platforms, as well as the incorporation of resolutions and formats such as Ultra High Definition, HDR, and the BT.2020 color space. Everything will be unveiled in Las Vegas, but at this magazine, we guarantee it will not remain there untold. Here you will be able to find out what is going on at NAB Show 2022.



The Industry's Strength

Through recent communications between this magazine's newsroom and the fair's organizers, we have been able to confirm that the industry is back in full operation. More than 900 companies have relied on this event to show the world their novelties. Among them, worth mentioning are the 160 companies that will attend the fair for the first time ever. This significant number also shows that successful new enterprises keep being created at a very noticeable rate.

The exhibition will be spread across three of the largest buildings in the Las Vegas Convention Center. The organization has arranged the international brands and companies attending the event in specific spaces based on the production stages in which they specialize. The Central Hall and North Hall host companies representing the content creation stage. This phase is very extensive, ranging from companies dedicated to the creation of cameras and lenses, to companies whose scope of action is content editing, as well as corporations that cater to the needs of the acquisition, transport, and image and sound organization processes.

After content is created, it must be streamed and broadcast globally. This requires connecting content creators to the world. This important part of the process is represented in the West Hall of the Las Vegas Convention Center.

But where is the business? Of course, sustaining an industry requires generating a profit that will keep it growing. The companies providing these solutions will be represented in the North Hall of this convention center.

It is the intelligent capability in technology that is setting the trend for the future of this industry. The power of computing, artificial intelligence and machine learning are technological capabilities that are enabling end users and viewers to get more and better access to the content they want to enjoy and consume. NAB Show is a reflection of the market. We have investigated this thoroughly. We invite you to discover it with us.


North Hall

LIVEU LiveU is exhibiting at booth N1019, North Hall and Central Lobby. The company will be featuring a live stage for fireside chats, panels, and workflow demonstrations. Attendees can visit the booth to get a first-hand look at the company's live video platform and the solutions suite that enables customers to apply innovation and automation to the video production workflow. The company will demonstrate its Air Control solution, a broadcast orchestration cloud solution enabling seamless control of the human elements associated with producing live shots with remote guests or talent. It will also showcase LiveU Matrix, its cloud IP video management and distribution solution, with its new Dynamic Share service enabling users to exchange live content with 3,000+ participating TV networks, stations, producers, sports organizations and online channels around the world, as part of the LiveU Global Directory.

WHEATSTONE Wheatstone exhibits at booth N2631. The company showcases the new Tekton 32 and a full line of audio elements for its WheatNet-IP audio network, an AES67-compliant AoIP network that supports SMPTE 2110. Tekton 32 is Wheatstone's latest WheatNet-IP audio networked TV console. It features a control interface for tight integration with all the major production automation systems, providing mixing and automation in one native IP audio environment. With tactile faders on the one hand and a touchscreen interface on the other, Tekton 32 is easy to navigate for the busy producer or director.

CHYRON Chyron exhibits at booth N2613. The company will feature the PRIME Live Platform, supporting PRIME CG and a host of other graphics modules, including video walls, branding, touch screen, and augmented reality. The recently released PRIME Commander will also be featured. PRIME Commander executes switcher ME control of sources, cuts, and transitions; cues and plays graphics and clips; and makes audio mixer adjustments, all with the aim of orchestrating the core production. Chyron will also present a not-yet-released platform, designed for cloud deployment, that offers users a single, unified interface for graphics, switching, and illustrated replay.

VSN VSN exhibits at booth N925. The company showcases the Metadata Rules Editor. This is its first introduction, and it will be incorporated with VSNExplorer. The module provides users with the structures of best-practice metadata management without them needing any knowledge of how to structure metadata themselves, while at the same time allowing them to fully customise their metadata arrangements. The second major addition to the core VSN range occurs within VSNCrea, VSN's Broadcast Management System, which has been designed with remote HTML5 access and full VSNExplorer integration.


The Ad pricing solutions have been expanded within the new Commercials Module to allow for the assignment of price blocks at different times of day, the selection of Ad rate types, and the direct exchange of this information with the commercial contracts on which they are based.

EDITSHARE EditShare showcases its new developments at booth N3008. The latest enhancements provide interworking, content security and availability.

NAB visitors can see project synchronization with NLEs, including Media Composer, Adobe Premiere Pro and DaVinci Resolve, and EditShare's FLOW workflow tools. The next release of the core FLOW software will include the first implementation of this integration, allowing complex projects to be readily moved between EditShare media management and the NLE environment.

Also added to FLOW is the ability to support the latest RAW format updates for RED and Blackmagic Design camera systems and to impose LUTs in real time, both on full-resolution material and on proxies. For news and sports fast-turnaround editing, FLOW can now ingest NDI contribution feeds.

EditShare is also adding new hardware and software for content security. The EFS 60NL provides 60 drive bays in 4U of rack space. In this area too, EFS Multi-Site allows users with multiple locations to leverage built-in file acceleration to synchronize project storage between EFS clusters in different facilities.

EVERTZ Evertz Microsystems will be exhibiting at NAB 2022 at booth N5907. The company showcases its Mediator-X platform and OvertureRT modular playout engine to handle Media Asset Management (MAM), Transmission Playout (TX) and Non-Linear Delivery (NLD) across thousands of channels globally. The platform can run on private or public cloud infrastructure. The system can be configured to run just one of the MAM, TX and NLD core functions, or all three combined. As an extensive playout engine, OvertureRT delivers features including logo/graphics insertion, rich SCTE metadata messaging, audio loudness correction, Nielsen watermarking, Dolby D encoding and more. Also at NAB, Evertz is introducing StreamPro: a playout and automation solution for smaller broadcasters and for over-the-top (OTT) applications. StreamPro delivers linear playout streams, preconfigured workflows, task options and a choice of redundancy models.

BRAINSTORM Brainstorm showcases its novelties at stand N2608. It will present the new features of its new Edison Pro Suite 5. The solution for virtual studios includes features designed to simplify presentations and live events. These include workflow and user interface improvements, new virtual lights and actor enhancements, as well as a new chroma key.

Aston includes features designed to improve graphics creation. Brainstorm will also demonstrate the latest version of Aston Multichannel, which provides the ability to create Advanced Stacked Layers. This feature also offers multiple-channel playout from a single workstation or from different workstations, and the capacity to composite large canvases in any resolution or aspect ratio.

WebControl, an HTML5-based application designed to control the playout mode of InfinitySet, now includes additional features such as full chroma keyer control, control of camera presets and positions, triggering of actions, and management and control of InfinitySet's playlists.


EVS At NAB, on booth N2625, EVS will demonstrate its three core solution pillars: LiveCeption, MediaCeption, and MediaInfra. Through live demonstrations of each solution set, visitors will see the ways that these products can be combined to create and manage content.

Making its first appearance is the IP-based LSM-VIA, a key component of the LiveCeption Signature solution that creates EVS' new replay system. The LSM-VIA is now the common UX for all XT-VIA® and XT-GO® servers. Demonstrations of EVS' LiveCeption solutions will also highlight the replay capabilities brought by XtraMotion. This process can be triggered from the LSM-VIA remote.

The company also offers a range of MediaInfra solutions for control and processing of all media workflows. The solutions incorporate HDR conversion and low-latency JPEG XS compression with Neuron, and provide an overarching layer of control and monitoring with Cerebrum, which comes with a new SDN license.

ROHDE & SCHWARZ AND PIXEL POWER These companies will exhibit at booth N3625. Rohde & Schwarz and Pixel Power together present an integrated workflow solution from ingest to playout that includes broadcast-wide monitoring, multiviewing and storage solutions. R&S showcases on-premise and cloud functionality for ingest, transcoding and studio playout, using public cloud with existing storage hardware, which synchronises content between the facility and the cloud. Other solutions include the CLIPSTER Mk II platform for mastering and distribution of feature films and episodic TV, and Master Control and Playout Automation workflows from the cloud from Pixel Power.



Finally, a live demonstration of 5G Broadcast will take place from the show floor and will show how the delivery of content on the move is being redefined.

MAXON Maxon is present at NAB at booth N5920. The following tools and solutions have been added to Maxon's product suite and have never before been seen in the Maxon booth. ZBrush: sculpting and painting software. Forger: a mobile app that allows users to explore 3D sculpting without the complexity of typical desktop sculpting apps. Red Giant: essential tools for editing, VFX and motion design, including Trapcode Suite, Magic Bullet Suite, Universe, VFX Suite, and PluralEyes. Finally, Redshift: a fully GPU-accelerated, biased renderer. This will be the first live showcase of the combined force of Maxon's suite of 3D animation tools and its renowned rendering solution, Redshift.

Central Hall

GRASS VALLEY Grass Valley is exhibiting at booth C2107 in the Central Hall. The company will show the industry how it sees the future of media and entertainment through IP, software and cloud-based technologies. The booth is designed to take visitors to the future production environment, following video content from contribution via Grass Valley cameras through to asset management, editing, and playout via GV AMPP (Agile Media Processing Platform). The booth will feature pods hosting key Grass Valley partners Appear, EditShare, Flowics, Haivision, and Telos Alliance. Visitors will find production cameras, including the LDX 150 and LDX 90 camera series; AMPP, the platform that allows any media company to connect the solutions they have today into cloud-based remote productions; GV AMPP production tools, including live production, asset management and playout; and IP infrastructure and processing products such as GV K-Frame and GV Orbit.

DALET Dalet is exhibiting at booth C4423 (Central Hall). Dalet will also be present at the Amazon Web Services booth W3500 (West Hall) as a partner of this service. The company will be present for the first time at this world trade fair. Dalet Flex for Teams allows access to key functions in media logistics, such as file management and multiplatform content distribution, through preconfigured and affordable workflows. This solution is now offered as Software as a Service (SaaS).

Dalet Pyramid solutions provide a natural evolution for those newsrooms using Dalet Galaxy five, adding news production capability to their work processes. Dalet Pyramid features centralized planning capabilities with a Kanban-based planning tool that allows you to visualize, manage and assign stories at the corporate level. Also included is web browser-based remote editing for quick news editing. Finally, the platform also offers digital production functionality that enables video production, planning and editing, as well as digital news content distribution.

SOLID STATE LOGIC Solid State Logic exhibits at booth C8008, Central Hall. The company will show the advancements in its System T audio platform and the benefits of using its L650 audio control console together with the Live V5.0 software.

The company will be showcasing the conjunction between the Live V5.0 software and the L650 Live console. This solution offers a set of integrated control solutions for L-Acoustics' L-ISA and Meyer Sound Spacemap GO, plus Shure ULX-D and Axient Digital Mic Control. Also, SSL will be highlighting the many advancements of its System T broadcast audio platform. They include the new TE1 and TE2 Tempest Engines featuring scalable Pay-As-You-Go licensing capabilities; flexible, multi-operator remote production workflows; NGA/immersive production capabilities, with channel and bus formats with 3D panning; a Dante AoIP implementation, featuring full routing control directly from the console with AES67 and ST 2110 integration; and integrated control of Shure Axient wireless mics and Dante-enabled RedNet MP8R mic I/O.

CALREC Calrec's booth will be placed in Central Hall, Stand C8008. At NAB 2022, Calrec will run three independent consoles from a fully redundant pair of ImPulse cores: a 48 dual-fader Apollo console, a 40-fader Artemis console and a headless console running Calrec Assist on a PC. ImPulse is a DSP engine with native SMPTE 2110 connectivity. It is compatible with the Calrec Assist web interface as well as Calrec's Apollo and Artemis consoles. It provides 3D immersive path widths and panning for audio, with height and 3D pan controls, flexible panning and built-in downmixing. For Type R there will be two novelties, including the Talent Panel, a unit that allows guests to switch between multiple sources via its integrated hi-res TFT and adjust headphone volume with a dedicated rotary control.

TSL TSL exhibits at Booth C5240. The company will showcase updates to its Power Distribution Units (PDUs). As remote means of production are implemented, technological solutions must also adapt to these new processes. TSL's enhancements respond to this need. The PDUs have been designed to handle up to 32 amps with optional auto-failover/auto supply, and the entry-level rating has been lifted from 16 to 20 amps. The design also includes an upgrade from 12 to 14 outputs, each one capable of delivering up to 10 A. The PDUs have a front-panel LCD screen for local monitoring and control. Additional settings include power-loss configuration (all-off, all-on and last-state); fast configuration via USB or the front-panel LCD; data logging (Syslog or to USB); SNMP monitoring and alarms (optional control); email and SMS alerts; and a power input fail alarm. Also included is an internal temperature sensor with an adjustable alarm.

MAGEWELL Magewell demonstrates USB Fusion at booth C8508. USB Fusion features two HDMI inputs and one USB webcam input. It lets users switch between 1080p60 HD sources or combine two inputs (picture-in-picture or side-by-side) into one output and bring the result into popular software via its USB 3.0 interface. USB Fusion offers several ways for users to compose and control their presentations. On-device buttons allow users to switch between full-screen sources or select a combined scene layout, while a browser-based web interface offers scene switching, volume adjustments, and device configuration.

CLEAR-COM Clear-Com showcases its novelties at booth C6908 in Central Hall. The Arcadia Central Station is an IP intercom platform that integrates wired and digital wireless partyline systems along with third-party Dante devices in a single rack unit. It offers a base level of 32 IP ports, which can be expanded to up to 96 IP ports. Also on the stand will be the FreeSpeak Edge Base Station, an IP base station that supports the full range of FreeSpeak digital wireless intercom solutions as well as third-party Dante devices. The Trilogy Mentor system, now with dual gigabit network ports for improved network connectivity, will also be featured. The system generates test signals for video and audio, and network timing signals for broadcast.

SENNHEISER, NEUMANN AND DEAR REALITY Sennheiser, Neumann and Dear Reality will showcase at booth C6715. The Sennheiser Group booth features an experience zone that will present a complete Dolby Atmos mixing workflow, complemented by products from Dear Reality and Neumann. Sennheiser will showcase everything from premium shotgun microphones like the MKH 8060 and MKH 416 to wireless tools like the EK 6042 camera receiver. It will contain RF wireless solutions such as the Digital 6000 wireless microphone series. For ENG professionals and creators, the company brings the AVX and the analog evolution wireless G4 camera systems, while the MKE range of microphones covers DSLR cameras and mobile phones. Neumann will show upcoming studio monitors with an optional AES67 interface. The new Neumann Miniature Clip Microphone System is also making its NAB debut.

Dear Reality will present plug-ins and application software for professional mixing studios and remote productions: dearVR PRO, dearVR SPATIAL CONNECT and dearVR MONITOR.

TEDIAL Tedial exhibits at booth C3036. The company will showcase smartWork, its Media Integration Platform.

Tedial has designed smartWork with an easy-to-use toolset that distills complex workflows into simplified processes. The platform removes time-consuming and complex configurations via a common user interface that guarantees an optimal user experience and easy access to all applications and external systems (including any legacy MAM), and features self-validation. Users are free to concentrate on creativity and can make the data-driven decisions necessary to quickly adapt to market or supply changes. Its no-code architecture will help non-technical users, and because it follows an Infrastructure as Code (IaC) approach, it can be deployed on-premises, on any cloud or in a hybrid architecture.

MARSHALL ELECTRONICS Marshall will be showing its solutions at booth C1307. The company will be highlighting its CV568/CV368 Global Shutter cameras with Genlock and CV566/CV366 Rolling Shutter cameras with Genlock. The CV568 and CV368 POV camera models offer a 1/1.8-inch Global Shutter 3.2MP sensor with 25 percent larger pixel size. The CV568 Miniature HD Camera is built into the same durable miniature-sized body as other Marshall CV503/CV506 cameras, and has rear panel protection, interchangeable M12 lenses, secure locking connections and remote adjust/match features. The Marshall CV566 and CV366 Genlock POV cameras are built around the next-generation Sony 1/2.8-inch CMOS rolling shutter image sensor featuring 2.2 megapixels and a Tri-Level (Genlock) sync feature, bringing next-generation performance to the company's POV cameras. Both deliver video performance from a micro-sized body through 3G-SDI and HDMI.

STUDIO TECHNOLOGIES Studio Technologies exhibits at booth C2625.

The company announces the Model 392 Visual Indicator Unit. The unit is compatible with the Dante protocol and requires only a Power-over-Ethernet (PoE) connection for full operation. Full-color LEDs provide the optical source for the distinctively shaped polycarbonate lens assembly. The Model 392 is housed in a compact enclosure that is intended for mounting in a U.S.-standard 2-gang electrical box. "The Model 392 can support a variety of status display applications, such as on-air light, room occupied indicator, intercom call signal display, and can even serve as an audio level 'meter'," says Gordon Kapes, President of Studio Technologies.

TELESTREAM Telestream exhibits at booth C3007. The company will show devices which empower technical teams to ensure the highest quality customer experience while providing maximum business value. The latest software feature set of the PRISM waveform monitor will be on display. PRISM has an optimal form factor for every production application, from camera shading to color grading, for both IP and SDI workflows. For IP workflows, a new platform for the Inspect 2110 probe will be shown. Attendees will also see updates to QC software (on-prem, virtual, and cloud-native), and Lightspeed Live Capture with new 12G SDI and ST 2110 interfaces. The Inspect 2110 + PRISM combination is an essential addition to the engineering toolkit used to monitor and troubleshoot in broadcast and production facilities.

VISLINK Vislink will be present at NAB at booth C7508. The booth will be shared with Mobile Viewpoint. Vislink's chief AI offerings include IQ Sports Producer, a live sports production and streaming solution. Also present will be vPilot, an AI-driven studio content production system that creates professional productions easily and affordably without a camera operator or director team. Mobile Viewpoint and Vislink combine camera systems for sideline reporting and remote production technologies. The result is a comprehensive, highly cost-effective technology platform that enables the production of sub-Tier 1 sporting events with video quality and production values equivalent to Tier 1 event coverage.

AEQ AEQ will be present at the NAB Show at booth C3205. XPEAK is an intercom system that supports up to 28 user terminals in different formats, with Internet connection as well as Bluetooth and USB connectivity. It provides analog, USB digital and AES67/Dante AoIP inputs and outputs. ATRIUM is a digital audio mixer that handles up to 1,000 channels of local or IP audio. It is built on the X_CORE engine, which allows it to be securely shared between up to 6 consoles. It can host, through one or more surfaces, up to 90 motorized and pageable faders.

X_CORE is an audio matrix, mixer, processor and distributor. It is based on a 4 RU chassis, equipped with cards that can be configured redundantly. It is adapted to work over IP standards such as Dante-AES67, RAVENNA-AES67, SMPTE ST 2110-30 and SMPTE ST 2110-31 with NMOS control. TALENT is an audio codec with Bluetooth connectivity that allows microphones and headphones to be connected for remote radio. FORUM LITE is a compact console that combines the Forum IP Split surface with the compact M_CORE engine.
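As a quick reference for readers comparing these AoIP standards, the packet arithmetic behind AES67 and ST 2110-30 can be worked out directly. The sketch below is our own illustration rather than anything from AEQ; the 1 ms AES67 default and the 125 µs ST 2110-30 Class A packet times are taken from the respective standards.

```python
# Samples-per-packet and payload-size arithmetic for linear PCM AoIP
# streams such as AES67 (default 1 ms packet time) and SMPTE ST 2110-30
# Class A (125 us packet time). Illustrative only.

def pcm_packet(rate_hz: int, packet_time_s: float, channels: int,
               bytes_per_sample: int = 3) -> tuple[int, int]:
    """Return (samples_per_channel, rtp_payload_bytes) for one packet."""
    samples = round(rate_hz * packet_time_s)         # e.g. 48000 * 0.001 = 48
    payload = samples * channels * bytes_per_sample  # L24 audio = 3 bytes/sample
    return samples, payload

print(pcm_packet(48000, 0.001, 2))      # AES67 default, stereo: (48, 288)
print(pcm_packet(48000, 0.000125, 8))   # ST 2110-30 Class A, 8ch: (6, 144)
```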



LAWO Lawo exhibits at booth C6932. The company will be highlighting V__matrix, Lawo's software-defined IP core routing, processing and multiviewing platform; VSM, the standard in IP broadcast control systems; the mc² series audio production consoles; Lawo's brand-new diamond radio/broadcast mixing console; SMART IP network monitoring and real-time telemetry solutions; the A__UHD Core ultra-high-density IP DSP engine for mc² consoles; and the Power Core RP IP audio I/O and DSP node for remote production.

ARTEL VIDEO SYSTEMS Artel Video Systems showcases, at booth C2128, its SMART openGear Platform, SMART Media Delivery Platform for DigiLink and InfinityLink, Quarra PTP Ethernet Switches, and FiberLink family of media transport products.

The SMART openGear Platform is a software-defined, four-channel, auto-sense SD-SDI/HD-SDI/3G/4K-over-IP multifunction gateway. SMART OG is the first to bring the family of JPEG compression engines, including those specified in VSF TR-01, TR-07, and TR-08, onto the openGear platform.

The SMART Media Delivery Platform is a software-defined, four-channel, auto-sense 3G/HD/SD-SDI-over-IP multifunction gateway with integrated non-blocking Layer 2/3 switching and routing capabilities. The software-enabled solution features four video ports for transport of video, audio, and ancillary data, and four GigE data ports bridged to one or two 10G interfaces.

Quarra PTP Ethernet Switches are designed to offer IEEE 1588-compliant timing and synchronization. Quarra switches have been modified to support live performance environments. FiberLink media transport products bring flexibility to any type of media workflow. The devices are compliant with SMPTE standards to ensure interoperability and reliability.
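For background on what IEEE 1588 compliance involves: a PTP client derives its clock offset from the grandmaster out of four timestamps exchanged in the sync/delay-request handshake. Below is a minimal sketch of that arithmetic in Python; it is our own illustration of the standard's formulas, not Quarra or Artel code.

```python
# IEEE 1588 (PTP) offset/delay arithmetic, illustrative only.
# t1: master sends Sync, t2: slave receives Sync,
# t3: slave sends Delay_Req, t4: master receives Delay_Req.

def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Return (offset, mean_path_delay) in the same time unit as the inputs.

    Assumes a symmetric network path, as the standard formulas do;
    path asymmetry shows up directly as offset error.
    """
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    offset = ((t2 - t1) - (t4 - t3)) / 2
    return offset, mean_path_delay

# Example: the slave clock runs 5 units ahead, one-way delay is 3 units.
offset, delay = ptp_offset_and_delay(t1=100, t2=108, t3=110, t4=108)
print(offset, delay)  # -> 5.0 3.0
```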

West Hall

AVIWEST Aviwest will exhibit at booth W5205. At the show the company introduces its StreamHub with a completely redesigned interface and enhancements. The device can be deployed as an in-house 1U server or as a cloud service.


NAB SHOW 2022

In addition, the company, recently acquired by Haivision, will showcase its new 4K/UHD bonded 5G/4G transmitter. The PRO460 supports 4K UHD and multi-camera workflows with up to four high-resolution, fully frame-synced feeds. The PRO460 also offers a high-definition video return, available for confidence monitoring or teleprompting. The solution supports full-duplex intercoms, as well as remote camera control, tally light management, and control of any IP-connected device. It houses six 5G modems and features antennas designed to ensure transmission.

The company also showcases its new RACK400 video encoder and its LiveGuest solution. The former provides transmission over any network, including cellular, satellite, IP leased line, and the public internet. The encoder enables video sync between multiple cameras for camera switching, video return, full-duplex intercom, data bridging, and tally light. The latter is a video calls solution that enables the customer to invite remote guests to join a live video call directly integrated with their production system.

ATEME Ateme exhibits at stand W3512. Its involvement in the world trade fair will combine use-case demonstrations, presentations and panel debates. The company will demonstrate low-latency deployments such as the one announced recently, 5G streaming projects such as NESTED, and the next-generation video coding codec, VVC, delivered to a screen for the first time at the NAB Show this year. The demo consists of highly efficient end-to-end low-latency streaming over 5G of hybrid multicast and unicast low-latency streams to televisions and mobile phones.

TAG VIDEO SYSTEMS TAG Video Systems exhibits at booth W3517. Visitors to NAB will be the first to get a glimpse of TAG Video Systems' new Media Control System (MCS). This is the newest layer of TAG's Realtime Media Performance (RMP) Platform. It enables clients to capture the data provided by over 400 distinct error detectors and have it aggregated and correlated across the entire media supply chain. The MCS serves as an aggregation engine capable of exposing the data collected by TAG's Multi-Channel Monitoring (MCM) to standard third-party analytics and visualization applications. It uses an open-source paradigm.

DIGITAL NIRVANA Digital Nirvana will be at booth W2500. The company showcases MetadataIQ, its SaaS-based solution that automates the generation of speech-to-text and video intelligence metadata, increasing the efficiency of production, pre-production, and live content creation services for Avid PAM/MAM users.


The solution applies ML- and AI-based content analysis to accelerate the metadata generation process, including transferring video assets from Avid and ingesting the metadata as markers along with the assets. MetadataIQ also integrates directly with Digital Nirvana's Trance platform to generate transcripts, captions, and translations in all industry-supported formats.

TRIVENI DIGITAL At NAB Show, booth W8917, Triveni Digital will demonstrate its end-to-end NextGen TV solutions for broadcasters. The company presents StreamScope XM ATSC 3.0 Monitor. It is an ATSC 3.0 professional monitoring, auditing, and logging system. The new ATSC 3.0 monitor offers seamless integration with Triveni Digital’s StreamScope XM Analyzer, StreamScope XM Dashboard, and StreamScope Enterprise, and is a must-have solution for broadcasters operating in the NextGen TV environment.


Triveni Digital also presents SkyScraper XM Datacasting System for ATSC 3.0. SkyScraper XM supports standard content distribution and private NRT distribution applications over ATSC 3.0 and ATSC 1.0, with optimized data delivery features such as Forward Error Correction (FEC), Opportunistic Data Insertion, and statistical multiplexing through hybrid broadcast and broadband delivery systems.

GLOBECAST Globecast is located at booth W10619 and will be highlighting its four-part business strategy at NAB 2022, including its Media Supply Chain offer with cloud playout. Globecast recently announced two cloud playout customers: Crown Media and Great American Country. These two services bring new levels of service flexibility and faster time to market, including the launch of any new permanent or 'pop-up' channels. Globecast will be discussing its complete Media Supply Chain approach and its toolkit of leading solutions to create bespoke services.

BLACK BOX Black Box performs its exhibition at booth W4117. The company's KVM-over-IP solutions are tailored to the requirements of modern control rooms, media production and post-production facilities, and broadcast playout environments. At NAB, Black Box will be highlighting the Black Box Emerald GE, its solution that leverages PC-over-IP Ultra technology to enable multiple users to connect simultaneously and control a virtual machine (VM) just as they would a physical system. With low bandwidth requirements, the system supports remote access through both local and wide area networks (LANs and WANs). Pairing with the Emerald KVM-over-IP platform, Emerald GE supports VM sharing while ensuring a secure, high-definition and highly responsive computing experience.

BROADPEAK Broadpeak will exhibit new content at booth W9706. The company will showcase broadpeak.io, its new software as a service (SaaS) that streamlines video streaming. Broadpeak's new API-based platform offers content providers, pay-TV operators, and OTT service providers a way to deliver streaming services to their subscribers. Broadpeak will also demonstrate peakVU.TV at the 2022 NAB Show for the first time. This service is an IPTV managed service that enables U.S. broadband, cable, telco, utilities, and wireless operators to launch video streaming services with all the features subscribers demand, including ultra-low-latency live playout, start-over TV, lookback, and nDVR, in less than 60 days.

VIACCESS-ORCA VA exhibits at booth W10203. The company showcases its TV platform (on-prem, cloud-based or as a service) featuring the VO Secure Video Player and offering advanced capabilities such as analytics, TV monitoring, and Targeted TV Advertising. A key highlight on the platform will be VO's AI-based Targeted TV Advertising solution that revolutionizes the monetization of first-party usage data. It is driven by AI analytics. VO will also showcase the Secure Player, a multiplatform media player for premium content, highlighting how it optimizes the delivery of live video content. Support for multiview allows end users to observe the same event from different camera angles and select the primary view they want to watch. Another feature is Watch Party, which enables sports fans to watch live or on-demand sports matches on PCs or mobile devices while simultaneously interacting with a viewer group through video chat.


VIRTUAL PRODUCTION

VIRTUAL PRODUCTION at ARRI STAGE LONDON

This text was produced through a conversation between ARRI's Business Development Director for Global Solutions, David Levy, and the editorial staff of this magazine. The conversation was set up as an interview where David shared his experience, knowledge, insights, and perspectives on virtual production.




How was the virtual production stage born at ARRI London? How did you develop it over time?

Before the pandemic, ARRI supported several shows using virtual production techniques and had identified the technology's potential importance. The pause in production due to the pandemic proved to be an important catalyst for the adoption of the technology. As a result, ARRI was invited to provide the technical design, and supervise the construction and installation, of a virtual production stage at Studio Babelsberg for the production company Dark Bay. In parallel, ARRI decided to convert an underutilised warehouse space at its UK site, with the ambition of supporting our partners globally wanting to explore virtual production. We also designed the stage for product development and as a resource for our R&D. As a manufacturer, it is critical that our hardware and software complement this form of content production. The stage required significant investment, which meant it couldn't just be used as a proof of concept or for product development; it also had to pay for itself. So, when planning the stage, it was essential to ensure it could function as a commercially viable business in its own right. With the support of ARRI Rental and our operating partner Creative Technology, I'm pleased to say it's going well.

What is the technology located in your studio?

The studio has an impressive technology stack, which includes an extensive inventory of hardware and software solutions from across the media and entertainment industries. From the gaming world, we use Epic Games' flagship real-time 3D creation tool, Unreal Engine, with nDisplay as our principal 3D playback system. Photon VYV is used as our 2D playback system, which comes from the live events world. Then we have the LED video walls, more commonly found in live events, broadcast, and digital signage. There are tracking systems, from our friends in motion capture and broadcast. Video synchronisation, processing and signal distribution systems, stage automation systems, IP lighting data networks, and of course, camera and lighting solutions from our portfolio of products. The list goes on.

THE STUDIO HAS AN IMPRESSIVE TECHNOLOGY STACK, WHICH INCLUDES AN EXTENSIVE INVENTORY OF HARDWARE AND SOFTWARE SOLUTIONS FROM ACROSS THE MEDIA AND ENTERTAINMENT INDUSTRIES.

Because of the complexity, and our firm belief in collaboration being key to delivering such studios, we partnered with Creative Technology to design, deliver and co-operate the stage. Through this collaboration, and leveraging of our respective expertise and experience, we have been able to deliver a state-of-the-art virtual production studio. The main ‘in-vision’ wall is 30 m x 5 m and constructed from ROE Ruby 2.3 panels, while the ceiling is 10 m x 10 m and uses ROE Black Quartz 4.6. The processor for both is the Megapixel VR Helios 8K. Then the back wall is 18 m x 4.2 m, and there are two moveable and tiltable side walls each 3 m x 4.2 m, all of which use ROE Carbon CB 5.7 panels, and ROE Evision 4K processing. As previously said, our playback system is either native nDisplay for 3D tracked projects,

As mentioned, our playback system is either native nDisplay for 3D tracked projects, or VYV's Photon for 2D playback. The 2D playback system is most commonly used for car shoots. To ensure the best possible rigging solution for the video walls, we suspend everything from a dedicated secondary steel structure. This is preferred over ground stacking, as it simplifies redeployment of panels and improves the overall longevity of the hardware. Our main 10 m x 10 m ceiling element is suspended via a fully motorized, millimetre-accurate, automated rigging system. It can be tilted in any direction up to 30 degrees as a single piece, but can also very easily be separated into four equal individual sections. This gives us incredible creative flexibility and speed over where we place light and reflections within a scene. In addition to all this, there is a dedicated IP-based lighting control network, where we've installed 50 of our latest ARRI Orbiter LED light fixtures on height-adjustable lighting trusses to form a 360-degree configuration around the inside of the volume.


The volume produces a lot of light, and that can be very important for believably embedding the performers in the virtual environment, but it's a very soft, homogenous light, and the CRI and color reproduction is not as good as professional film lights. So, we enhance the ambient stage light with the Orbiters, which have a very good spectral output and incredible color rendition. The Orbiter is an ultra-bright, tuneable, and directional LED fixture, designed with versatility in mind. This concept of versatility is something we have incorporated in all the systems that make up the stage.


And, finally, another important aspect is our camera system. We have an ALEXA Mini LF with a full set of Signature Prime lenses, and ARRI Electronic Control System tools like the Hi-5 hand unit and UMC-4 lens motor controller, which transmit near real-time camera and lens metadata directly into the game engine environments. We have found this combination really gives us the most convincing results, whilst allowing for creative freedom and simplification of the process.
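To make the lens-metadata idea concrete, here is a hedged sketch of one calculation such per-frame data can drive: matching the depth of field of the rendered background to the physical glass. The field names and the mapping are illustrative assumptions, not ARRI's actual metadata schema; the maths is the standard hyperfocal approximation.

```python
from dataclasses import dataclass

@dataclass
class LensSample:
    """One frame of lens metadata (illustrative fields, not ARRI's schema)."""
    focus_m: float    # focus distance set on the lens
    t_stop: float     # aperture
    focal_mm: float   # focal length

def depth_of_field_m(s: LensSample, coc_mm: float = 0.025) -> tuple[float, float]:
    """Near/far limits of acceptable focus via the standard hyperfocal
    approximation, so the rendered background can defocus like real glass."""
    hyperfocal_m = (s.focal_mm ** 2 / (s.t_stop * coc_mm) + s.focal_mm) / 1000
    near = hyperfocal_m * s.focus_m / (hyperfocal_m + s.focus_m)
    far = (hyperfocal_m * s.focus_m / (hyperfocal_m - s.focus_m)
           if s.focus_m < hyperfocal_m else float("inf"))
    return near, far

# A 50 mm prime at T2.8 focused at 3 m holds roughly 2.8-3.3 m in focus.
print(depth_of_field_m(LensSample(focus_m=3.0, t_stop=2.8, focal_mm=50.0)))
```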


What is your pipeline?

We work with our clients from pre-production all the way to post. Firstly, we identify whether the project is even suitable for virtual production. Realistically, this technology is not suitable for every scenario… at least not yet. We are very honest with our clients, and if the scene they want to film is not practical or cost-effective to shoot in this way, we tell them. What we love is when a particular scene can't effectively be done any other way. Or when it's too expensive or too risky. For example, we worked on a project that had the lead talent floating in outer space, staring back at earth. Traditionally, this would have been a green screen shoot, with an expensive and time-consuming post process. Shooting on green screen is also notorious for producing detached performances, and makes it not only difficult but risky for the director and DP to faithfully deliver their vision. By shooting at Uxbridge, the production worked in real time with the artist, director, DP and VFX house on delivering a cost-effective and, more importantly, creatively


accurate representation of the director's final vision. The only real post work required was the assembly of the footage and the wire removal, which is relatively easy to do.

I have to say, even we have been surprised at what can be achieved. Productions have asked us for set-ups that would be impossible on location. For example, we had a scene with a motocross bike traveling at 70 mph on a dirt track, and the DP got a shot with the camera about six inches from the artist's face. This would have been impossible to do in reality; it would have been too dangerous. But with virtual production, it was easy and achievable.

Is ARRI involved in manufacturing this kind of technology?

Yes and no. We do not manufacture video walls, and we're not here to make game engines. What we want to ensure is that our equipment "shakes hands" with third-party technology. Future protocols will be IP-based, which means it's important to look at how we adopt and integrate those


standards. Interoperability and simplification are very important to us. Our products must fit well within that ecosystem and make it easier for our mutual users to harness the creative power of the technology.

Does this technology have room for improvement?

There is a lot of room for improvement. This is the first generation. Admittedly, it's not particularly new; it's just the natural evolution of the in-camera VFX techniques we've been using for years. But yes, there are lots of opportunities for improvement.

What do you think of the SMPTE RIS On-Set Virtual Production program?

A big part of SMPTE's work is standardization. We rely on standards to help us evolve and grow. They should not hinder us, and I think that's exactly what SMPTE's RIS project is trying to address. It's about giving us the rules and the tools our industry needs, in order for us to work from a unified position. We're all part of the content creation industry. Our job is to provide creatives with the tools they need to create their art and tell their stories. It is still early days. Let's hope that we continue in the spirit in which we started: collaboration and openness.

What differentiates your studio from others?

I think the main difference is that ours is a permanent installation that offers flexibility. While others are built for a single project and then torn down, ours continues in the space in which it was built, providing a great opportunity to learn a lot more; we can continually update, optimize, and improve it. Our studio wasn't designed with a single production in mind, but to serve, effectively and creatively, as many variations as possible. Also, I think the automation systems make us unique. The fact that our ceiling is movable, with millimetre precision, and can be reconfigured in multiple different shapes and angles, makes all the difference. We have a dedicated lighting IP network, which is also important.

What are the main reasons to choose virtual production workflows?

The main reason should always be based on the creative. We do this to tell stories which would otherwise be untold. Other secondary reasons include control, speed, efficiency, risk mitigation, and cost. Imagine a scene in a car where your two lead talents have to perform eight pages of dialogue over and over again to get the required coverage. This technology allows you to do it efficiently and effectively. You can move the camera and there's little risk of missing the shot or making continuity mistakes. The same goes for magic hour. In a day there are only 20 minutes of magic hour, at best. In the LED volume you can make that brief, precious light last the whole day.

Does this technology change the job of a DoP or art director?

I don't think it's changing. I think it's expanding to be involved in more parts of the process. I think it's very important for those department heads to be part of the broader conversation about how to tell the story at every


stage of production. And this technology requires all these departments to be involved.

From ARRI's position, what virtual production projects would you highlight as the ones you are most proud of?

We are very proud to have supported the Dark Bay Virtual Production Stage. We provided on-set services, from planning and technical supervision of the build, through to rental services during the production of the Netflix series 1899. I would also like to mention how proud we are of the first virtual production studio we built for these applications, for Epic Games at its Innovation Lab in London.

What challenges have you encountered and how have you solved them?

Based on our learnings and experiences we have made multiple phased upgrades and updates in our studio, and we will continue

to do so. We expect to have announcements about further innovations soon. Automation, interoperability, simplification and standardization, cost reduction and efficiency: these are our priorities, just like the SMPTE initiative you mentioned earlier. That's what we want from our partners. The most immediate and obvious challenge for us as a technology company is simplifying and automating the complexity of these systems, whilst also developing intuitive user workflows. The next challenge, I would say, is people and education. There is not enough of either to effectively scale. This is where innovation and collaboration come in. ARRI is working with a broad cross-section of our traditional and non-traditional industries, from education to enterprise. This is not a revolution but a continuous evolution, and this is what we know from over 100 years of being part of one of the most diverse businesses in the world. 



GARDEN STUDIOS

This interview is the result of a conversation this editorial office had with Mark Pilborough-Skinner, Virtual Production Supervisor at Garden Studios. We spoke with him about the particularities of their infrastructure, the current state of the technology and its future. This company has created a virtual production facility with all the necessary technology and the essential qualified personnel to make these techniques an affordable option for any type of company.


What is virtual production for you, and what are the differences between this technique and the classic green screen?

Virtual production is the natural progression of green screen, but at the same time, they are used in different ways. This new technology consists, simply, of taking some of your post workflows and moving them onto the set. With green screen, you can do a high-quality render afterwards. And if your production, like most productions, hasn't done a lot of previs, you can decide after you've shot how it's going to look. The only downside to this is that you are locked in with your physical lighting. However, with virtual production, you can envision what it will actually look like in camera. This capability is impressive because you can get about 80% of your fill lighting from the LED volume itself, and you only have to use a couple of physical lights as your key lights or to


replicate other sources. Not only that, you can actually previs how your subject is going to fit into the scene beforehand.

It is a powerful tool; however, it comes with some caveats because the system runs in real time. Here at Garden Studios, the first thing we do is check which shots are most appropriate for virtual production, which ones are location shots and which ones we can do in one of our sound stages.

Which is your pipeline?

We've spent the last year developing and improving our system and then working and collaborating with 3D content creators, DOPs and gaffers to make the process as smooth as possible. We use Unreal Engine because it offers unparalleled real-time graphics. The way this differs from matte paintings or traditional video walls is that we've got a tracking system which essentially keeps our virtual camera and our physical camera in the same location in virtual and physical space. This allows for a shift of perspective, or parallax, in the digital content that matches the physical props and talent. We use this software for the deployment of our 3D content, and then we use a media server for video playback. We also have the possibility to introduce a hybrid approach where we can send video content to Unreal and thus play back a combination of real images and CG content on our screens. The image is sent from our render node to our Brompton image processors, which in turn map it onto our LED wall. We use the Mo-Sys StarTracker to run tracking on our camera. We also have full DMX integration, so we have the option of using a control desk to monitor both the physical and Unreal lights, and we can also link them together.
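A minimal sketch of that tracking link, for illustration: a tracker streams the physical camera's pose each frame, and the render node applies it to the virtual camera so the CG background shows the same parallax as the real move. The seven-float packet layout and the virtual_camera interface are hypothetical stand-ins; real systems such as the Mo-Sys StarTracker use defined protocols (e.g. FreeD) and the engine's own camera API.

```python
import socket
import struct

# Hypothetical packet: x, y, z (metres), pan, tilt, roll (degrees), focal (mm).
PACKET = struct.Struct("<7f")

def run_tracking_bridge(virtual_camera, host: str = "0.0.0.0", port: int = 8001) -> None:
    """Apply each incoming pose sample to the virtual camera so the CG
    background shows the same parallax as the physical camera move."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _ = sock.recvfrom(PACKET.size)
        x, y, z, pan, tilt, roll, focal = PACKET.unpack(data)
        virtual_camera.set_location(x, y, z)         # assumed engine API
        virtual_camera.set_rotation(pan, tilt, roll)
        virtual_camera.set_focal_length(focal)
```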



Is this technology beneficial for the teams involved in production?

Creatively it is great because usually you'll have your director, your art department and your VFX supervisor, who probably won't talk to each other until the day of the shoot or, sometimes, until weeks after it's over. Virtual production, on the other hand, forces all the departments that would normally work independently to talk to each other before shooting.


Your VFX supervisors talk to your art department and know that your physical assets and digital assets are going to match up. Your DOP will already be thinking about the lighting configuration and talking to the director. It means you have to do a little more work before you get to the set, but I think the creative possibilities it offers are much greater than with the traditional way.

Has virtual production changed the way teams work? Do they need in-depth prior knowledge before being exposed to this system?

I think a lot of people are apprehensive about using these techniques because it is really new technology.


However, cinematographers, for example, as soon as they do a shoot with it and you just give them a few little rules – one is not to focus completely on the screen, because then you will get an interference pattern – they become familiar with the technology. To be honest with you, none of the techniques they have to learn are new. Same with gaffers. At the beginning they used to say: "No, I need to be controlling all the lighting of the setup," but when they understand that we can control the light of the LED volume at their command, or even offer them an iPad or a traditional desk to do it according to their needs, everything changes. The same goes for the art department. I think it's probably the department whose workflow improves the most, because there is prior communication. You have to make sure that the physical props match the digital props. And you can do that beforehand. It also helps a lot that, if you wanted to change the


colour of a bookshelf in the background, you no longer have to hurry to find another bookshelf or repaint it quickly; you can just adjust it on the screen.

What are the most appropriate situations to shoot in virtual production?

Sequences involving locations that are impossible, involve many risks or are very expensive. Car sequences, for example, are amazing to do on a virtual production screen. It is true that you could do them on a green screen, but the car would be full of green reflections. With this method you get reflections all over the car from the surrounding LED screens. Of course, it is not the right technology for all scenarios. If you want to do a big scene with a wide establishing shot of nature up a mountain, go shoot it on location, then bring us the video plates to do the close-ups. Or if you have a big stunt scene in your movie, where there's choreography involving a lot of actors and then you need a lot of CG elements



added, it becomes more difficult to do on a virtual production set.

What is the importance of colour in virtual production?

It is really important. Brands need to make sure that their colors come across on camera exactly as they should. To achieve this, we use our displays in HDR mode, which gives us much more information. We have analyzed all of our LED displays and our monitors to get a true reading of the colors they can achieve. We then use the OpenColorIO

plug-in, which transforms the Unreal color space into the color space of our LED volume. We have used Pomfort Livegrade as one of our color grading systems for the screen. With this solution you can grade the screen non-destructively while the content is being produced. You can do day, night and golden hour grades of the scene and switch between them at the touch of a button. A lot of times when people do a virtual production,


they do it by eye and make it look good in camera, but what we've found is that if you have a very solid color workflow that gets everything physically accurate from the beginning, you can start doing it experimentally and save yourself a lot of work. It also helps, obviously, in the post-production processes.
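As a concrete illustration of such a colour workflow, the snippet below uses the OpenColorIO Python bindings, which Unreal's OCIO plug-in builds on, to convert a pixel from a working space into an LED wall's measured space. The config path and colour-space names are hypothetical stand-ins, not Garden Studios' actual configuration.

```python
import PyOpenColorIO as OCIO

# Load the studio's OCIO config (path and colour-space names are stand-ins).
config = OCIO.Config.CreateFromFile("studio_config.ocio")

# Processor converting the engine's working space to the measured LED space.
processor = config.getProcessor("ACEScg", "LED Wall")
cpu = processor.getDefaultCPUProcessor()

# Convert one 18% grey pixel; in production this transform runs per frame.
print(cpu.applyRGB([0.18, 0.18, 0.18]))
```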


Do you have a graphic design department?

At the beginning, no. Back then, we had to be a turnkey solution for projects. Our rate card included the LED volume, Unreal technicians and UE4 scene optimisation. What we used to do was have the client or a third party create the 3D content; we would modify it and make sure it worked properly on our infrastructure. Having said that, we've realized that if we provide content creation from start to finish – if we create the content, tweak it and throw it on stage – we're accountable, and we're confident that it's going to look good and work well. So we have now started to build a small content team, and we have offered our talented team to several community projects.

Which one would you highlight?


A few months ago we did a virtual production event where we partnered with Epic Games, The Mill and Quite Brilliant. We did an hour and a half live event on the VP stage, where The Mill delivered several scenes for us to use. We also leveraged UE4 Marketplace content to shoot a live commercial in an hour. What happens is that a lot of people say, "Virtual production isn't ready for live events because it's new and things can go wrong." At this event, we did a presentation, we switched live to scenes from The Mill,



we created scenes that had lighting transitions, etc. Then we went and shot six scenes in a row without having to reload any of them, streaming them in and out on the fly. It was fantastic, because in the space of an hour we shot a fake commercial based on audience suggestions. We had no technical problems and delivered a great event that showed the promise of VP for advertising as well as live events.

I haven't asked you about the characteristics of your LED volumes. What makes them special compared to other studios?

We've got a 12-metre by 4-metre wall, which has a curve. The reason we didn't build a 20-metre

by 8-metre volume is that our size of stage is more democratized. Ours is still accessible to ad agencies, independent filmmakers and smaller productions while still being suitable for film and TV. We also have the capability to extend our LED volume by 6 m if requested. We are going to be expanding our virtual production offering this year, as we've got a lot of demand for bigger stages and wonderful projects. However, what we want to do is to keep it that way. I think it's surprising, because normally, tools like this stay in the film world for 10 years until they reach all the other industries. The Mandalorian is now three

and a half years old and that was the first example. Now, brands can use it almost immediately after it has been implemented in the film industry.

What is Garden Studios' experience in virtual production?

I think part of Garden Studios' knowledge and experience comes from the fact that we've done everything from music videos and movies to commercials, so we can look at scripts and storyboards and advise people to use virtual production when it makes sense, and to use traditional shooting methods when that is the better option. The interesting thing is that this technology is very recent. We have only been developing its possibilities for two years. And people who come to us arrive with very different ideas depending on the world they come from. I come from a game programming background. We have people with experience in traditional film, VFX and even live events.


This mix pushes our technological capabilities forward and we are able to deliver better productions at Garden Studios, but we also share everything we find with other VP stages because, at the end of the day, the UK is an incredible centre of film and television production. We want to make sure it stays that way as these new technologies develop.

When did you implement this technology in your studio?

We did some initial tests around November 2020. We then moved on to the Garden Studios campus in January of 2021. We spent two or three months building the stage, setting up our workflows, training staff and shooting our internal test projects. We did a couple of music videos. We reached out to DOPs to come in and experiment. From March 2021, we were fully open and bookable for clients. In the beginning we were working on projects every week or every two weeks. Now, we are booked


on an ongoing basis, doing multiple shoots a week or longer film and TV projects. The pandemic definitely accelerated virtual production. But I think, even without the pandemic, it would have been adopted very quickly because it's a fantastic tool.

Do you think this technology will be quickly implemented and assimilated by the industry?

I think so. If you look at the studio map from January to March a year ago, we were one of the only permanent

virtual production studios in the UK. Now, there are dozens of virtual production studios appearing all over the UK, from large studios building virtual production volumes to modest mid-sized studios building small sets. I think the technology is only going to improve. It's at a very early stage now, but Unreal is investing a lot to make it more and more accessible. The limit right now is just image quality, but with the advent of Unreal Engine 5 everything will change. You will be able to leverage higher quality



meshes using Nanite and fantastic lighting from Lumen. I believe we're going to see the gap between offline and real-time rendered graphics slowly close in the coming years. We are not far away and, as soon as we can have high-quality graphics in real time, virtual production will become an indispensable tool.

What are your next steps regarding this technology?

One of the main values of our company, as we have already mentioned, is development and research. Therefore, it is very important for us to encourage technological innovation. We are developing several ideas right now that we believe will help advance this technology. For example, we are working on a project to extract the Unreal scene layout and project it onto our studio floor via laser projection. It’s a simple

idea that could really help the teams get a feel for the layout of props and scenery and thus speed up the process for everyone. We are also working with photogrammetry techniques to scan and digitize as many props as we can. We believe that, in addition to going to a prop house for physical objects, it would be invaluable to have libraries of 3D objects. Finally, one of the things we are investigating is video playback. At the moment, there is no media server or playback option that does everything we want. This is very common because, right now, a lot of the technology used in virtual production comes from other sectors. However, it is also true that we are starting to see LED walls specially designed for these techniques, and specific tracking systems for VP. We are talking to existing media server manufacturers to see if we can develop something that meets all the requirements we have identified. 


SMPTE RIS On-Set Virtual Production


SMPTE is a global organization that seeks to make all the technological solutions involved in the content creation industry more accessible to their users. The goal is clear and, so far, so are the methods to achieve it: the creation of standards. But the industry is changing and moving faster than ever. Virtual production and the technology that has clustered around it is proof of this. It comes from worlds outside the industry as we know it and, moreover, it is in a rapid and constant process of evolution. In such a situation, standards would slow down evolution. So what SMPTE is determined to do is create a toolbox that facilitates the use of these technologies. We spoke to Kari Grubin, Project Leader of SMPTE's Rapid Industry Solution (RIS) for On-Set Virtual Production, to find out how they do it.


Kari Grubin, Project Leader at Rapid Industry Solution (RIS) On-Set Virtual Production

What is the origin of the Rapid Industry Solution (RIS) for On-Set Virtual Production, and why is it important?

With the idea of getting feedback from the general community, the SMPTE executive board came up with this program last year, in 2021. The objective was to find out how SMPTE could be of better service to the community at large. The SMPTE board looked at several different potential areas as topics. At that time, virtual production had exploded with the pandemic and hadn't really been discussed. They decided to trust me to focus on that topic.


The first thing we did was to interview over 30 different participants across the whole ecosystem. I went and found as many representatives of broadcast, education, and creative storytelling as I could. Globally, I spoke to universities, broadcasters, streamers and motion picture studios, and to professional organizations like the EBU in Europe and others around the world. I also reached out to a lot of the companies that maybe hadn't really been involved in the SMPTE universe before, but are really critical to this pipeline. So we got feedback from game engine companies, the compute and camera tracking companies, specific manufacturers, and LED wall manufacturers. It was very important to get a very broad picture. And we asked them: what is difficult in virtual production right now? We also asked them where they need help and, overall, what is SMPTE doing right, and what does SMPTE need to look at to actually be of service to what you need? We got really good feedback and

we started to develop our work.

Where did you find the most challenges at the beginning?

We encountered challenges on two fronts: communication between people who come from very different worlds, and interoperability. Around the world, and during our research, we have



encountered the same issue: there are not enough well-educated and knowledgeable professionals to perform these tasks. Many of the end users and developers were concerned that they had to go to universities to look for specific profiles, and many times the educators did not know what they were talking about. This education is necessary

for communication to be established. However, we play a crucial role in the communication between traditionally distant agents. It is normal that video game developers do not share the same language as people who work in film. Finding that common ground was also very complex.

On the other hand, interoperability issues, as usual, do not only pertain to virtual production. But especially in the world of content creation, many innovative technologies are emerging, and the tools that allow us to integrate them into other systems emerge later. It has happened with everything: with high dynamic range, with standard definition and high definition, with the transition from 2K to UHD to 4K, and so on. But in this case the process is software-based, and that is different, because in the past things didn't evolve so fast. Because software tends to evolve faster, the tools needed to facilitate interoperability are needed sooner. That interoperability comes from data. It has to do with all those systems that have to interact and work together. For example, how do you get enough computing power to achieve a refresh rate that avoids problems on the LED wall such as flicker or jitter? For a much more photorealistic look, you will also need a much smaller pixel pitch. However, if you are doing a close-up and the background is out of focus, you will not need that pixel density.
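A worked example of that pixel-pitch trade-off: a common back-of-envelope rule treats the eye's (or the taking lens's) resolving power as roughly one arcminute, which gives the distance beyond which individual pixels blend into a continuous image. The acuity figure is a textbook approximation, not an SMPTE recommendation.

```python
import math

def min_viewing_distance_m(pitch_mm: float, acuity_arcmin: float = 1.0) -> float:
    """Distance beyond which adjacent pixels subtend less than the assumed
    angular resolution and the wall reads as a continuous image."""
    theta_rad = math.radians(acuity_arcmin / 60.0)
    return (pitch_mm / 1000.0) / math.tan(theta_rad)

for pitch in (1.5, 2.3, 4.6):
    print(f"{pitch} mm pitch blends beyond ~{min_viewing_distance_m(pitch):.1f} m")
# 1.5 mm -> ~5.2 m, 2.3 mm -> ~7.9 m, 4.6 mm -> ~15.8 m: the tighter the
# pitch, the closer the camera can work before individual pixels show.
```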


Making all these pieces fit together while preserving the manufacturers' actual technical roadmaps was really important.

In addition to facilitating communication and supporting the development of interoperability, how has SMPTE positioned itself in this process?


The request was for SMPTE to be more proactive; for it to get involved and ask how it can help, but at the same time not redo the work that other groups are doing. We need SMPTE to really listen to what is needed here and provide the glue and support to bring things together. In short, we wanted to develop a different approach where we could be counted on whenever a problem is encountered. This is our effort: to make SMPTE, through action, more involved in the evolution of this industry. No one doubts that SMPTE is of service, but this is our response to the request the industry is making.

What difficulties have you encountered in bringing together so many companies from such distant worlds?

At first, there was some hesitation. The biggest reaction I got from manufacturers and companies was concern that we were telling them they could no longer own their own systems. We can understand that, because content creators have always asked, especially if they have enough power, for everything to be open source, easy and interoperable. In a perfect world you could do that, but technical innovation happens because these companies take on research and development, and they look to generate returns from it.



The position SMPTE has taken is to reassure manufacturers that they do not have to stop being proprietary. We want people to understand how users are going to use their technology. We want to create a hub, a means of exchange. The goal is to give users the knowledge that, if they use a certain system from a certain manufacturer, the process will be easier because interoperability has been worked out. We're going through a process like this right now. We're developing a camera interoperability program where the conversation includes lens manufacturers and camera manufacturers; MovieLabs is there, the American Society of Cinematographers is there, and so on. The goal is to put ownership aside and work on sharing the data that needs to be maintained from image capture to the end.

Some of the people we have asked about this have told us that some of the problems had to do directly with this ownership issue. How can you

intervene in something like this without being detrimental to the interests of the companies?

Let's use an example to illustrate this. Imagine you are an author and you create a work of art. That work of art is your property because you created it. If you create an NFT, for example, that has the same quality of originality as that work of art and can be considered as such, some of the software you used to create it is intrinsically linked to that NFT as well. Let's say you have created an NFT with a game engine. And let's say someone buys the NFT you created to use it in another ecosystem – in a different game engine than the one it was created with. That's where those interoperability problems linked to ownership occur. If you want to help, you can say which software you created it with, but it's no longer up to you; it's up to the program itself. And its makers may not want, or be able, to give much information. This is what happens with many of the

technologies involved in virtual production. Our role cannot be to step in and develop standards. A lot of what we have been told by people in this industry has been not to do that, because it would choke innovation. And they are right. We have to find the middle ground to get everyone on the same page without constraining them. We all know that robust and rigorous standards are necessary. But what we want to do is help the industry in another way. We can provide information so that people who use virtual production can do so in a more efficient and more accessible way. Not everything needs standards; technology can evolve so fast that it would be counterproductive to set them. I don't mean that we won't do it at some point – standards will always be necessary – but in this case, the most necessary thing is to provide a toolbox. That is why RIS was born.

So, what are the objectives of this program and the means to achieve them?


The main objective is to create space for collaboration in the industry. We also aim to make information available to anyone who needs it: young people who want to be trained to access this technology, and also existing professionals who need to understand and use it. To achieve this education-related goal, we are developing a grant program that links manufacturers and suppliers with educational institutions to lay the groundwork for specialized education in virtual production. As for facilitating access to information, last October we launched the creation of a large wall chart whose purpose is to identify each of the roles and technologies involved in virtual production. It is a flexible document that will be updated over time. It can be consulted not only to see which roles are needed, but also to learn the set of skills and knowledge required to perform specific tasks. The goal is to make it available through an interactive platform that is accessible to all.


We are now in the process of gathering information and cataloging it to facilitate profiling. This will help many creatives and technicians know where to start and avoid mistakes that inflate budgets. How do I choose the right LED volume? What difference does it make, and what should I use a higher or lower pixel pitch for? What is the minimum I need? What is overkill? This information will facilitate decision-making, help people meet budgets and avoid mistakes.

After this development, what is the next objective of the program?

The truth is that we have planned a three-year program, and we want to complete the interactive wall chart we mentioned before the end of this year. At the end of the next six months we will also be able to confirm the conditions on which our educational program will be based. On the interoperability side, we are very focused now on developing and clarifying the level of metadata we

were talking about earlier. We've started from the camera perspective, from capture all the way through. On this side, we are also reviewing the documents developed by the SMPTE community to see if we can apply already-developed standards to these processes – perhaps in a minimal way, but one that can help start developing guidance. We also have an SMPTE subgroup that spreads the word about the education and interoperability efforts we are working on at trade fairs.

How can the industry help?

Everyone who is interested in this area can help, whether just by being part of it, developing it or supporting it financially. But the truth is that we have to do it little by little, because there is a lot to cover. As an example, every day we talk to more manufacturers of LED walls and, at the moment, we are not talking specifically about LEDs; we engage in conversations about the



importance of metadata and how it works when it comes to your wall. It is important to know how it can affect the frame rate of what has been captured when it is played back on an LED display. That's why we go back to the beginning of the process, because that's necessary to make the end look good. Today we are in the middle of getting groups and companies that are not part of this ecosystem to start the internal conversations that will allow them to develop research so that, when their turn comes, they will be ready. In fact, it's fascinating to see how all these technology experts outside the content creation industry are already preparing for the future. This is necessary because at any moment a creative will come along and say, "Hey, I created this on TikTok, and I want to put it up on a giant screen. But I also want to create this interactive world where I'm going to make something that's going to be on YouTube, but it's also going to be available for

Oculus. And with all that I want to create an NFT." Everything is going to be just one thing, all together. And the whole industry has to be prepared.

Is this technology at an early stage? How will it grow?

I will respond in two ways. The first is that virtual production is nothing new. We've been doing it since silent films. It used to be a rotating backdrop behind a car, a painted canvas. Then it became matte paintings, and then rear-screen projection. Then, granting more flexibility, it became green screen. And, after all that, this is what we're talking about now. Nevertheless, virtual production will not substitute green screens, because not everything is appropriate for this technology. And, considering how it may evolve, I would say that the technology will reach a degree of refinement that will only come through a learning process. The people who manipulate and develop it are the people who will learn to get the most out of it.

On the other hand, this technology will change the paradigm. Creators will come to understand that nothing is destined for a single use anymore. In a given location you can have a LIDAR that scans everything and stores all the data in case, at some point, a sequel or an immersive virtual reality experience, for example, is developed. In its growth, the sky is the limit, really. The possibilities offered by this technology are amazing. We can have the Colosseum to shoot a scene without having to travel to Rome. We can recreate a romantic scene at sunset as many times as we want, because we have an element that will put you in that natural moment without any time limit. This also implies that everything is going to change from an environmental perspective. We are not going to destroy natural environments, we are not going to move large crews, and we are not going to create film sets and then destroy them. 



OPINION

The Virtual Production Boom:

How virtual production and decentralized workflows are enhancing the viewer experience

By TJ Akhtar, Vice President of Product Management, Vizrt

Before 2020, remote workflows and fully virtual productions were nowhere close to being the status quo for broadcast teams – it was the direction the broadcast industry was moving toward, but it was not commonplace. Entire production teams would need to be on-prem to produce a full show. Camera, audio, and lighting teams, producers, and so on would operate the broadcast on-site. The last two years have ushered in a new generation of powerful, decentralized workflows and virtual sets that leverage the advantages of hybrid and cloud-based solutions. With a new roadmap for virtual broadcasts, production tools have become


faster, easier, and more democratized than ever before. The decentralization of workflows and democratization of virtual production has advanced significantly with remote working. On-air talent, technical staff, and all the production equipment can now be located nearly anywhere in the world. "Teleportation" interviews and discussions are now possible – two people from opposite sides of the world can take part in a live conversation without ever needing to get on a plane. Software-defined and cloud-based production tools enable broadcast professionals to reduce the cost of ownership of expensive hardware, spacious production rooms,

and unwieldy cabling – and opt instead to deploy highly flexible, scalable remote workflows.



So, how can we provide broadcasters with the tools needed to claim these benefits of virtual production? Software-defined solutions give broadcast professionals and production teams access to a much larger pool of talent across a diverse geography. Production studios can tap into designers, onscreen

talent, and producers in different facilities, cities, and countries. For broadcasters and production teams, remote production enhances creativity in sourcing new capabilities and better utilizing existing resources. And productivity doesn't drop when we enable teams to work from home; trends show that productivity increases when we allow teams to work remotely.

Decreasing people-on-the-ground and recognizing that broadcast teams can produce entire shows from a remote location enables crews to create entire prime-time productions with only the camera, audio, and lighting operators on site. Workflows today can be remotely automated and accessed from home. A production team from San Francisco can produce a


show in Barcelona with only three people on-site – a concept that would have been a logistical nightmare just years ago. Enhancements in rendering technologies for virtual sets significantly increase the demand for more capable and photorealistic graphics design tools. Creative teams with these tools in hand are limited only by their imaginations. With a more realistic production, the likelihood of engaged audience viewership heightens. Imagine for a moment a broadcast with a window in the background. If the window were real, you would see the woodgrain on the windowsill, the curtains drifting in the wind, maybe slight glare or reflections from the glass; perhaps not the most vital element of the story, but one that evokes emotion from the audience and paints a fuller picture within a story. A virtual set needs to rely on the Physically Based Rendering (PBR) pipeline derived from the gaming


industry to deliver photorealistic graphics to the audience even though the onscreen talent is merely standing in front of a green screen. The same woodgrain, curtain movement, and glass reflections would be visible only to the production teams and viewers at home. The hyper-realistic visual elements achievable today elevate virtual productions to become more engaging and nearly indistinguishable from reality for the average viewer.
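For readers unfamiliar with what "physically based" means in practice, the sketch below evaluates the specular term that most PBR pipelines share: a Cook-Torrance model with GGX distribution, Schlick Fresnel and Smith shadowing. It is a generic, single-channel illustration of the technique, not Vizrt's renderer.

```python
import math

def ggx_specular(n_dot_l: float, n_dot_v: float, n_dot_h: float,
                 v_dot_h: float, roughness: float, f0: float) -> float:
    """Cook-Torrance specular term: GGX distribution, Schlick Fresnel,
    Smith shadowing. Single channel; all dot products assumed > 0."""
    a2 = (roughness * roughness) ** 2
    # D: how many microfacets are aligned with the half vector.
    d = a2 / (math.pi * ((n_dot_h * n_dot_h) * (a2 - 1.0) + 1.0) ** 2)
    # F: reflectance rises toward grazing angles (Schlick approximation).
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    # G: self-shadowing of the microfacet surface.
    k = (roughness + 1.0) ** 2 / 8.0
    g = (n_dot_l / (n_dot_l * (1 - k) + k)) * (n_dot_v / (n_dot_v * (1 - k) + k))
    return d * f * g / (4.0 * n_dot_l * n_dot_v)

# Glancing highlight on a fairly smooth dielectric (e.g. window glass):
print(ggx_specular(0.3, 0.3, 0.95, 0.5, roughness=0.2, f0=0.04))
```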

So, why now? If the last two years had been any different, remote workflows would've taken much longer to materialize. The conversations were there – they were happening all around, especially regarding the cloud – but the last two years expedited the adoption of the remote workflow. With brands eager to debut new technology and customers keen to try new remote workflows, the adoption of virtual production exploded. However, the need for consistent, fast, and reliable workflows never wavered. Cloud-based asset sharing and management offer considerable advantages in this area, especially within a decentralized workflow. Production needs a unified appearance and visual story, which can be challenging when producers live and work around the globe. Cloud-based asset management keeps production graphics and content in line with the brand or network image.

Get prepared for the decentralized workflow

For broadcast professionals and production teams looking to create everything from scratch, there are ready-to-use tools that can help tell a data-driven story. The teams who don't have vast resources may need out-of-the-box tools to get the production up and running by adding simple tweaks to preexisting scene designs, adding logos and brand imagery, and then going live. Flexible and scalable cloud-based production tools further democratize virtual productions for all organizations to advance their business goals.

In the past, broadcasters had good reason to resist moving anything from their on-premises systems. There is too much opportunity today to shy away from virtual production. Broadcasters are now more willing to try non-traditional workflows designed around different infrastructures, like the cloud, because of the countless benefits. Quality talent, flexibility and scalability, and consistent asset management allow broadcasters to adopt virtual productions and decentralized workflows with the future in mind. 



OPINION

5G – from utopia to reality and beyond

By Daniel Pisarski, VP Engineering, Americas, LiveU

I guess most of you reading this will remember the talk about 4G LTE in the build-up to the initial rollouts, including, of course, the many trials that took place. Some of that talk was around the idea that the arrival of 4G LTE would reduce the need for the then-nascent IP bonding space, as there would simply be so much bandwidth that bonding would no longer be needed by the streaming and broadcast space. We all know the reality of how that worked out. The ramp-up towards the launch of 5G has also made a lot of promises: gigabit bandwidth out of the air, wired-network-like latency, millions of devices connected in a single cell, the ability to guarantee bandwidth, and even longer battery life for devices! But we aren't quite living in


that world at the moment. This is a point with which our UK Country Manager Malcolm Harland concurs, saying, “I remember, as many will, when 4G was touted as the utopia. Of course, it was for a while, but then that settled down to the reality, a reality of which IP bonding has allowed customers to take full advantage. And now the reality is, as it was with the fourth generation, 5G isn’t utopia, it’s evolution. I think what 5G undoubtedly brings is greater bandwidth and significant benefits around latency. It brings more traffic capabilities, which increases remote production possibilities, which means things like remote camera control, autocue – all the things that can benefit from low latency. And then there’s the longer-term possibilities.”

5G has certainly moved the bar already. 4G LTE, or “Long Term Evolution”, has been with us for a decade. With the technology around for so long, it’s hard to remember the earliest days of LTE, but needless to say the performance of LTE has greatly evolved in that decade. 5G is in a very similar state – we are still in the “earliest days” of 5G networks, where sometimes the network cores are still run by LTE technology, or even sometimes uplink is provided by LTE while downlink is provided by 5G. In addition, some 5G options such as Uplink MIMO are not yet deployed by any network. The results



of this nascent stage are 5G networks that offer an incremental improvement over LTE, but not yet the “full promise” of 5G. Harland adds, “I think the market accepts that this is now an evolutionary situation; that’s the reality. Having said that, we are already beginning to see great benefits when it comes to video contribution using IP bonding.”

As examples, and as mentioned above, lower latency than LTE – and latency that's more predictable – is a clear advantage. This, combined with the ever-improving rollout of 5G, gives it a very bright future over the next several years.

So, does 5G technology change the role of bonding in video contribution? The answer is both yes and no. In the earliest days of cellular bonding, bonding was an absolute must to achieve the appropriate uplink bandwidth – a single channel of cellular was not capable of reaching the uplink bandwidth required. However, ever since LTE technology, in "ideal circumstances" a single cellular connection can offer the uplink bandwidth required. This


certainly may be true of 5G as well. The key role of bonding comes in non-ideal circumstances, providing otherwise unobtainable resiliency and redundancy, and this remains true in 5G. In fact, 5G adds some exciting combinations to bonding, such as bonding LTE to 5G, 5G mmWave to 5G Sub-6, and combinations of all of the above. This gives a contribution stream access to a full stack of cellular technology, with simultaneous use of all the paths available today. Where the earliest (now more than 15 years old!) examples of cellular bonding required many modems and links, and left room for only some of those to fail, modern bonded cellular transmission can use fewer total modems and links and still deliver great video should a large portion of those fail, be unavailable, be out of coverage, or have any issue whatsoever. In addition to the benefits of bonding, LiveU's LRT™ (LiveU Reliable Transport) protocol still plays a key role on a single 5G or bonded uplink, through its many resiliency features. Verifications and resends, forward error correction, and packet reordering are still components of video contribution and part of LRT.
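As a toy illustration of one of those resiliency ideas, the sketch below implements XOR-parity forward error correction: one parity packet per block lets the receiver rebuild a single lost packet without waiting for a resend. It is a generic teaching example, not LiveU's proprietary LRT protocol.

```python
from functools import reduce

def make_parity(block: list) -> bytes:
    """XOR all equal-length packets in a block into one parity packet."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), block)

def recover(received: list, parity: bytes) -> list:
    """Rebuild a block in which at most one packet was lost (None)."""
    lost = [i for i, pkt in enumerate(received) if pkt is None]
    if not lost:
        return received
    survivors = [pkt for pkt in received if pkt is not None]
    rebuilt = make_parity(survivors + [parity])  # XOR cancels the survivors
    return [rebuilt if i == lost[0] else received[i] for i in range(len(received))]

block = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(block)
assert recover([b"pkt0", None, b"pkt2", b"pkt3"], parity) == block
```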


Harland adds, "If we look at 4K, doing that over even eight 4G connections – four cameras over eight connections – is quite challenging at the quality level that people want guaranteed for really complicated content. With 5G, that steps things up by a considerable factor. Using our LU800 multi-cam solution delivering 70 Mbps, or even 150 Mbps in the future, over 5G, reliably and with low latency, you can hit unquestionable broadcast grade for multiple HD feeds or 4K for even the most complex content." What we've seen through some of the 5G trials in which we've been involved – not to mention our ongoing work at an EU standards level – is the potential of private 5G networks in combination with IP bonding technology. These networks are ideal in a sports stadium or other fixed locations – temporarily fixed, too – where a range of IP bonding technologies can take advantage, including

bonding via smartphones. This is a point reiterated by Deutsche Telekom at the recent Mobile World Congress, following on from trials in which we've been involved. We will continue to see technology development that takes advantage of these private networks. As we all know, there's yet to be a network designed that hasn't very quickly been filled to capacity and beyond. The launch of both 5G networks and the now-associated super-high-bandwidth millimeter wave networks in many locations is undoubtedly a terrific opportunity for live video contribution and production over cellular networks. This is a huge step forward for cellular networks, including features such as "deterministic" latency and software-defined networks. 5G is beginning to provide real benefits across the professional video space in conjunction with properly developed IP bonding technologies. And, of course, there's way more to come. For an additional look at 5G and LiveU: https://www.liveu.tv/resources/blog/next-generation5g-ip-bonding-and-the-newworld-of-production



OPINION

Specificity versus flexibility: are they really mutually exclusive?

By Simen K. Frostad, Chairman, Bridge Technologies

Why IP can drive flexibility in broadcast network structures, operating on an embedded, appliance or software basis.

It might be a bit of a stereotype, but if there's one thing the broadcast industry loves, it's a gadget. From cameramen with bulging kit bags to studios with a bright flashing button for everything, a sleek new piece of equipment excites engineers and creatives alike. As such, whilst in the consumer sector there's a constant drive to simplify, streamline and reduce things down to one device, one cable, one connection, the broadcast industry


remains surprisingly entrenched in its mentality of a different piece of equipment for every job. The reasoning is often that single pieces of equipment are thought to deliver specificity, precision and depth in their purpose. After all, they’ve been built for a single purpose, so it surely follows they’d do it well? But that’s a wisdom that needs to be questioned. Not least because server racks are beginning to groan under the weight of additional equipment and the functions they bring. Which is no good in an industry that increasingly needs to become more agile, flexible and dynamic – both in terms of production and delivery. Whether it’s setting up remote broadcast on the side of

a mountain to capture winter sports or handling the delivery of streamed content to a global audience, broadcasters



need to find ways to do this in smaller spaces, with smaller (and often remotely dispersed) crews and with lower reliance on physical proximity. Fortunately, IP allows this. One of the most significant ways it facilitates this is in relation to network topology. There were once limits as to where (and how) components could be placed within the structure,

and these limitations added expense and reduced flexibility. Moreover, with a focus on ‘one object, one job’, networks were inevitably crowded and cumbersome. Now though, technology allows us to deliver vital functions on a flexible basis; as an appliance, an embedded blade, or as software, depending on the needs and priorities of the user.

Of course, for users to make this decision, they need to be aware of the strengths and drawbacks of each option. One of the key issues is energy consumption. Broadcast setups draw astronomical amounts of energy – a server can run at anywhere between 200 and 1,000 watts, whilst a probe draws as little as 25 watts – which makes a huge difference when operating in remote locations with limited power options.
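Those wattage figures translate directly into field constraints. The sketch below turns them into annual energy figures; the electricity tariff is an illustrative assumption, not a number from this article.

```python
HOURS_PER_YEAR = 24 * 365
TARIFF_PER_KWH = 0.25  # assumed illustrative price per kWh

def annual_kwh(watts: float) -> float:
    """Continuous draw in watts -> energy used per year in kWh."""
    return watts / 1000 * HOURS_PER_YEAR

for device, watts in [("Server, low end", 200), ("Server, high end", 1000), ("Probe", 25)]:
    kwh = annual_kwh(watts)
    print(f"{device}: {kwh:,.0f} kWh/yr, ~{kwh * TARIFF_PER_KWH:,.0f}/yr at the assumed tariff")

# A 1 kW server burns 8,760 kWh a year; a 25 W probe about 219 kWh -- a
# 40x gap that decides what you can run on generator or battery power.
```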


Operating generally at the edge of a network, embedded options are best when there is a risk of needing to divert to backup power. They're also more robust, with a life-span nearly double that of a server, and it goes without saying that in terms of size, they're


significantly smaller, which presents obvious benefits in applications where space and heat generation are key concerns. If embedded options offer so many benefits, then why would any broadcaster pursue a server-based solution? The simple answer is capacity – giving more reach and depth in terms of what can be

achieved (though with the advances IP brings, even this becomes a less pressing consideration). Centralising in this way actually also enhances the ability to engage in remote and distributed production, and in terms of convenience, server-based solutions are usually the closest you'll get to plug-and-play.



And finally, there are software solutions. Again, it’s IP that has really revolutionized this field. For broadcasters who already have their own server installations and want to maximize versatility whilst reducing costs, software provides a perfect option – tailored and specific, but highly versatile. Software solutions even allow you to go so far as running from the cloud, a concept that was unthinkable just a few years ago given the immense amounts of data in question.

At Bridge Technologies, this idea of maximising flexibility and specificity is what has driven our product philosophy. It can be seen in the VB440, which delivers a full spectrum of industry-leading, intuitive and in-depth production tools through an HTML5 browser, allowing for fully remote and distributed production anywhere. And it can also be seen in the VB330 – a solution for broadband and media operators who need to monitor and analyse thousands of streams along their backbone in real time and in parallel. By allowing network operators to choose between embedded, appliance or software installation (or indeed, a hybrid of all three as a result of the harmonized nature of the

underpinning coding), this grants maximum flexibility but delivers the same amount of specificity and deep-level functionality. So whilst unpacking another expensive broadcast toy can bring a bit of excitement, it's time for the industry to adjust its mindset to meet the potential that IP brings – detaching itself from the conventions of single-purpose hardware, and rejecting the idea that flexibility, agility and multi-purpose solutions necessarily mean a compromise on functionality. It's time to do more with less, and embrace the IP revolution.



TECHNOLOGY

“How to Mojo” by Ivo Burum (Part 3)
How do I record and edit mojo stories?

Ivo Burum is a journalist and award-winning television executive producer – and a mobile journalism pioneer. His company Burum Media provides mojo training in remote communities and to many of the world’s largest media groups. Dr Burum has written five books on mobile journalism. His latest, The Mojo Handbook: Theory to Praxis, published with Focal/Routledge, was chosen as one of 12 must-read books for investigative journalists in 2021. Ivo lectures in mobile and digital storytelling and in television production, and is discipline convener of Media Industries at La Trobe University, Australia. Twitter: @citizenmojo


Figure 1. The author with a variety of mojo equipment (Photo courtesy of Ivo Burum).

Recording Mojo Stories
Story rules: a story focus should always dictate the relevant technology and approach — mojo, or hybrid. Once the story has been developed, there are several key focus areas to address when shooting mojo stories.

Equipment Focus
The smartphone is a Swiss Army knife on steroids — in media terms, it’s a creative suite in a pocket. The mojo kit will vary depending on the job and the region. In some regions of the world, you probably wouldn’t carry a large kit because it’s too obvious; working small can mean you have more access and are less of a target. Using a smart device properly is not always easy, and it takes some practice to use powerful apps effectively. The type of app is often determined by the level of story you need to tell, so experiment with apps and make your own informed decision based on the job at hand.

Coverage Focus
Video stories need pictures. There are basically two types of coverage — the type that you set up, and actuality. Either way, you are looking to shoot well-composed shots and sequences that tell a story. My experience as a producer and cameraman working on international current affairs shoots in difficult regions suggests that a tripod can slow things down and quickly paint you as a target. Use tripods for long interviews, long wide pans and long-lens close-ups; otherwise, practice working hand-held, close to the subject, on a wide lens.

Ivo’s tip When working hand-held, press your elbows into your sides and hold the smartphone with two hands. This triangle creates added stability, in much the same way that a smartphone cradle, like a Beastgrip, does.

Figure 2. Stabilising a hand-held shot (Photo credit: Ivo Burum).

Irrespective of the type of coverage, it’s important to consider these questions:

• Will the story have a dramatic opener with close-ups, a telling wide shot, dynamic B-roll actuality, narration, an interview grab?

• What B roll is required to introduce interviewees, cover edits and narration, and compress and expand sequences to accentuate story points?

• What interviews and actuality are required to provide currency?

Figure 3. Standard frame sizes (by Ivo Burum).


• How will you close the story, and what vision and audio are required?

Following are a couple of templates for thinking about coverage. Check your framing: generally, use a medium close-up (MCU) for interviews and change to a medium shot (MS) if your interviewee moves around a bit. Figure 3 shows some basic shot sizes.

The classic five-shot rule (Figure 4) is a template that could be used to cover a simple activity or a localized event — how shots are juxtaposed with narration and other elements, and how they are edited, gives the piece its pace and gravitas.

Figure 4. Sequencing (by Ivo Burum).

Now look at the three shots in Figure 5. It is the same shot cropped, yet each frame provides a different feel, so choosing your framing is essential to telling the story. Which frame would you begin with and why? Add an interview, narration and relevant B roll and, if you’ve planned right, completed SCRAP and captured some “emotion,” you’ll have the raw material to edit a strong story.

Figure 5. Frame variation (by Ivo Burum).

The five types of shots that form a sequence (Figure 4), plus the main interview and narration, will tell your basic story. But your story may have anything from five to 13 sequences, each identified by a style of shot(s) and each having a structural role. When planning these shots/sequences it’s easiest (if possible) to consider them in the order they might be edited. Hence, my tip is to learn sequence coverage by filming something that has a definite process and structure. For example, practice filming someone cooking pasta, making eggs and bacon, or using a lathe to shape a block of wood into a table leg — how many different sequences are required to cover each of these processes, and to edit them from, say, 30 minutes into a two-minute film? How much B roll will you need? What about narration? Simple process shoots, where structure is evident, encourage a focus on coverage — and in particular on sequencing — with an emphasis on emotion, information, B roll, interviews and the elements required to turn a long process into a short film.

Understanding eyeline is important (Figure 6). If you set up for a single-camera location interview, you might consider the front-on eyeline set-up in ‘B’. In many regions it is quite acceptable to hold the camera in front — at chest or waist height — and have the interviewee respond over the camera. When adopting ‘B’, place the camera as close to front-on as possible, and place the journalist (mojo) next to the lens. Traditional location interview eyeline would have you place the camera behind your eyeline, as in ‘C’, but not if you are working solo: ‘C’ is used when working with a camera person, who can check the frame. If you need the shot to be a profile, as in ‘A’, it can be, but otherwise avoid profile shots, for the following reasons:

• You get more variety of shots, without moving, if you shoot from the front.

• You can easily push the camera in from the wide lens when shooting from the front.

• You can even zoom for a close-up (not advisable unless imperative).

Figure 6. Eyelines (by Ivo Burum).

The front-on shot (or almost front-on) is more powerful because we see the person’s face, their eyes and their emotion, all without moving the camera away from their eyeline.

Figure 7. Screen direction (by Ivo Burum).

Here are a few tips for shooting shots and sequences:

• Switch the smartphone to Airplane mode to avoid unwanted calls and WiFi interference.

• Make sure your battery is charged.

• Make sure you have space on your phone.

• Let someone know where and when you will film.

• Use a structural plan to indicate clearly what you have filmed and how much is left to film; this will help during the shoot and the edit.

• Prepare specific research and questions for your interviewee and set these out against your structure.

• Shoot with the light over your shoulder and on the subject.

• Don’t use the tripod unless you must.

• For stability, create a V with your smartphone, hands and elbows locked into your body (see Figure 2 above). Using a wide lens also helps with stability.

• Always switch the camera on and wait for the counter to roll before asking questions.

• Ask open-ended questions that don’t give you a yes or no answer.

• Frame your interviewee in medium close-up (MCU) unless they move their hands about, in which case choose a wider medium shot (MS).

• Try and hold B-roll shots for at least 10 seconds to ensure they are steady and to enable multiple sections of the shot to be used.

• Always shoot a few seconds of static shot before and after a panning shot, so your pan is steady at both ends and you end up with three potential shots.

• Shoot B roll before the interview (for example, as you enter a building), during actuality with interviewees, and immediately after the interview.



• When it gets busy, think about information (story) and not the shots.

• Don’t forget the detail and your location-setting shots.

• Steady movement within a frame, or that which is created by moving the camera, can create more dynamic shots.

• When you feel you have enough B roll, get some more.

• Be aware of screen direction and the 180-degree rule, which dictates that you stay (keep your cameras) on one side of the screen line. Figure 7 shows the screen line; in this example, if our cameras stay on one side of the line, person A will always be talking left to right and person B right to left, and they will appear to talk to each other. Having said that, learn how to break rules, and the boring line, by using neutral close-up shots that enable you to cross the line. This is one reason why B roll is so important. (A toy geometric check of the rule follows this list.)
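For readers who like to see the rule mechanically, here is a small sketch (my own construction, not part of Burum’s toolkit): the screen line runs through the two subjects, and the sign of a 2D cross product tells you which side of that line each camera position sits on.

# Toy check of the 180-degree rule (illustrative construction).
# Cameras should stay on one side of the line through subjects A and B.

def side_of_line(a, b, cam):
    ax, ay = a
    bx, by = b
    cx, cy = cam
    # 2D cross product of AB with A->cam: positive on one side
    # of the line, negative on the other.
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

person_a, person_b = (0.0, 0.0), (4.0, 0.0)
camera_1, camera_2 = (2.0, -3.0), (1.0, 2.0)  # camera_2 has crossed the line

same_side = (side_of_line(person_a, person_b, camera_1) *
             side_of_line(person_a, person_b, camera_2)) > 0
print("Screen direction preserved:", same_side)  # -> False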

Coverage aspects that should be considered are:

• Will a stand-up, or piece to camera (PTC), be needed and, if so, why? Stand-ups can be effective when there is no B roll, no interviews or actuality, and to establish the journalist on location. When shooting a stand-up, keep the script short — 8–10 seconds, one thought per sentence and one sentence per paragraph. Choose a key word for each paragraph. Roll into a retake without buttoning off. If there is a problem, it’s usually in the script. If you fix the script and still fluff it, put a sharp stone in your shoe; when your concentration is on the stone, record the stand-up.

• Will you need an audio buzz track of the atmosphere in a room? Usually recorded for at least 30 seconds, this is a continuous audio recording used in post-production to cover and smooth contrasting audio edits.

• Will graphics be required, and how and where will these come from?

• If it’s a historical piece, or an update, will archive material be used, where will it come from and who pays for it?

• How will interviews and narration create story flow and bounce?

• Use the correct mojo gear (see Tools).

Audio Focus
Recording audio in the field is about being ready with the right microphones and a sound strategy that enables you to react to location inputs. The recording task is made more complex because mojos predominantly work on their own, locations can be noisy, and when shooting video quickly on location it’s not always possible to be close to the sound source. In difficult and changing audio situations, I’d suggest:

• Always have a shotgun mic attached to your phone/cradle and a lapel mic in the bag, so that you can respond quickly.

• If you need an interviewee to walk and talk, use a radio mic.


• If you need your questions cleaner, use a splitter and mic yourself with a lapel mic. Remember that both the interviewee’s and the interviewer’s microphones will generally be recorded onto the same track on your smartphone, so don’t overlap your voice with the interviewee’s.

• If you need a true split, use a device like a Zoom H1 to record a second track of audio, and record a clap at the beginning of the recording to sync the two tracks in the edit (a rough sketch of this sync step follows after the summary below).

In summary, video stories are more than radio with pictures. Understanding the Rubik’s Cube nature of combining video elements to create a seamless story is a key step in the mojo workflow. You need discipline in your approach to successfully pull all the pieces together. Working as a mojo is a holistic and organized process. You need to be smart and ready to react to location inputs. You need to think and work like a journalist, and focus, record and edit the story like a filmmaker. Staying ethically and legally healthy and understanding the technology — but not being limited by it — is also part of the job. Knowing how to do all this is difficult, but the control it gives mojos over their stories is uplifting. Give it a go and you’ll grin broadly at the unfolding possibilities.
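The clap mentioned above gives you a spike you can line up by eye on the two waveforms. As a rough illustration of what that alignment amounts to (a naive sketch, assuming both tracks are already loaded as sample arrays; real edit tools typically correlate whole waveforms rather than a single peak):

import numpy as np

def clap_offset_samples(phone_track: np.ndarray, recorder_track: np.ndarray) -> int:
    """Estimate the offset between two recordings of the same clap.

    Naive approach: treat the loudest sample in each track as the clap
    and return how many samples the second track must be shifted.
    """
    clap_a = int(np.argmax(np.abs(phone_track)))
    clap_b = int(np.argmax(np.abs(recorder_track)))
    return clap_b - clap_a

# e.g. at 48 kHz, an offset of 24_000 samples means the recorder
# started half a second earlier than the phone.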

Figure 8. Mojos recording sound at a suspected murder scene (Photo credit: Ivo Burum).


Recording Mojo Audio on a Smartphone
In the TV business we have a saying, “waiting on sound,” and no truer words have ever been spoken. Soundos (sound recordists) are always waiting for something, like an air-con to switch off or a car noise to disappear, and we wait with them. And when we aren’t waiting on sound, we often treat sound as an afterthought. Yet sound is the most important of all the recorded elements. Here are some tips on location sound recording, picked up over my years of television and mojo production and training, which will assist your mojo work.



Figure 9. Polar patterns (by Ivo Burum).

Mojos typically work without a sound recordist and need to be aware of sound issues when interviewing, shooting B roll and recording actuality audio. It’s a difficult juggle, but with some practice mojos can become very effective one-person bands.

Location Audio
Clean audio recording begins with a sound strategy — understanding the parameters of the task and planning accordingly. If you’re recording speech, one of the dynamic microphones or cheaper lapel mics mentioned under mojo tools might be right for your job. At a press conference, a demonstration, or a sporting event, where cables can be an issue, a wireless set-up might work best. Whether you’re using a lapel, shotgun or wireless microphone, it helps to understand sound recording principles. Generally, the closer a microphone is to the sound source, the more focused the audio and the richer the sound. A microphone’s polar, or pick-up, pattern determines how it picks up sound — at the front, the side, behind or all around the microphone capsule (see Figure 9; a rough numerical sketch follows below):

– Omni-directional microphones have a 360-degree pick-up pattern and are excellent for placing on a table to record several sources, or for use as neck mics.

– Cardioid, super-cardioid and super-directional shotgun microphones have a narrower pick-up pattern and are found in handheld microphones, or on video cameras used by news and documentary makers and mobile journalists.

Recording Actuality Audio
Actuality is the time-sensitive content that you don’t need to set up — police, ambulances, or witnesses at an accident — and it’s not full of the journalist’s questions. Sometimes called B roll, cutaways, overlay and a bunch of other names, actuality is used during an edit to create atmosphere and cover cuts, and it can give the story currency.
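The pick-up patterns in Figure 9 can also be sketched numerically. The textbook idealisation models an omni capsule as equally sensitive all around, and a cardioid as having sensitivity (1 + cos θ)/2, where θ is the angle from the direction the mic points. These are standard idealised formulas, not measurements of any particular microphone:

import math

def omni(theta_deg: float) -> float:
    # Idealised omni-directional capsule: equally sensitive all around.
    return 1.0

def cardioid(theta_deg: float) -> float:
    # Idealised cardioid: full sensitivity at the front (0 degrees),
    # half at the sides (90 degrees), none at the rear (180 degrees).
    return 0.5 * (1 + math.cos(math.radians(theta_deg)))

for angle in (0, 90, 180):
    print(angle, omni(angle), round(cardioid(angle), 2))
# 0 1.0 1.0
# 90 1.0 0.5
# 180 1.0 0.0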


Ivo’s tip Never record actuality or B roll mute because you never know what will be said.

Figure 10. Framing (by Ivo Burum).

On-camera cardioid, super-cardioid and super-directional shotgun microphones may be the right option for mojos covering evolving run-and-gun events, primarily because mojos normally work close to the source and not on a long lens. These on-camera microphones have a directional pattern that focuses on audio in the direction they are pointed.

Recording Interview Audio
Individual interviews are generally recorded in a medium close-up (MCU) shot that’s cut off around the chest area; close enough to feel emotion and to enable


a sound recordist (when working with a crew) to swing their boom close to the edge of frame. A mojo will shoot most interviews in an MCU and without a sound recordist. For a more animated interviewee, a slightly wider mid-shot (MS) might be best. Lapel, or lavalier, microphones are preferable for recording interviews. Pinned 6 to 8 inches (the spread of a hand) from the interviewee’s mouth, they are close to the sound source and shielded from the wind by the subject’s body. The microphone cable should be at least 2.0 meters long, like on the Sennheiser XS Lav, so it can be hidden under a shirt.

Cardioid and super-cardioid shotgun microphones, placed on the smartphone, will work fine if they’re positioned within a meter (ideally around 750 mm) of the interviewee and there is not too much background noise. Talent should be positioned so that extraneous noise hits the interviewer’s back and the back of the microphone. If interviewees are sitting next to each other, try using an omnidirectional microphone placed between them. Alternatively, a splitter plugged into the smartphone enables two microphones to be used when interviewing two people. You’ll get two



Ivo’s tip If you are panning between two interviewees, ask them not to overlap each other when they speak.

Figure 11. Mic positioning (by Ivo Burum).

Ivo’s tip Using a wide lens on your smartphone rig enables mojos to get their directional or top mic closer to the subject and the sound source while retaining an MCU interview frame. If you want a tighter audio perspective, don’t zoom in — walk in, so that the mic is closer to the subject, and work on a steadier, wider lens and frame.

distinct audio feeds that are recorded as one track on a smartphone. Digital recorders can be handy for recording audio separately when two tracks are needed. The Zoom H1 (USD 100) is small, cheap and effective. I also use the Tascam DR-10C PCM recorder (USD 180), which fits into the palm of a child’s hand. You could also use an audio app that records split tracks, and sync them in the edit.

Handheld microphones are often used in very noisy location news reporting, where they are pivoted between the reporter and the interviewee. With practice, mojos can use this technique effectively and quickly.

Ivo’s tip Avoid giving handheld microphones to interviewees, who will invariably move them away from their mouths and wave them about.

Radio microphones, also called wireless, are mostly lapel or handheld. They use a transmitter that sits at the source, rather than a cable, to send a signal up to 150 meters to a receiver that sits on a camera or smartphone. Radio mics are used when the reporter or the interviewee is mobile, or some distance from the camera or smartphone.

Ivo’s tip Buy the best radio mic set you can afford, because you probably won’t change it.

Recording Narration Audio
Narration is usually recorded during an edit, and for mojos this can occur on location. On smartphones, I mostly record narration as video with audio, into the Camera Roll (iOS) or the Gallery (Android). The easiest way to do this is to place your hand over the lens, so that the shot is black and easily recognizable as narration, and record the narration as video. Doing this means that all assets (video and audio) are found in the same folder in the app, so you don’t have to search for your narration when editing. You can detach the audio from the black video during the edit, or simply cover it with B roll. Another quick option is to use the edit app’s voiceover record function during the edit.

Make sure your microphone is platform-ready; smartphone-compatible microphones use a TRRS (tip, ring, ring, sleeve) plug with three rubber rings, while TRS (tip, ring, sleeve) plugs, with two rubber rings, are generally used with DSLRs.

Figure 12. Narration (by Ivo Burum).



Ivo’s tip When working quickly it’s best to record narration into the same location on the smartphone as video.

Sound Recording Do’s

• Always switch on airplane mode so you don’t get calls or experience WiFi distortion.

• Always have a directional microphone on the smartphone, just in case.

• Always carry a lapel microphone.

• Always carry a spare battery for battery-powered microphones.

• Always carry a windsock or dead cat for filming outdoors.

• Always record within a meter of the interviewee to avoid background noise.

• Always do a test and check audio at the beginning of the recording, then check your recorded audio before you leave each location.

• Always record audio clean in the field; don’t rely on fixing it later in post-production.

• If you have time, make sure you get a buzz track — 30 seconds of natural location audio. This can be laid under a series of B roll shots to smooth audio transitions in the edit.

• Treat every shoot like an overseas job, where you need to record everything clean the first time, because you can’t go back.

Finally, the best tip I ever got about recording audio is to get the microphone close to the sound source and to listen to the location — this is still key to recording dynamic on-location audio.

Ivo’s tip On location, record narration in a car parked somewhere quiet, or in your hotel room in between two mattresses positioned as a tepee, or simply on a park bench with your jacket over your head. Do not record in an echo chamber like a stairwell.

Editing on a Smartphone
Being able to shoot, edit and publish using a smartphone enables immersive cross-border reporting. Rana Sabbagh, senior editor for the Middle East and North Africa (MENA) at OCCRP, believes all journalists should learn to edit stories on a mobile: “Editing generates diversity in storytelling and a local point of view.” Today’s mobile edit apps allow mojos to slide B roll, alter transitions, duck audio, mix music, create versatile titles, and render and send


Ivo’s tip Newer mics can be platform-agnostic, choosing between TRS and TRRS automatically.

projects to various target sites from almost anywhere with a connection. Android or iOS — it doesn’t matter. But what’s important is not only the technology but the skillset it’s wrapped in. Video editing is a way of thinking about converging states of immediate possibility. More specifically,

it’s a visual language and a form of digital writing that enables journalists to be video storytellers. I’ve spent thousands of hours in edit suites, and here’s a summary of what I do at each of the key stages.

Planning the Edit in the Field
Make a SCRAP story plan early in the production cycle and use that as your draft edit map. SCRAP answers journalism’s five ‘Ws’ (who, when, what, where and why) and, together with the five-point story and structural plan, it forms my first edit map. I use this plan to jot down the elements I record — my interview questions, actuality and B roll — against the structural points I’ve listed. This indicates what I’ve shot (and what’s missing) and how my content might best be used in the edit.

Figure 13. Edit platforms (Photo credit: Ivo Burum).

Beginning the Edit
Starting is important, especially when you can’t make up your mind about how to start. More than 30 years ago, Ulla Ryghe, Ingmar Bergman’s editor, taught me the hardest lesson in editing when she said, “Ivo, you need to learn to kill your babies.” Beginning the edit helps identify which shots and interviews work and which can be thrown out — killed.



Start with the strongest elements — actuality, interview grabs, overlay, music and/or narration — that best capture the story’s emotion. A stand-up can also be very effective if the journalist is in the middle of a riot. Don’t get bogged down finessing your edit before you find the structure and the emerging story. Once the story is laid out on your timeline, only then add B roll (unless, of course, B roll is seminal in early timing).
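To make the edit-map idea from “Planning the Edit in the Field” concrete, here is one way the plan could be held as data. This is a sketch only; the point names and fields are hypothetical, not Burum’s own notation:

# Hypothetical five-point edit map: structural points mapped to
# what was planned and what has actually been shot.
edit_map = {
    "opener":    {"planned": "dramatic CU + telling wide shot", "shot": True},
    "interview": {"planned": "MCU, key grabs noted",            "shot": True},
    "actuality": {"planned": "witnesses at the scene",          "shot": False},
    "b_roll":    {"planned": "location-setting shots",          "shot": True},
    "close":     {"planned": "PTC, 8-10 seconds",               "shot": False},
}

still_to_film = [point for point, entry in edit_map.items() if not entry["shot"]]
print("Still to film:", still_to_film)  # -> ['actuality', 'close']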

Figure 14. Checkerboard editing (by Ivo Burum).

The Story Cut
Using a video edit app with at least two video tracks (V1 and V2) enables the story cut to be edited on V1, leaving V2 free for adding B roll. This places the focus on

Ivo’s tip Try structuring a story to answer what listeners’ questions might be, in the order you think they might arise.

Figure 15. Butt and J cut (by Ivo Burum).

story — writing in and out of the interview grabs and actuality — rather than on B roll. However, two video tracks also enable B roll to be added early and slipped during the fine cut. Figure 14 describes checkerboard editing, where video PTCs and interviews checkerboard with B roll and narration. This is done so that you can replace, shorten or extend B roll shots when they are inserted, something that you can’t easily do when editing with only one video track. The example shows how you might extend the wide shot to overlap some of the interview, and do the same with CU 1 (close-up) or CU 2. This creates a more dynamic split edit. A2 is used to add music or FX.

Changing butt edits (where one edit butts up against the next) to split edits, or ‘J’ and ‘L’ cuts (where edits overlap at the beginning or end of a clip), creates a more dynamic and urgent edit pace — pre-empting audio (J cut) or holding audio (L cut). A minimal timeline model of these cuts appears below, after the narration notes.

Editing B Roll
B roll, overlay, or cutaways cover unwanted zooms and jump cuts in an interview, can hide mistakes in shooting, add colour to narration, and are used to compress and expand a sequence. During and/or just after an interview, make a note of the B roll shots you need. The interviewee might mention an important map; to assist editing that scene, B roll might include a sequence with the person looking at their map in a wide shot (WS) and in close-up (CU), plus a CU of the map detail.

Figure 16. B roll (by Ivo Burum).

Writing and Editing Narration
Narration is used to compress rambling sync dialogue, expand sequences and segue between story elements and structure. It bounces the story forward by relating to the outgoing sync grabs and introducing the incoming video. This is called writing in and out of pictures. We used to write and record narration in the car sitting in a park, or in the hotel suite in between two mattresses, even on a train on the way to the edit. In the mobile ecosphere, it often happens at the scene, where your five-point plan, with your location scribbles and narration notes, is essential.
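The split edits described under “The Story Cut” above can be made concrete with a minimal timeline model (a hypothetical structure for illustration; no real edit app exposes exactly this). In a butt edit a clip’s audio and video start together, while in a J cut the incoming audio leads its picture:

from dataclasses import dataclass

@dataclass
class TimelineClip:
    name: str
    video_in: float  # seconds on the timeline where the picture starts
    audio_in: float  # seconds on the timeline where the sound starts

# Butt edit: audio and video cut together.
butt = TimelineClip("interview", video_in=10.0, audio_in=10.0)

# J cut: we hear the interview 1.5 s before we see it,
# pre-empting the incoming picture.
j_cut = TimelineClip("interview", video_in=10.0, audio_in=8.5)

# An L cut is the mirror image: the outgoing clip's audio
# holds on underneath the incoming picture.
print(j_cut.video_in - j_cut.audio_in, "seconds of audio lead")  # -> 1.5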



Figure 17. Mics (composite by Ivo Burum).

Read the narration script into an external microphone plugged into the smartphone and positioned about 6 – 8 inches from your mouth. Record audio (as vision and audio) directly into a camera app, with your hand over the lens, so the shot is black. This enables you to find it fast during the edit, because it will be in the same folder as your video and being black, it will stand

Ivo’s tip Narration shouldn’t repeat what is said in the first words of the incoming audio grab.

out. You could also use the edit app’s voice over feature. Except in circumstances where you need audio at super-high bitrates, or specific formats, you won’t need to use a separate audio app to record narration for video. But you will need a good microphone. When writing and timing narration, you can work on


three words a second — the speed at which we generally read for television. If your video is seven seconds long, you’ll need about 21 words of narration. Here are some basic tips for writing and recording narration during the edit:

• Speech is more informal, so use spoken English

• Write in the present tense and active voice so the audience feels the currency

• Use mostly simple phrases and sentences — one idea per phrase or sentence

• Sentences should be about 5–25 words

• If you need to, use your hands to help inflect and punctuate for rhythm

• Don’t inflect every word, like a newsreader

• Don’t read and report, tell us a story

• Remember the first few words are often not heard, as the audience tunes in to the story
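The three-words-a-second rule is easy to sanity-check. A minimal helper, for illustration only:

def narration_word_budget(clip_seconds: float, words_per_second: float = 3.0) -> int:
    """Rough number of narration words that fit over a clip."""
    return round(clip_seconds * words_per_second)

print(narration_word_budget(7))   # -> 21, as in the example above
print(narration_word_budget(12))  # -> 36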


The Fine Cut
Once the edit is at rough cut (with B roll in place) you’re ready for the fine cut. Where the story’s not working on the timeline, where it fails to move forward, it’s often because the sequence includes redundant information. If you have that problem, try the following steps:

• Go back and watch the edit once without changing anything

• Make notes about what works and what doesn’t

• Go back to the beginning and start fine cutting:

- add or replace interview shots and B roll

- slide B roll shots left or right in the timeline for best impact

- shorten shots to lose dead air (any gaps that shouldn’t be there)

- ensure that words aren’t clipped

- re-record and relay any draft narration.

Ivo’s tip Get a rough narration down quickly to establish timing and story structure and then add B roll and finesse the edit.

This is different for everyone, but I look for the following in a cut:

• Emotion — the cut has moments of emotion

• Story — the cut focuses and moves the story forward

• Rhythm — cuts occur at the right time to assist story bounce

• Screen and story relativity — that we understand the story continuity

I will always ask:

• Why am I cutting to new information?

• Is that new information right?

• Where does it take the story?



Here are some basic edit rules:

1. Never make a cut without a positive reason.

2. When undecided about the exact frame to cut on, cut long rather than short.

3. Whenever possible, cut ‘in movement’.

4. The current is preferable to the ‘stale’.

5. Substance first — then form.

Don’t use a shot unless it moves the story forward editorially and emotionally. Cut for story, even in fast montage sequences.

Playing out the Project into a Video
On the timeline the video is a project (computer language) that needs to be rendered (transformed) into a video. The process involves rendering the timeline at either a high resolution (e.g. 4K or 1920 × 1080 HD) or one of the lower video resolutions (e.g. 640 × 360). Render and export at a low resolution if upload speeds are low, then re-render at a higher resolution and send that version when you have better bandwidth.
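As a sketch of that two-pass delivery idea, here is what the equivalent desktop step could look like, assuming the open-source ffmpeg tool is installed (mobile edit apps do this internally; the file names and CRF quality values are illustrative):

import subprocess

def render(src: str, out: str, width: int, height: int, crf: int) -> None:
    # Re-encode the source at the requested frame size; a higher CRF
    # means a smaller file at lower quality.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-vf", f"scale={width}:{height}",
         "-c:v", "libx264", "-crf", str(crf),
         "-c:a", "aac",
         out],
        check=True,
    )

render("story.mp4", "story_360p.mp4", 640, 360, crf=28)     # quick low-res upload
render("story.mp4", "story_1080p.mp4", 1920, 1080, crf=20)  # full quality, sent later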

Once rendered, export to the Camera Roll or Gallery and upload to YouTube, or to a proprietary site. Even after decades of video production, editing still feels like magic. Only 25 years ago this magical experience was costly and relatively exclusive. Now we carry our own creative edit suite in our pockets. All we need to do is learn how to use our smartphones to write with pictures — that’s the editing process. Go mojo…
