TM Broadcast International #113, January 2023


EDITORIAL

2023 begins and TM Broadcast International is fully focused on two of the major trends that will drive our industry's evolution this year. We have thoroughly analyzed the technical developments and business possibilities that virtual production techniques and AI-curated metadata in digital environments will bring to the companies in our industry, not in the future but in the present.

Virtual production is destined to be one of the great game-changers in the multimedia content creation sector. The technology enables more flexible workflows, broader creative possibilities and much more sustainable modes of consumption and investment than traditional techniques. We offer you a detailed report on the technological capabilities and business possibilities provided by two of the most widespread technical infrastructures in virtual production environments: Green Screen or LED Volume?

Our expert, Yeray Alfageme, Business Development Manager of Optiva Media, an EPAM Company, has brought us an in-depth analysis of everything that knowing our users perfectly can contribute to our business. Through analysis of the Big Data generated by content consumers' activity on the platforms, and through artificial intelligence curation techniques, we can multiply advertising revenue in digital environments. We can even slightly dilute one of the biggest risks associated with this industry: not knowing in advance whether what we are going to produce will be successful.

As a magazine that covers every corner of our field, we have taken a close look at the technical details of one of the most significant post-production companies in LATAM: Cinema Maquina. Adapted to the new times, its workflows have evolved to gain flexibility and efficiency thanks to its great investment in technology. This Mexico-based post-production company has worked with major platforms such as Netflix, Amazon Studios and HBO, and has become one of the main players in the American market.

Finally, in this edition, we travel to East Africa to learn about the working methods, the state of infrastructure, renovation plans, development trends and the challenges facing one of the oldest public broadcasters in the region: the Kenya Broadcasting Corporation. Will you join us on this journey?

TM Broadcast International #113, January 2023

Editor in chief: Javier de Martín, editor@tmbroadcast.com
Key account manager: Susana Sampedro, ssa@tmbroadcast.com
Editorial staff: press@tmbroadcast.com
Creative Direction: Mercedes González, mercedes.gonzalez@tmbroadcast.com
Administration: Laura de Diego, administration@tmbroadcast.com

TM Broadcast International is a magazine published by Daró Media Group SL, Centro Empresarial Tartessos, Calle Pollensa 2, oficina 14, 28290 Las Rozas (Madrid), Spain. Phone: +34 91 640 46 43. Published in Spain. ISSN: 2659-5966

Kenya Broadcasting Corporation

We wanted to go to East Africa to find out, first-hand, the state of broadcast technology in this part of the world. Through an exclusive interview with the oldest broadcaster in the region, with almost a century of history, we have been able to identify the same trends: transition to IP, adaptation to digital ecosystems, and constant investment in and modification of infrastructure to broadcast in HD.

Virtual production with dock10 and MARS Volume

Virtual production is destined to dominate the multimedia solutions sector. These techniques, which are expanding rapidly in our industry, require fewer physical resources than more conventional production methods. This provides three key advantages for the future of the sector: flexibility, sustainability and great creative capabilities.

SUMMARY

News (page 6)
Kenya Broadcasting Corporation (page 20)
Virtual production: Green Screen or LED Volume? (page 30)
AI and Big Data in broadcast: FAST channels, how to really customize content; Signiant and Digital Nirvana (page 52)
Postproduction: Cinema Maquina (page 70)
SRT Alliance (page 80)
OB1: The great cinematographic stake of Procam Take 2 (page 88)

Kiloview launches a new generation of video encoders: Kiloview E3

Kiloview has released the E3, a new generation of video encoders. It handles video input and loop-through over HDMI up to 4Kp30 and 3G-SDI up to 1080p60, encoding HDMI and 3G-SDI video in H.265 and H.264 simultaneously. It also features an enhanced chipset and supports simultaneous transmission to 16 destinations with an adjustable bit rate of up to 100 Mbps. The product is scheduled for release on January 16th, 2023.

E3 allows encoding video sources from any 3G-SDI or HDMI port, mixing both video sources into one output in PIP or PBP mode, or even encoding video sources up to 1080p60 from both HDMI and 3G-SDI ports simultaneously for one transmission.

For all IP-based video transmission processes it supports the HEVC video codec as well as H.264.

It can process either source, or a mix of the two video sources, over multiple protocols including NDI|HX2, NDI|HX3, SRT, RTMP, RTSP, UDP and HLS.

The Kiloview E3 system can transmit live to up to 16 different platforms simultaneously by outputting both a main stream (up to 8 channels in 1080p) and a secondary stream (up to 8 channels in 720p) with an adjustable bit rate. The maximum speed is up to 100 Mbps.
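As an illustration of that capacity planning, the minimal sketch below checks a hypothetical set of simultaneous outputs against the published limits. The per-stream bitrates, and the reading of the 100 Mbps figure as an aggregate ceiling rather than a per-stream one, are assumptions made for the example, not vendor specifications.

```python
# Hedged planning sketch, not vendor software: verify that a hypothetical set of
# simultaneous outputs stays within the encoder's stated limits (16 destinations,
# 100 Mbps). The per-stream bitrates below are invented for illustration.

MAX_DESTINATIONS = 16
MAX_TOTAL_MBPS = 100  # assumed here to be an aggregate ceiling

# 8 main streams at 1080p and 8 secondary streams at 720p, as described above.
destinations = (
    [{"name": f"main-{i}", "resolution": "1080p", "mbps": 8.0} for i in range(8)]
    + [{"name": f"sub-{i}", "resolution": "720p", "mbps": 4.0} for i in range(8)]
)

total_mbps = sum(d["mbps"] for d in destinations)
assert len(destinations) <= MAX_DESTINATIONS, "too many simultaneous destinations"
assert total_mbps <= MAX_TOTAL_MBPS, f"aggregate bitrate {total_mbps} Mbps is too high"

print(f"{len(destinations)} destinations, {total_mbps:.0f} Mbps aggregate")
# -> 16 destinations, 96 Mbps aggregate
```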

The solution can process rotation/crop and add custom overlays, including images, text or other OSD to the output video.

E3 allows encoding audio sources through its 3.5 mm line input and SDI/ HDMI embedded audio input. It can convert audio into multiple formats, including AAC, MPEG-4, MPEG-3, MPEG-2, Opus or G.711. The device supports embedded audio encoding of 8 channels of SDI inputs and 4 channels of HDMI inputs.

The encoder is equipped with a 1.14-inch LCD display with touch buttons. Connectivity, IP address, CPU, memory, resolution, temperature and real-time tally status can be checked on the LCD display. 


Sony, Plateau Virtuel and Studios de France develop a major virtual production studio with LED solutions

The collaboration between Plateau Virtuel, Studios de France and Sony resulted in a modern studio adapted to virtual production flows with LED volumes. Sony Crystal LED screens have been deployed in the virtual studio to create a surface of 90 m2 (18 meters wide by 5 meters high). The capture technology also comes from the Japanese multinational: it has been integrated with the Sony Venice.

Plateau Virtuel is a company focused on virtual production and a subsidiary of the Novelty-Magnum-Dushow Group, a business group focused on event production. Studios de France is a set supplier subsidiary of AMP Visual TV, a group dedicated to live production in France.

The project was born after shooting a campaign for the European Space Agency in virtual production with the Venice camera. Plateau Virtuel wanted to take virtual production a step further in terms of playback and also on-set quality.

“It took us 15 days to set up this screen with Sony equipment. We felt it was important to have a suspended structure to be able to slide floors underneath, LED or otherwise. We also have an LED ceiling that allows us to integrate everything if necessary,” says Bruno Corsini, Technical Director of Plateau Virtuel.

The curved display consists of 450 "units", each with a combination of 8 LED modules. The technology offers a very high contrast ratio and a pixel pitch of 1.5 mm.

This new studio, intended for audiovisual and film productions, has also been designed for television production. AMP Visual TV representatives say they wanted to deploy a "laboratory platform" to respond to all kinds of requests.

The presentation of the studio will take place on February 12 in Seine-Saint-Denis, in the north of Paris.


ADVERTORIAL

Zero Density Powers FIFA World Cup Coverage Around the Globe

As fans watched, broadcasters around the globe were bringing smashing graphics to the screen with the help of Zero Density products. Using its real-time graphics hardware and software, Belgium’s RTBF, France’s TF1, Qatar’s Alkass, Maldives’ Ice TV, Malaysia’s Astro, Slovenia’s RTV SLO, and UAE’s Asharq News built virtual studios for all their live coverage.

While a few broadcasters opted for fully green screen studios, others built hybrid sets with LED panels in addition to the cycloramas, or went for virtual set extension in a physical studio. They all had one thing in common: smashing graphics, high-quality storytelling and cutting-edge Reality tools.

Reality Engine offers broadcasters a wide range of tools and features to create the 'wow' effect on screen. Flycam and teleportation were among the most popular features to be utilized. Flycam gives the ability to go beyond the limitations of the physical location and provides 360-degree visual storytelling capability. France's TF1 placed its studio on the stadium's edge, looking over the field. With flycam, TF1 connected the physical camera to the virtual camera and took the audience to any location in the stadium to see player lineups, statistics and more.

Teleportation allows broadcasters to 'beam' guests into the studio from a remote location as augmented reality graphics. For example, RTBF teleported soccer players from Qatar to get their post-game comments, and Alkass Sports teleported reporters as well as guests to receive their commentary on the game and news from the field.

All these creative broadcasters offered dynamic player lineups, statistics and scoreboards as augmented reality graphics to their thousands of viewers and pulled them into the story. For example, RTV SLO extended its regular U-shaped cyclorama with additional flat pieces to have more space for the graphics and the host to move. The shape and color difference can be handled easily by Reality Keyer, the first and only real-time image-based keyer with advanced clean plate technology, making the system much more advanced than just a standard chroma keyer.

Read on the Zero Density website to discover how each broadcaster decided to enhance their creativity with Reality Engine.

Photo courtesy of TF1 & Kennedy Agency & Dreamwall.
Teleportation of Thorgan Hazard. Photo courtesy of RTBF & Dreamwall.

"Avatar: The Way of Water" graded with Blackmagic Design's DaVinci Resolve tool

Blackmagic Design has shared a recent interview with Tashi Trieu, professional colorist, about his work on James Cameron's "Avatar: The Way of Water". The colorist did his grading in DaVinci Resolve Studio, while the development of the aesthetics was mainly carried out at WetaFX.

The professional was involved at the beginning of pre-production. A series of very detailed tests were carried out to achieve the best possible solutions. Shooting in stereoscopy – the 3D feel – added a higher level of difficulty. Trieu says that polarized reflections can result in different levels of brightness and texture between the eyes. That can generate gradient effects in “3D.” “I’ve never done such detailed camera tests before,” he says.

Thanks to the intermediation of WetaFX, a wide creative latitude was provided at the digital intermediate stage. Coupled with this factor, the LUT used is a simple S-curve with a basic color spectrum mapping from S-Gamut3.Cine to P3-D65. "These two features provided flexibility in bringing different tonalities to the film," says the colorist.

For the benefit of those involved, the mechanics of the work were simplified and automated as much as possible. Using DaVinci Resolve's Python API, the colorist created on this project "a system to index the delivery of visual effects once they were sent to the digital intermediate, so that my editor Tim Willis (Park Road Post Production) and I could quickly import the latest versions of the shots."
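The interview does not detail how that indexing system works, but DaVinci Resolve's scripting API does expose media pool imports from Python. The minimal sketch below shows the general idea under assumed conditions: the delivery folder path and the "SHOT_0010_v003"-style naming convention are hypothetical, and the code is an illustration rather than a reconstruction of the tool used on the film.

```python
# Hedged illustration of indexing VFX deliveries with DaVinci Resolve's Python API.
# Assumptions: Resolve Studio is running with scripting enabled, deliveries land in
# one folder, and files are named like "SHOT_0010_v003.exr" (hypothetical scheme).

import re
from pathlib import Path

import DaVinciResolveScript as dvr  # module shipped with DaVinci Resolve

DELIVERY_DIR = Path("/mnt/vfx_deliveries")  # hypothetical delivery location
SHOT_PATTERN = re.compile(r"(?P<shot>[A-Za-z0-9]+_\d{4})_v(?P<version>\d{3})")


def latest_versions(delivery_dir: Path) -> dict:
    """Index the delivery folder, keeping only the newest version of each shot."""
    newest = {}  # shot id -> (version, path)
    for path in delivery_dir.iterdir():
        match = SHOT_PATTERN.match(path.stem)
        if not match:
            continue
        shot, version = match.group("shot"), int(match.group("version"))
        if shot not in newest or version > newest[shot][0]:
            newest[shot] = (version, path)
    return {shot: path for shot, (_, path) in newest.items()}


resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

# Import the newest version of every delivered shot into the current media pool.
to_import = [str(p) for p in latest_versions(DELIVERY_DIR).values()]
if to_import:
    media_pool.ImportMedia(to_import)
```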

The challenges associated with the production process, highlighted by Tashi Trieu, were not only related to the stereoscopic format, but also to the high frame rate (48 f/s).

“Even on a state-of-the-art workstation with four A6000 graphics processing units, it’s very difficult to guarantee real-time operation. It’s a very delicate balance between what is sustainable with SAN bandwidth and what the system is capable of decoding quickly,” details the interviewee. 


Rohde and Schwarz delves into 6G telecom viability

6G transmission will take place at frequencies below 1000 GHz, that is, below 1 THz. The sub-THz range typically encompasses frequencies from 100 GHz to 300 GHz, and these bands are attracting great interest worldwide.

Access to much higher bandwidths will enable very high-performance short-range communication combined with environmental sensing for object detection or next-generation gesture recognition down to millimeter resolution.
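Some back-of-the-envelope numbers, which are not taken from the Rohde and Schwarz study, help make the millimeter claim concrete: the free-space wavelength at these frequencies is itself in the millimeter range, and the standard radar range-resolution formula delta_R = c / (2 * B) shows that the multi-GHz bandwidths available above 100 GHz correspond to centimeter- or even millimeter-scale sensing resolution.

```python
# Illustrative arithmetic only (not figures from the R&S report): wavelength and
# range resolution in the sub-THz bands, using c / f and the standard radar
# range-resolution formula delta_R = c / (2 * B).

C = 3.0e8  # speed of light, m/s

def wavelength_mm(freq_hz: float) -> float:
    return C / freq_hz * 1e3

def range_resolution_mm(bandwidth_hz: float) -> float:
    return C / (2.0 * bandwidth_hz) * 1e3

for f in (100e9, 300e9):
    print(f"{f / 1e9:.0f} GHz carrier -> wavelength {wavelength_mm(f):.1f} mm")

for b in (2e9, 15e9, 150e9):
    print(f"{b / 1e9:.0f} GHz bandwidth -> range resolution {range_resolution_mm(b):.1f} mm")

# 100 GHz -> 3.0 mm and 300 GHz -> 1.0 mm wavelengths; 150 GHz of bandwidth
# corresponds to roughly 1 mm range resolution.
```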

For these reasons, Rohde and Schwarz is undertaking in-depth studies of the specific conditions in the electromagnetic bands at these frequencies. In addition, it has also devoted efforts to channel sounding measurements between 100 and 300 GHz.

The company’s findings have contributed to the ITU-R Working Party 5D (WP 5D) report, “Technical Feasibility of IMT in Bands above 100 GHz”, whose objective is to study and provide information on the technical feasibility of cellular mobile technologies in bands above 92 GHz.

IMT stands for International Mobile Telecommunications.

The report will be consulted at the International Telecommunication Union (ITU) World Radiocommunication Conference 2023 (WRC23), where additional frequency bands above 100 GHz are scheduled to be discussed and their allocation considered at the subsequent WRC27.

The current 3GPP channel model is only validated up to 100 GHz. A crucial first step for the 6G standardization process is to extend this channel model to higher frequencies. 


Netflix announces the creation of an engineering hub in Poland

Netflix recently announced the creation of an engineering center in Poland. The new division will be based in the Warsaw office.

The objective of the engineers will be to help build the products and tools that the creative partners use to deliver Netflix shows and films.

The relationship between Polish culture and Netflix goes back a long way. It’s been six years since the platform launched the Polish-language service and already more than 30 movies and TV shows have been created in Polish, including titles such as High Water and How I Fell in Love with the Gangster.

In the words of Netflix’s press release, the reasons for establishing the center in this Central European country are that there is a very good engineering background and a large development community. “We can’t wait to see the innovation and creativity that comes from our hub here. We are currently recruiting for business application software engineers.” 


Ateme and Enensys upgrade the infrastructure of Italy's Rai Way to broadcast in the DVB-T2 standard

Ateme and Enensys have recently combined their expertise in video distribution for Rai Way, the operator of the RAI television network, to initiate the project whose objective is the transition to the DVB-T2 standard.

DVB-T2 allows broadcasters to distribute HD content. The standard also provides more efficient and resilient signals, while freeing up the 700 MHz terrestrial band for 5G mobile broadband.

The combined solution, offered by a local partner, won the tender and the project started in 2021. Both companies proposed a competitive solution with a reduced Total Cost of Ownership.

The integration includes Ateme’s Titan software-based live video compression solution. Enensys provided the Seamless IP & ASI switches (ASIIPGuard) and DVB-T/T2 network gateways (SmartGate T/T2), enabling SFN (Single Frequency Network) transmission.

The transition was completed within the March 8, 2022 national conversion deadline.

“Viewers want higher-quality experiences and stronger emotions from their video entertainment,” commented Carlo Romagnoli, Sales Director Southern Europe at Ateme. “The move to DVB-T2 satisfies this need by facilitating the distribution of 4K content, as well as providing advanced features such as High Dynamic Range, High Frame Rate, Wide Color Gamut and Next Generation Audio to enhance the viewing experience. We are delighted to have been able to help Rai Way in this transition.”

“We are proud to have played a key role in this exciting project of national importance”, said Cyprien Galesne, Sales Manager Southern Europe at ENENSYS. “Our synergistic partnership with Ateme on digital media distribution projects continues from strength to strength, offering flexible solutions to complex challenges.” 


GB News relies on Quicklink Studio for remote management and broadcasting of contributors

GB News is a free-to-air television and radio service in the United Kingdom. The broadcaster has chosen Quicklink Studio (ST55) for introducing remote guests into its news productions for national broadcast.

Remote opinions are a large part of the channel’s programming. With this in mind, GB News required a solution for introducing remote guests in high quality, with real-time latency, into its schedule.

“Opinions and thoughts from remote experts and guests are a key part of the GB News structure. As a result, it was extremely important that we adopted a solution that would allow us to obtain these opinions in the highest quality possible. Quicklink Studio allowed us to achieve exactly this,” said Stephen Willmott, Head of Technology & Operations at GB News.

Using Quicklink Studio Servers (ST100, ST102, ST208 & ST200), the remote guests can be integrated into any workflow using HD-SDI, NDI and other professional broadcast outputs. The solution works within a simple web browser. Contributors can be invited via SMS, WhatsApp, email or by generating a URL link.

In addition, by utilising Quicklink Manager, GB News has operational control over virtual rooms, remote guests and more within one central portal accessible from any global location.

“The management of remote guests is an essential part of introducing remote guests. Our guests are not always the most proficient with technology; however, the remote control options within the Quicklink Manager allow us to overcome these challenges by being able to remotely control devices and other aspects of the remote guests. The Quicklink solutions are truly a fundamental part of GB News productions,” adds Stephen Willmott.


ZDF develops a solution to work the World Cup remotely with Apantac and Broadcast Solutions

ZDF is a German public service television broadcaster. The corporation produced and showed the World Cup live in Germany.

ZDF trusted Apantac KVM over IP solutions for its remote production. The games were handled remotely at the National Broadcasting Center (NBC), located in Mainz, rather than in Qatar.

For this project, MPE (a “mobile production unit” founded and driven by ARD and ZDF) purchased more than one hundred and fifty KVM over IP transmitters (Tx) and receivers (Rx) (models: KVM-IP-Tx-PL / KVM-IP-Rx-P) and software licenses from Apantac, via Broadcast Solutions Produkte und Service, Apantac’s representative in Germany.

These solutions were chosen because the technical equipment can extend and switch video signals, keyboard and mouse functions, as well as embedded and analog audio signals over IP. With this solution, operators can access all their computers and servers on the network from any other point.

In addition, the firmware has been specially designed for this project. It adds an extra 3840×1080 EDID to the Tx module, which addresses the teams’ need to display two computers (top and bottom) on a single 16:9 UHD monitor. 


KAN entrusted Clear-Com to develop its communications system for the Qatar World Cup

KAN Channel 11 is an Israel-based public television station that has offered the emotion of the World Cup to its audience. The station partnered with systems integrator AVCom to outfit an OB van for remote production coverage. The objective was to connect the communication system with the production crew in the OB van and with the rest of the team in Israel, the camera team at the venues, and the presenters located in the KAN studio in Qatar. They chose Clear-Com solutions.

“The production value is always enhanced when the crew can communicate clearly and quickly. Collaborating with Clear-Com on this championship has simply reinforced that their extensive experience in these types of large-scale and often high-pressure events ensures we have a flawless result for our communications,” explains Tsachi Korner, Senior Presales Engineer at AVCom.

AVCom deployed an Eclipse® HX Digital Matrix Intercom System at the core of the company’s setup. This solution interfaces with endpoints including the V-Series Iris IP User keypanels and FreeSpeak beltpacks. They also added Dante interfaces and MADI cards to KAN’s communication system to provide interoperability with a Dante-based audio-over-IP infrastructure.

Both AVCom and KAN explained their reasons when asked why Clear-Com. “By using Clear-Com equipment, we’re able to make our production process much smoother,” said Tsachi. “We’re always confident when we propose the Clear-Com solutions to our clients, as they are so reliable and scalable, and offer such powerful capabilities. This enables our clients to produce a superior event.” 


ViewLift partners with Bitmovin to tackle one of the main reasons for OTT platform unsubscriptions

ViewLift has announced a multi-year partnership with Bitmovin. ViewLift specializes in streaming and OTT solutions, while Bitmovin focuses on delivering cloud-based video streaming infrastructure solutions.

With this collaboration, both companies are aiming to elevate the standard of viewing experiences on OTT platforms worldwide by integrating their capabilities and expertise.

According to Bitmovin’s research reported in the press release, a large portion of viewers who unsubscribe from certain platforms do so because of problems when viewing content, such as buffering delays. Both companies have identified that quality of experience has become one of the main metrics for viewer retention.

Through this agreement, ViewLift customers can now access Bitmovin’s capabilities, including its Next-Generation VOD Encoder, which features multi-codec streaming, 8K and multi-HDR support, and its Live Event Encoder. Bitmovin Analytics, Bitmovin’s Stream Lab (an automated device testing solution) and Bitmovin Player will also be integrated with ViewLift’s OTT platform.

Together, they enable ViewLift customers to leverage the power of cloud encoding to scale video channels and optimize adaptive bitrate (ABR) streaming to reduce content delivery network (CDN) costs.

Stefan Lederer, CEO and co-founder of Bitmovin, said, “In today’s fiercely competitive video streaming market, quality of experience is vital for viewer retention. Teaming up with ViewLift means we will power the best viewing experiences and accelerate how quickly customers get to market and monetize their offerings so they can remain one step ahead of competitors.” 


Enco acquires Rushworks to integrate its software solutions into its products

Enco, a company specialized in the development of software solutions for broadcast environments, has announced that it has acquired Rushworks. With this move, the company will integrate its automated captioning and translation systems into Rushworks’ products. Although the terms of the agreement are not yet public, Enco will keep the Rushworks professionals in their positions, and the founder will move into the role of sales and marketing manager for the company’s product line. Engineers from both companies will join forces to develop new solutions, they said in their communication.

“Enco is the global leader in real-time captioning with its patented enCaption technology,” said Rush Beesley, Rushworks’ Founder. “The integration of its powerful, highly accurate captioning engine with our broadcast automation, integrated PTZ production, and courtroom production and streaming systems will ensure our mutual customers can comply with government regulations and provide critical captioning, transcription and translation services to audiences worldwide.”

“The acquisition of Rushworks adds proven technology and talent that opens the door for us to innovate and develop cohesive, integrated broadcast and AV solutions for years to come,” said Ken Frommert, president of ENCO. “They also bring strong expertise in video applications, which diversifies our software portfolio to serve a much broader array of business verticals and applications.” 

Kenya Broadcasting Corporation

We wanted to go to East Africa to find out, first-hand, the state of broadcast technology in this part of the world. Through an exclusive interview with the oldest broadcaster in the region, with almost a century of history, we have been able to identify the same trends: transition to IP, adaptation to digital ecosystems, and constant investment in and modification of infrastructure to broadcast in HD.

In this detailed interview you can learn about the current technological status of the Kenya Broadcasting Corporation, as well as its plans for the future and the content it offers for the whole country, as it is the main multimedia public service broadcasting corporation in the country.


What is Kenya Broadcasting Corporation?

The Kenya Broadcasting Corporation, established in 1928, is Kenya’s state media organization. It was established by an Act of Parliament —CAP 221 of the Laws of Kenya— to provide independent and impartial broadcasting services in the areas of information, education and entertainment. It broadcasts in English and Swahili, as well as most local Kenyan languages. It is headquartered at Broadcasting House (BH) and is located at Harry Thuku Road, Nairobi.

The Corporation has three main production centers: the BH headquarters, which we have already mentioned; Sauti House in Mombasa County, which serves the coastal region; and Kisumu, which serves the western region.

We offer several products:

In the area of television broadcasting, we have three channels: KBC Channel One, the main flagship channel; Y254, which seeks to attract young audiences; and Heritage TV, whose content focuses on spreading our cultural and historical heritage.

On the other hand, as far as radio is concerned, we have several stations: Radio Taifa and KBC English Service, which are the most important channels of the corporation and share content differentiated by language. In addition to these two outstanding stations, we also offer radio content through fifteen services: Coro FM, Pwani FM, Mwatu FM, Mwago FM, Nosim FM, Kiembu FM, Kitwek, Minto, Mayienga, Western Services, Eastern Services, Iftin and Ingo FM.

Job Karimi, KBC CTO

Television and radio services distributed through the DTT (Digital Terrestrial Television) infrastructure are also important enough to mention. KBC is the only authorized public digital signal distribution operator, under its flagship Signet brand, broadcasting from 42 sites throughout the country and covering 97% of the terrestrial area. It currently hosts 125 channels and all are free-to-air (FTA).

I would also like to mention another of our most important services. In our country there is a great need to give opportunities to people who have talent, but do not have the financial capacity that would allow them to access professionalized contexts. To offer these opportunities, we created Studio Mashinani —which means rural areas— and through this infrastructure we help musicians and short-film artists. As we have already mentioned, we offer them a professional environment at zero cost. So far we have created 8 audiovisual studios.


To conclude the section on the content we offer, we also broadcast the signals of the Parliament and the Senate on two independent channels: Bunge (Parliament) TV and Senate TV. We do so by virtue of a contractual agreement.

How does Kenya’s public television work? What is its organization chart and how is its structure organized?

KBC is led by a Managing Director and, under him, there are 16 departments headed by a Manager. The Board, appointed by the Ministry of ICT and Digital Operations, oversees the management.

What content does KBC broadcast, how much is produced in-house and how much is produced externally? How much is broadcast live?

KBC broadcast operations are regulated by the Kenya Communications Authority.

The authority dictates that 70% of the broadcast content must be local. Therefore, 70% of our content is local and the rest is foreign. Of the 70% local, 80% is done in-house and 20% is bought locally.

As a broadcaster we do not have a threshold of live content that we must cover, but our mandate is public interest and, as such, we ensure coverage of all major public events.

What is the network infrastructure between production centers like?

We have deployed fiber between the different production centers. To ensure continuity of service, we have also invested in developing redundancy for this network.

How does Kenyan television handle sending signals from the field?

KBC has two OB vans connected to the Intersat satellite (Ku band) and 25 mobile backpacks from TVU Networks. This infrastructure is intended for remote coverage of the various events taking place throughout the country. To guarantee the link with our 42 transmission centers, we count on a C-band uplink to Amos Sat.

Which manufacturers make up your infrastructure?

We have recently completed a process of improvement and transition in our newsroom. It is now semi-automated with solutions from the manufacturer Octopus.

Part of this process has also focused on our file management system, our MAM. We now operate a system from SNS (Studio Network Solutions), a playout solution from Axel Technology and graphics from Vizrt.

Our next goal in the newsroom is to fully automate it, which we are already working on.

Editing is done in Final Cut Pro, routing, switching and internal distribution solutions are from Blackmagic Design, lights are from Canara and cameras are from Sony.


For signal processing and uploading to the network we have solutions from Thomson and Harmonic. The transmission for TV and radio is currently carried out using technology from the company TEM. 70% of our infrastructure is based on this brand. Now we are looking to renew it in two directions: counting on the company DB for radio transmitters and on the company Egatel for the transmission of TV signals.

What have been the latest renovations that Kenyan television has developed?

Due to a shortage of funds, we have not been able to carry out a complete upgrade of the production infrastructure. To realize our transitions, we have developed a mix-and-match strategy.

The goal was to improve signal quality and workflow efficiency. And we have certainly succeeded.

What challenges have you encountered in this process and how have you overcome them?

The mix-and-match approach has not achieved optimal results, but it has worked for us.

However, we are committed to funding a complete overhaul to achieve what we want: a seamless IP-based, high-definition workflow. Right now our signal flow is on BNC cable and we are only using IP in the radio workflows. That radio infrastructure works synergistically.

Regarding HD transmission, all our own content, the content we produce in-house, is HD. However, it is transmitted in SD due to technical restrictions. Nevertheless, as I was saying, we want to overcome these restrictions by upgrading the system to also transmit in HD.

How is KBC adapting to digitalization?

Initial adoption was slow, but a digital department was later created and it is catching up quickly. We understand that the future is digital and we are doing everything in our power to make sure we are up to speed.

What are KBC’s own challenges?

KBC is the mother of broadcasting in this African region. As we mentioned at the beginning of the interview, the fact that it was founded in 1928 has given it a history of almost a century. As you can understand, all that time has led us to build up a huge archive of historical material in audiovisual formats. This archive is composed of content stored in many different formats: two-inch tapes, one-inch tapes, U-matic, Betacam and DV. Part of our digitization goal is to transfer, catalog and unify the archive into single, organized formats. However, we do not yet have the capacity to do this because of technological constraints. KBC obtains its funding through the exchequer and, due to the difficult economic climate, we have faced funding problems that have affected our productivity. That is why we have not yet moved forward on these terms.

What are the next steps in the technological evolution of the Kenya Broadcasting Corporation?

We are pursuing funding to renovate our television and radio production center, as well as the remote transmission center. We intend to install a high-definition IP system for television, as mentioned above, and also to upgrade the radio to Dante.

On the other hand, our remote broadcast centers are now staffed. However, we are building the remote monitoring capability with adequate redundancy to ensure uninterrupted service availability. In this way, we will optimize our human resources.

In addition, we are also planning to acquire two more modernized OB vans: one for television and one for radio.

Virtual Production

Green Screen or LED Volume?

Virtual production is destined to dominate the multimedia solutions sector. These techniques, which are expanding rapidly in our industry, require fewer physical resources than more conventional production methods. This provides three key advantages for the future of the sector: flexibility, sustainability and great creative capabilities.

Gaining the ability to tackle ambitious and quality projects in record time is one of the most widespread goals in the multimedia content creation world. Virtual production techniques provide a series of workflows that optimize times by providing photorealistic 3D and 2D environments and characters. Previously, to recreate the same assets using the old techniques, we would have needed large investments in time and budget, as it would have been necessary to produce mock-ups, scenery, characters, etc.

By dedicating less effort to physically recreating all those objects and characters, we are also becoming more sustainable. We mobilize fewer personnel and use fewer resources to achieve a result that, in many cases, is even better and more spectacular than what we would achieve using traditional workflows.

Finally, as we have already mentioned, the possibilities of obtaining original and creative content are multiplying exponentially. Technology, always at the service of stories, is now capable of offering us worlds that we could only imagine before. Suddenly, projects that were too complex, too expensive and too inaccessible due to their narrative conditions become truly viable. If you don’t believe it, ask the producers and content creators who have already worked with these techniques. We wanted to go further. Virtual production technology is evolving in two very different ways today. The technical evolution trends are clearly differentiated. Green screen or LED Volume? That is the key.

In this report, we wanted to explore these two possibilities. With the direct testimonies of the heads of the virtual production department of dock10 and Bild Studios - Mars Volume, we offer you the reasons for this differentiation and the capabilities of each of these possibilities.

dock10: A journey into the possibilities of expanding reality

How did dock10 get started in virtual production?

Andy Waters: We began our journey into virtual studios in 2017. We started because we perceived a real need in our customers. At that moment, we considered the way forward with our client BBC Sports.

To understand the context, dock10 has ten television studios where they produce sports content and shows such as “The Voice”, “Let it Shine” or “Bake”. The input that made us decide to create a virtual studio came from the sports productions, which always have a very innovative mentality.

After development, we wanted to share what we had learned with the rest of our customers. The planned date was March 2020. We were going to invite lots of entertainment clients from all over the UK to come and show off what we could do with the virtual studio, but unfortunately something else happened in March 2020.

However, thanks to the development we had done, the technology was ready and virtual studios came to our rescue. Our client, the BBC, through orders given by the British government, also contributed to the growth of the technology.

The government said to the public broadcaster: “We need to educate young people at home urgently. Can you get a program working?”

Richard and his team stood up one of our smallest studios to create a children’s education show. The workflow deployed consisted of our team building the sets from home, emailing them to the company and, finally, creating the program. All of this was done with a minimal crew. We had to run and scream, because we were on the air in two weeks.

Richard Wormwell:

The government wanted to continue with the curriculum learning, so the show was, literally, day by day following what would have been taught in school.

“BBC Bitesize”, that is its name, became the most watched BBC educational program we have ever produced. We thought we were going to make a program for a couple of months, but we went on to produce 370 episodes.

In fact, 3 million children watched the program on the first day of broadcast.

I believe that by taking advantage of the techniques offered by virtual production, in conjunction with 3D objects and augmented reality, we were able to help explain fundamental educational issues.

Since then we have done factual entertainment programs, eSports tournaments, commercial programs, etc. Our goal is to push this technology into the entertainment space.

Andy: Going back to our plans, it wasn’t until 2022 that we put our original strategy into action: to show the whole of the UK what we could do. The national entertainment community found it fascinating.

Was the company prepared to exploit virtual production technology?

Andy: When we adopted this technology we could not afford, as a commercial company, to have a dedicated studio that was only used two days a week for sports through virtual production. We have to try to maximize our resources and we can’t have a green screen studio unoccupied the rest of the days of the week.

What we planned to do was to take the entire infrastructure out of the virtual studio every week, turn it into a regular studio, do a traditional live TV show, and then at night turn the space back into a virtual production space. A lot of people said, “Oh, that’s crazy. We shouldn’t do that.” But it was the only way.

Richard: Literally every week we adapted the stage six times.

Andy: However, we gained a great understanding of virtual reality technology and all the associated workflows by repeating this task so many times.

In this respect, how have you evolved?

Richard: We used Zero Density as a graphics solution on top of Unreal Engine visual creation technology and Mo-Sys tracking solutions. We started with five camera channels for the BBC. Now we are working with fifteen. We want to continue to grow our capabilities because the trend of creating these products is growing all the time.

We started working in that small studio, where we modified the infrastructure as needed, and now we have just finished a pilot for Saturday prime time that we produced on our 7,500-square-foot stage, with a 270-degree view and eight cameras.

Andy: We now offer the solution in every one of our studios.

Richard: We have achieved this through the expertise we have gained and also through the decision we made at the beginning to centralize our services in order to gain scalability. We standardized our inside-out tracking solution centrally with Mo-Sys StarTracker. We can deploy that fairly rapidly across any of our spaces.

Was everything else also planned from the beginning or were some decisions made on the spur of the moment?

Richard: Some parts were improvised. For example, we added the ability to capture motion. We used a system similar to StarTracker’s, thus based on inertia, from a company called Xsens. We don’t need external tracking cameras pointing at an object with reflective stickers; it’s all based on body movement. We don’t even need to be in a green screen situation to deploy motion capture on a physical set. From motion capture, we have started to offer facial recognition capture.

Have you expanded your team of professionals to offer these new skills?

Richard: It was very important for us to have an in-house team. We needed people in the building who understood not only the equipment, but also the physical dimensions and operation of the spaces.

We’ve hired five people to work with us full time. Andy Elliott, our lead developer, who I’m sharing this interview with, has been with us for five years and comes from a video game background. Now, Andy has a couple of juniors who used to work for him delivering content. We’ve also created a new role in the industry: Virtual Studio Operator. This is someone who works in the gallery, in an operational role that ensures that the virtual elements work optimally. We also have a development producer who works for us. He works with the client in pre-production, making sure that everything they need from their scenery is understood, provided and delivered on time.

On the other hand, we have also encouraged extensive education and exposure among the teams so that they are familiar with virtual production techniques.

Andy: On the other hand, we have also spent a lot of time educating our customers. The interesting thing is that they are very different. Sports has been one of those industries that has always embraced technology early on, however, customers coming from the entertainment sector tend to be more traditional when it comes to arranging the way they work. A lot of this technology is new to them.

Richard: As soon as we show them the capabilities of this technology, producers, executives and curators start thinking creatively. They realize that certain things they had to discard in the past because the technology wasn’t ready can now be done thanks to virtual production techniques.

How have you managed to solve the challenges related to tracking?

Richard: There is a Mo-Sys unit in the emission chamber itself or on the physical floor of the camera studio. This is an ultraviolet camera sensor that sends a signal to the studio grid. On the grid, there are a series of reflective stickers placed in random positions and at different heights. The signal is sent and arrives back when it bounces off the sticker. The system measures the time it takes for the light to travel. With this method, the solution can triangulate the camera in the physical world. It is a fairly flexible and simple solution.

Do you have the ability to produce virtual content?

Richard: Yes. We work in multiple ways; take “BBC Bitesize Daily” as an illustration.

In that case, we design it, pitch it, produce it and do the whole process, from initial concept to final delivery. This is actually the most comfortable way of doing things for us.

However, as we have expanded the services we offer, we have delegated tasks such as set design. Also, in those larger productions, we tend to work on hybrid models that combine part real physical sets and part virtual sets.

We are also starting to see professionals coming to us with sets and assets already designed. We tend to introduce them into our systems if we think they are worth having.

You work in near real time; what technological capacity do you need to work in this way?

Richard: Our production channel is almost live, only 6 or 12 frames delayed. To achieve this, we have taken two things into account. The first is that each broadcast camera has its own independent pipeline. If one were to fail, it would only affect that part of the system. We rely on that level of redundancy to ensure this capability.

The second is that we have the most advanced graphics systems in each of the camera channels. The graphics GPUs are some of the best in the industry. They are NVIDIA graphics cards with ray-tracing capabilities using artificial intelligence solutions.

However, these capabilities we are talking about would be nothing without Unreal Engine technology. This, combined with advances in GPU rendering performance and possibly the tracking accuracy that can be obtained from Mo-Sys, has made it possible to produce photorealistic 3D images.

In any case, we are always playing at the limit. We must try to squeeze the most out of our systems without pushing them too hard, because if we do we will find image problems. We are constantly on the edge of that precipice.
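To put the "6 or 12 frames" figure in context, a short calculation converts that pipeline delay into milliseconds; the frame rates used below are assumptions for illustration, only the frame counts come from the interview.

```python
# Convert the quoted 6- or 12-frame pipeline delay into milliseconds at common
# broadcast frame rates. The frame rates are assumptions for illustration.

def delay_ms(frames: int, fps: float) -> float:
    return frames / fps * 1000.0

for fps in (25.0, 50.0):
    for frames in (6, 12):
        print(f"{frames} frames at {fps:g} fps = {delay_ms(frames, fps):.0f} ms")

# e.g. 6 frames at 50 fps is 120 ms, and 12 frames at 25 fps is 480 ms.
```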

How do you see augmented graphics and what capacities do you have to deliver them?

Richard: We base our ability to develop augmented reality graphics on the very infrastructure we are talking about. When we design the content, we decide whether certain elements will be part of the background or whether they will make up other graphical elements such as augmented reality.

From our technology point of view, we have always tried to be as agnostic as we can be. What we try to do is adapt to the number of options that exist in the industry, and we do that by ensuring that our graphics output is adapted to any of the graphics options on the market.

Then there is the case of creative options. Augmented reality can add a lot to a piece of content if used in an appropriate and original way.

Andy: I would like to share some examples. What we did with Clogs, one of the characters in the “BBC Bitesize” format, was to create the character using motion capture techniques. In fact, the combination of virtual production and motion capture techniques that we developed in that program won an Innovation Award from “Broadcast Tech”.

Another example was the show we did for Polyphony on “Gran Turismo”. In that case, what we developed was a graphic solution that indicated the possible performances that the competitors would have on different tracks.

What I mean by this is that there are many ways to get that data out to the public, but an important part is to make sure that we can collaborate and interact with our preferred graphic customers and suppliers. That’s why we pursue agnosticism.

Your bet is on green screen technology instead of LED Volume. What are the advantages and disadvantages of this approach? Do you have the capacity to implement LED Volume in your studios?

Andy: LED Volume is a brilliant tool which will transform our industry, but it won’t transform the industry we work in. What we do has a lot to do with events or live and recorded production. This implies that productions are done on multi-camera systems and this option is not the most appropriate for LED Volume. These techniques are appropriate for drama, for productions that might be made by Netflix or Amazon, but definitely not for us.

Richard: It is based on the same techniques. It uses Unreal in the back end, it uses camera tracking technology and many other similar developments, but in an environment as limited as an LED Volume you can’t get wide shots or freedom with the camera. That’s why it’s not a technology for us.

What projects are you involved in?

Richard: We have produced a talk show with Stephen Fry set in the world of dinosaurs. Using virtual production and augmented reality techniques, we have managed to place him in the Jurassic surrounded by prehistoric animals and interviewing different specialists about this period.

It was a really important project for us where the biggest challenge was to create all these different dinosaurs, their animations, and to do it in real time in the studio. We had over 300 different animations, more than seven different dinosaurs, in five locations and everything shot in 360°. All produced in three months. We had 32 animators in 19 countries working around the clock to meet the deadlines.

We pushed the boundaries of technology. Mixing, in real time, real characters with fully automated, multicamera, pre-animated characters was another big step forward in our technical roadmap.

Andy: Currently, and with future perspectives, we are working with MMU, Manchester Metropolitan University, to adapt this technology to the 3D world, to the metaverse or to different platforms. We are in the process of applying for a government grant to further this research.

Richard: We want to create a distribution platform for 3D assets. We are producing entertainment content in three dimensions, so now it’s about how we can take what we currently build in 3D, but adapt it by delivering it on a 2D platform or take it from 2D and deliver it on a 3D platform. 

Virtual production at MARS Volume, part of Bild Studios: An assured solution in the filmmaker's toolbox

How was MARS Volume born?

We began plans for MARS as the covid pandemic was hitting its global stride. But even earlier than that, we had been conducting our own research and development into the foundational building blocks of Virtual Production - camera tracking technologies, LED Systems and real-time game engines.

Bild Studios, the parent company of MARS Volume, was founded by visual engineering pioneers David Bajt & Rowan Pitts. Bild Studios has been responsible for the deployment of some of the largest shows in the world such as Eurovision, Dubai Expo, Birmingham Commonwealth Games and more.

In mid-2020, we ran a proof of concept called MARS 1.0 in Central London. This helped us test the workflows and customer experience, while also testing the market to gauge the level of commercial interest. The pilot was a success on all counts, which spurred us on to make plans to establish the UK’s most accessible Virtual Production facility, to cater for the widest possible audience. In August 2021, after months of R&D and hard work, we opened our doors and had bookings from day one.

What services do you specialize in?

Our facilities and human teams at MARS Volume are specialized in offering services for multimedia content creation such as:

Plate playback on the LED stage (2D playback), which is perfect for driving scenes; real-time scene playback with 3D technology and tracked cameras; LED stage for professionals; and stage integration. Our human resources, in addition to providing assistance on the virtual production set, also offer instruction through our MARS Academy educational offering, and an R&D space for industry professionals.

How much space have you set up for virtual production?

Our stage covers a 15m x 9m space within an 8,000 sqft studio. We additionally have a 4,000 sqft space for production services.

What specific projects could you highlight and what challenges have you encountered in them? How have you solved them?

In collaboration with BBC Studios and Epic Games, MARS Volume transported Sir David Attenborough 66 million years back in time, to the day an asteroid devastated our planet, for “Dinosaurs: The Final Day with David Attenborough”.

Through a blend of Epic Games’ Unreal Engine and physical prop and set design, Tanis’ prehistoric graveyard was recreated, allowing Attenborough to interact in real time with the virtual world before and after the final day of the dinosaurs.

In recent statements, Sir David Attenborough said: “We’ve gone far beyond the old days of the cinema, when you had greenscreens and after filming you were able to replace the screen electronically to make you appear in any landscape you wished. When I walked into that studio the images were all already there, the back end of the studio was a forest on fire, and it was very, very impressive too.”

David Attenborough at MARS Volume.

In 2021, Pulse Films approached MARS Volume for a Virtual Production solution for the driving scenes of a Sky series. With an ambition to shoot some of the most creative and compelling driving scenes, Pulse Films agreed virtual production offered maximum flexibility and the most efficiency for creative output compared to other production techniques.

The driving scenes were shot within a sequence of busy city roads and luscious countryside driving plates, with the vehicles lit with LED wild walls and exterior lighting. This enabled scenes with seamless and naturalistic reflections over the vehicles’ reflective surfaces and the faces of the actors.

Using the vehicles mounted on a turntable that was paired with the content was a good example of MARS Volume’s technical expertise powering the simulated driving scenes. With this innovation, the crew were able to not only create hyper-realistic scenes but also shoot angles that would otherwise be impossible when compared to traditional techniques such as green screen or location shooting.

As the production also involved a variety of long driving scenes, the ability to avoid any technical, environmental and unforeseen weather-related issues by shooting in virtual production was an additional benefit for the production team. Avoiding the logistics of closing roads, gaining permissions and managing shooting in public spaces was another huge advantage for the team.

What technology makes up your LED Volume?

The LED panels are from the manufacturer AOTO, and our content and LED volume processing comes from Brompton Technology. The camera tracking technology is from Stype. The playback is from Disguise and the content is driven by Unreal Engine.

The camera tracking system is a Stype Redspy, as I was saying. This is an “inside out” system which relies on an array of retroreflective stickers to triangulate the camera’s position to high degrees of accuracy. We can also extract FIZ data from the lens.

What are your virtual production management system providers?

We build our own bespoke render nodes for the rendering of real-time environments on the volume, for which we have developed an extensive range of proprietary tools to enable the smooth operation of our virtual production shoots at MARS.

Do you have the infrastructure to design the backgrounds? What computing power do you have to generate them, what are its characteristics and how does your computer network work?

We have our own in-house Unreal Engine artists and operators. While the usual workflow is to work with our clients’ supplied content, and their preferred VFX house of choice, we do have expert resources internally to support and provide essential translation services to ensure that content runs successfully on the screen at all times.

MARS Volume has been equipped with the highest available computing power, and live events-grade systems redundancy to ensure that the show will have the up-time required. Our approach is informed by our teams’ experience on the world’s largest live events, where there is no second take - you only get one shot.

How are your virtual production teams composed? Have you incorporated professionals specialized in virtual production workflows?

Productions shooting at MARS are staffed and supported by our in-house VP team. This team covers a diverse and broad range of skillsets, specializing in everything from real-time systems, media server platforms and camera hardware to video / IT infrastructure. We often supplement this team with external resources drawn from our freelancer network.

Your bet is on the LED volume, what pros and cons have you found compared to the green screen?

First things first, they are different tools for different jobs.

There are, however, a number of common situations where LED provides some really compelling advantages over greenscreen. The most significant of these is probably reflections and naturalistic ambient lighting. The ability to capture these in camera can often be of huge benefit to DOPs and directors, and can save on expensive and time-consuming post-production work. We have seen a number of shoots leverage this to huge success, including “Gangs of London” and a number of car commercials.

The other significant benefit, which is often overlooked, particularly for car plate work, is having a clear reference point for actors: being able to see where the car is travelling, what direction it is turning and so on leads to a much more naturalistic performance.

What is your perspective on LED Volume technology? Do you think it could become the next big revolution in the industry?

We have no doubt that the LED Volume, as a tool in the filmmaker's toolkit, is here to stay. As happens when new technologies are introduced to markets, there is often a moment of over-excitement before the reality of the applications of this tool becomes abundantly clear.

Once teams are well versed in the LED Volume and the re-ordering of the production workflow (a lot of post-production decisions have to be made earlier in the process, in pre-production), then time and budget efficiencies can be made - and we have seen teams come back a second or third time and make these savings.

We are all painfully aware that the film and television industry must look for innovation and step-change to achieve the sustainability targets that we have to attain for our industry to be responsible.

Virtual production and LED Volume productions offer that opportunity. We are actively working on tools and efficient workflows to support this for the industry.

What is the future of this technology? What does the future hold for MARS Volume?

We are excited to see what 2023 will bring, but already we can see teams returning and recommending us as a good, supportive place to host their virtual production shoots. We invested heavily in education in 2022, launching MARS Academy - and we have big plans to take the Academy global in 2023.

The future lies with the brave and innovative teams and industry leaders who are working hard to champion virtual production as a positive future filmmaking tool. We are determined to collaborate, educate and support these leaders as best we can in 2023. 

BIG DATA & AI

Data has become the gold of our industry. This precious material is already delivering enormous benefits. What distributors and content creators can do with it is, quite simply, the panacea of interaction with their customers. Knowing who your consumers are, what they like, when they like to watch it and how they prefer to enjoy their favourite content are differential factors when creating content, gauging the interest a certain project could awaken and even foreseeing the economic returns that could be obtained. All this can be achieved thanks to the fusion of Big Data and AI.

The options for exploiting this information are virtually limitless. Content providers can deploy content only to the specific demographics that are likely to consume it. In fact, thanks to the capabilities provided by digitization, online thematic channels can be created and access to them can be promoted to the specific users who are likely to pay a subscription to consume them.

Going further in this concept of directing the communication and focusing the shot on the target, the commercial possibilities skyrocket. Imagine being able to serve the specific solution to your hypothetical customer’s particular needs by targeting advertising that responds to his or her greatest interests. Put another way: Who would be more likely to buy a particular car than someone who has spent months thinking about it and informing themselves?

Targeted and personalized advertising is precisely that last push they will need to buy.

All of this - targeted and personalized advertising and the right content at the right time - is achieved through data analytics. We know, there is a lot of data to analyze. The task would take too much time and too many professionals. And that's where artificial intelligence (AI) comes in. Machines can save us tremendous amounts of both time and effort. In fact, with good training (machine learning), they can finish the task in seconds. The capabilities, therefore, are endless.

TM Broadcast International presents an informative special on the real capabilities of these technologies analysed in detail from three points of view: a report by Yeray Alfageme, Business Development Manager of Optiva Media, with an agnostic and personal vision on how and why to use this technology in FAST channels; and two specialized articles by two major industry representatives focused on these solutions: Signiant and Digital Nirvana.


As we already tried to explain in a previous article in TM Broadcast, FAST (Free Ad-supported Streaming Television) channels are a very good option to exploit existing VOD content, already produced, thus creating linear channels where this content can be offered in a thematic and novel way. However, where FAST channels really make sense is in customization.


Two-way communication, the truly distinct feature of streaming

We naturally identify streaming as yet another way of distributing content to viewers, just as satellite, DTT or cable television are. And that is true, there is no doubt. We could set out to establish the differences between IPTV and OTT, but it is pointless in this context. However, streaming is more than just taking our linear video signal and, instead of uploading it to a satellite or inserting it into a distribution network such as DTT or cable, encoding it into data packets and distributing it over a structured data network.

Actually, it’s a lot more.

The thing is that, if we just did that, we would be leaving out one of the most useful tools that this technology offers us: two-way communication. Until now, content would travel from point A (the production center, broadcast or distribution point) to point B (our TV in the living room). To establish what was being watched, who was watching it and when, we would have to resort to firms like Kantar Media, who would carry out surveys, prepare statistics and extrapolate those results to 100% of the viewers. However, with streaming it is not like that.

Now we are able to find out who is viewing our content, what they are viewing, and when and how, with 100% accuracy. No statistics, sampling or approximations. All devices report to the CDN (Content Distribution Network) what they are displaying, when, and from where, because the CDN needs that information in order to provide them with the content in the quality, format and specifications required. Let's just say it's now a requirement for everything to work, rather than something ancillary. This is a game changer in several ways. The first of these is capturing and understanding all this new information that is being generated.
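To make this tangible, here is a minimal sketch of the kind of playback "heartbeat" a device might report back while streaming. The field names are invented for illustration; they are not the schema of any particular CDN or analytics vendor.

# Illustrative sketch of one viewing-metrics event as a player might report it.
# All field names are assumptions for the example, not a real CDN schema.
import json
import time
import uuid

def build_heartbeat(user_id: str, content_id: str, bitrate_kbps: int,
                    device: str, city: str, position_s: float) -> str:
    """Serialize a single playback heartbeat as JSON."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": int(time.time()),   # when the sample was taken
        "user_id": user_id,              # who is watching (registered viewer)
        "content_id": content_id,        # what is being watched
        "position_s": position_s,        # how far into the asset they are
        "bitrate_kbps": bitrate_kbps,    # quality actually being delivered
        "device": device,                # how it is being watched
        "city": city,                    # where it is being watched from
    }
    return json.dumps(event)

print(build_heartbeat("u-1042", "series-twins-s01e03", 4500,
                      "smart-tv", "Valencia", 1312.5))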

Metrics, the Big Data of streaming

All these viewing metrics, generated in real time and needed by the CDN to distribute the right content to the right devices, are at risk of dying there, on the CDN, because technically they are not necessary beyond that point. However, it would be a serious mistake to let them go to waste. In fact, that is what happened for a while when streaming first appeared. Luckily, we soon realized what we were wasting.

Using viewing metrics only as a technical component of the content delivery network is like throwing away the great advantage that streaming offers us beyond being able to reach more devices with more content. But you have to know how to capture all this information and also know what to do with it.

The first thing to keep in mind is where all this information is generated. That place is the CDN, the last layer in the entire distribution chain. Summing up, the CDN - which we have also covered extensively in another TM Broadcast article [#157 TM Broadcast, October 2021] - is a worldwide network of servers that replicates all the content that you may want to distribute and makes it available to all devices. With this understood, it is clear that the CDN needs to know to whom, when and how to distribute this content in order to determine the right size for these servers and their underlying network.

Once these metrics are generated, we have to store them, analyze them and understand them. For those who are more familiar with Big Data: standardization of this information is not in place, but such standardization is not necessary, since the source of information is unique and therefore the data will always arrive in the same way.

And now, what do we do with the metrics?

And so, the first thing to do is store them. But we can't. Well, technically we can, but it doesn't make much sense to keep them all. It's just too much information and, in part, only usable by the CDN for its own operation. That's why you have to choose what to store and where. That's where platforms like NPAW's Youbora play their part (I'm sorry to use a commercial example, but I think it's pretty enlightening for everyone), as they link up to the CDN, whatever it is, and store these metrics. They even go a step further, because they analyze these metrics and present them in a usable way; but let's go step by step.

Well, we have already generated this mass of data and extracted and saved it for further use in order to make the content distribution network work; the metrics platform has also analyzed the data, but we have not done anything with it yet. This platform is going to give us a lot of information about content consumption and, beyond the technical use of correct network operation, there are many more things we can do.

The first thing is to group the data in a useful way, that is, to carry out a segmentation of our audience. The first thing that comes to mind are demographic parameters such as age, sex, location and so on, but that would be the "easy" approach. The correct segmentation would be by likes, similarities, viewing time and other data that are actually associated with audience behavior regarding our content. Now we're getting to the heart of it.
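As an illustration of what behavioural segmentation can look like in practice - the features, the invented viewer rows and the choice of k-means below are assumptions made purely for the example - a few lines of Python are enough to sketch the idea:

# Minimal behavioural-segmentation sketch using k-means.
# Features, values and cluster count are illustrative assumptions; a real
# pipeline would use many more viewers, richer features and proper validation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per viewer: [avg minutes watched per day, typical start hour,
#                      share of drama watched, share of reality watched]
viewers = np.array([
    [95, 23, 0.1, 0.8],   # late-night reality watcher
    [88, 22, 0.2, 0.7],
    [30, 19, 0.9, 0.0],   # early-evening drama watcher
    [35, 20, 0.8, 0.1],
    [140, 21, 0.4, 0.4],  # heavy mixed viewer
    [150, 22, 0.5, 0.4],
])

X = StandardScaler().fit_transform(viewers)   # put every feature on one scale
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(segments)   # e.g. [0 0 1 1 2 2] -> three behavioural segments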

Another important point is to know whether the data we have are good or not - the much-vaunted data quality. Having a lot of data does not mean we have a lot of information. We must have quality data and, of course, a quantity of them that allows us to extract the appropriate analyses. That is why making sure that the data we have are of the right quality is vital before even segmenting and analyzing them.
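A sketch of the kind of basic sanity checks meant here - the required fields and thresholds are invented for the example - would simply drop events that cannot be trusted before they ever reach segmentation:

# Illustrative data-quality filter: keep only events that pass basic checks.
# Required fields and thresholds are assumptions chosen for the example.
REQUIRED = {"user_id", "content_id", "timestamp", "position_s"}

def is_valid(event: dict) -> bool:
    if not REQUIRED.issubset(event):                 # no missing fields
        return False
    if not 0 <= event["position_s"] <= 6 * 3600:     # plausible playback position
        return False
    return event["timestamp"] > 0                    # plausible timestamp

events = [
    {"user_id": "u-1", "content_id": "c-9", "timestamp": 1672531200, "position_s": 310.0},
    {"user_id": "u-2", "content_id": "c-9", "timestamp": 1672531201, "position_s": -5.0},
    {"content_id": "c-9", "timestamp": 1672531202, "position_s": 12.0},
]
clean = [e for e in events if is_valid(e)]
print(f"{len(clean)} of {len(events)} events kept")   # 1 of 3 events kept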

Analyze and measure, measure and analyze

All this hard work of capturing that data, making sure they are good quality and segmenting our audience correctly, can lead to very nice, but complex metrics and analytics.

Again, this would be falling short. In addition, the CDN itself can already offer us this type of dashboard without having to do all the additional hard work. But leaving it like this would be a waste of time, so we must go one step further.

The point is that measuring and analyzing our audience is nothing more than a new source of information on which to base decisions.

Just as Kantar Media used to tell us who was watching our linear DTT channels and when, we now have these endless dashboards. But what decisions did we use to make with that information? Well, the programming of new content, editorial decisions about the content to be produced and even the creation of new channels or themes. And that's when we're back where we started. Let's use all these data - our correctly processed streaming viewing metrics - to create our FAST channels.

Fig. 1: Example of NPAW's Youbora dashboard.

Customization is the key

So we have the content (our entire VOD catalogue); we have our audience and we know them (thanks to our properly captured, segmented, quality metrics); and we have the technology that allows us to group this content in order to create new linear channels. We have everything; we just have to connect the dots to get the full picture. Let's get to it.

There is a detail that we have skipped before, and not a trivial one. It is the fact that, in order to have these metrics with the right quality, there is an indispensable requirement: our viewers must register on our platform. If not, how do we know who sees the content - whether it is a man or a woman, whether it is always the same person from the same place, or a family of nine people? There are several strategies for this.

The most direct one is to associate payment with access to the content, that is, a subscription-based model. It is then obvious that registration is necessary and, of course, in keeping with the GDPR, that viewers must be asked to provide the appropriate data. This clashes head-on with the F of our FAST channels, the free-of-charge part. Remember that these channels are going to include advertising, so a subscription model must be handled with care. There are models that follow this example, but only the major players - Netflix and the like - can afford to do it thanks to the quality and, above all, the amount of content they have.

The second strategy, and the one applicable in this case, is to offer viewers something in return. If you tell me who you are, you will have the content you like, created for you, with ads that interest you and will not bother you. Deal?

Deal, of course. And this is the true new model of FAST channels: the use of these metrics to generate customized channels, as if they were tailor-made for the viewer. But this is more than just an intelligent grouping of the content we already have.

Automate and ready to go!

Of course, having an army of people looking at metrics dashboards and thinking about how to group the different parts of our huge content catalog so as to offer them to our viewers doesn't seem the easiest or the cheapest of approaches. That's why the next obvious step is to automate this whole process.

Linking our analytics platform (including our CDN) with our CRM systems, where our viewers' data are stored, and using this information to automatically position the VOD content on our OTT platform and thus deploy new channels is the culmination of the whole process. And it's not technically that difficult - let's see.

We have data that tells us, for example, that 30-year-old women who reside in a specific neighborhood of Valencia spend an hour and a half, at around 11 pm, watching property auction programs. If we have our content with the right metadata - something we have not covered today, and which is a requirement not so much for FAST channels as for our OTT to be of quality in general - it is easy to make the most of it and offer this viewer segment customized content. That is, a channel that every day at 11 pm will show three episodes of our favorite twins and where, in addition, the ads between these episodes are from those decoration and refurbishment stores that they are going to visit the next morning. The squaring of the circle, nothing less.
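A minimal sketch of that automation - matching a viewer segment's preferences against content metadata to fill a nightly slot - could look like the following. The segment, the catalogue entries and the scoring rule are all invented for illustration:

# Illustrative sketch: fill a nightly FAST-channel slot for one viewer segment
# by matching the segment's preferred tags against VOD metadata. All data and
# the greedy scheduling rule are assumptions for the example.
segment = {
    "name": "valencia-property-fans",
    "preferred_tags": {"property", "renovation", "auction"},
    "slot_start": "23:00",
    "slot_minutes": 90,
}

catalogue = [
    {"id": "twins-s02e04", "tags": {"property", "renovation"}, "minutes": 42},
    {"id": "auction-wars-e11", "tags": {"auction"}, "minutes": 45},
    {"id": "cooking-duel-e02", "tags": {"cooking"}, "minutes": 50},
    {"id": "twins-s02e05", "tags": {"property", "renovation"}, "minutes": 43},
]

def score(item: dict) -> int:
    """How many of the segment's preferred tags this title matches."""
    return len(item["tags"] & segment["preferred_tags"])

schedule, remaining = [], segment["slot_minutes"]
for item in sorted(catalogue, key=score, reverse=True):
    if score(item) > 0 and item["minutes"] <= remaining:
        schedule.append(item["id"])
        remaining -= item["minutes"]

print(schedule)   # e.g. ['twins-s02e04', 'twins-s02e05'] -> tonight's 23:00 line-up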

Advertisers, the other added value

Last, there is a key component to all of this: ads. Someone has to pay for this whole system and for the effort needed to get here, of course, and that someone is our advertisers. But of course, something must be given to them in return - just as we give our viewers something in return for providing their data and agreeing to view these ads, the deal is similar.

You are going to pay me to advertise on my FAST channels above what you paid me on my traditional linear channels. But why so? Because I’m offering to reach exactly the audience you want, whenever you want. Not a bad deal, uh?

Advertisers view this proposal in a very positive way, as it closely resembles what online ad platforms offer them. There is no need to conduct massive campaigns of repeated, costly advertisements on all sites at once and let everyone see the ads to find the needle in the haystack. Reaching out to those viewers who are potential clients of mine and offering them what they want at the time of day when they are most likely to accept my offer looks better, doesn't it?

That's why FAST channel ad prices are justified and higher than traditional ones. That's why this business model works and is so interesting.

Conclusions

We have a lot of content that we have already paid for in our OTT platform's VOD catalogue. In addition, all this content has its associated metadata and is perfectly categorized. We also have an audience that we know, since they have registered on our platform, and, thanks to the quality metrics and the correct segmentation that we have carried out, we know what they like watching, and when.

On the other hand, we have advertisers who are tired of paying huge amounts of money to reach 100% of our viewers when, in truth, they only want to reach that niche market that will accept their offering and that accounts for 5% of our audience. It's not that they don't want to pay for promotions. They both want and need to do that, but they also want their campaigns to be better and their return greater. No one wants to throw money away, of course.

FAST channels join these two worlds together. They offer our viewers a selection of content based on their history, their tastes, the preferences of their peers and novelties, according to what our quality analytics reveal. And they offer our advertisers a selected audience, very likely to like their offering and highly receptive to their proposals. Therefore, the return on advertising investment is higher - almost guaranteed - compared with previous models. As we said, the squaring of the circle.

Answers supplied by Hiren Hindocha, CEO, Digital Nirvana

What does Big Data bring to this sector?

When we talk about big data in broadcast, we're talking about the hundreds of terabytes, or even petabytes, of data that a system gathers during direct interaction with end users. Typically that happens when broadcasters make their content available through VOD or streaming options. Broadcasters can analyze this big data to understand customers' preferences, which, in turn, helps them serve better content to viewers and serve the right demographic to advertisers.

Besides the massive amounts of data exchanged between broadcasters and end users, big data also refers to the many content feeds most broadcasters ingest continuously and simultaneously. For example, the volume of incoming video feeds for a news organization is huge: several hundred gigabytes to terabytes on a daily basis. If broadcasters can make sense of that big data, they can use it to help make content.

What are the possibilities of artificial intelligence in the broadcast industry?

Applying artificial intelligence across audio and video opens a world of possibilities for the broadcast industry. For example, speech-to-text technology has reached a point where it is better than humans at understanding specific domains. A well-trained speech-to-text engine can provide a very accurate transcript and captions of incoming content. At the same time, other well-trained engines can perform facial recognition, perform on-screen text detection, detect objects in the background, and more.

In the case of multiple and continuous incoming video feeds, artificial intelligence can help describe what is in the feed and make it very easy for editors to find what they’re looking for. AI capabilities can also generate metadata that makes the content easily searchable and retrievable, leading to easier content creation and better content publishing decisions.
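As an illustration of what AI-generated metadata can mean in practice - this is not Digital Nirvana's pipeline, and transcribe() and detect_faces() below are hypothetical stand-ins for whatever engines a facility actually uses - a clip record could be enriched so that editors can search it:

# Rough sketch: enrich a clip record with machine-generated metadata so that
# editors can search it. transcribe() and detect_faces() are hypothetical
# placeholders standing in for real speech-to-text and face-recognition engines.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Clip:
    path: str
    transcript: str = ""
    faces: List[str] = field(default_factory=list)
    keywords: List[str] = field(default_factory=list)

def transcribe(path: str) -> str:            # placeholder for an STT engine
    return "the prime minister arrived in brussels for the summit"

def detect_faces(path: str) -> List[str]:    # placeholder for face recognition
    return ["Prime Minister"]

def enrich(clip: Clip) -> Clip:
    clip.transcript = transcribe(clip.path)
    clip.faces = detect_faces(clip.path)
    # naive keyword extraction: longer words pulled from the transcript
    clip.keywords = sorted({w for w in clip.transcript.split() if len(w) > 6})
    return clip

clip = enrich(Clip("feeds/news_0417.mxf"))
print(clip.keywords)   # ['arrived', 'brussels', 'minister'] -> searchable terms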

How can all these technologies enrich the content consumer experience?

Once content providers become more familiar with user preferences, they can bubble up content within their archive that is better suited to those preferences. They can also use AI to quickly access data that will inform their content feeds, such as in social media channels.

Take World Cup soccer, for example. Rightsholder Fox Sports could use AI technology to identify moments in the game that are worthy of viewing, and within a few minutes of the game ending, they can put up those highlights on YouTube. Before AI, this process would have taken a human many hours.

And of course, the more consumers watch content that is in tune with their preferences (action, drama, certain news topics), then the better the system gets at predicting and serving similar content. That’s an example of tailoring the content for a better consumer experience.

How should traditional broadcasting adapt to these technologies to get the most out of them?

Broadcasters need a website or platform where users can search, find, and consume content. The smaller the chunks broadcasters produce, the greater the consumption, which means they need to be able to capture all of that consumer information and make it useful. To be able to do that at scale, broadcasters have to adopt technologies that can process the information faster and better than employing an army of people.

From what perspective does Digital Nirvana approach the possibilities of artificial intelligence?

Digital Nirvana believes AI has great potential to accelerate media workflows and make life easier for our clients. To realize that potential, we're always looking for new and better ways to help our clients use artificial intelligence tools like speech-to-text and facial or object recognition to describe what is in their audio and video.

Digital Nirvana is doing a lot of work on training speech-to-text engines to automatically recognize who is speaking and what they are saying — such as distinguishing one media personality from another and identifying different topics.

How does Digital Nirvana intend to take advantage of the confluence of both technologies?

Our focus right now is to leverage AI technologies in the audio, video, and natural language processing sectors. Natural language processing is the ability to understand what is being said in the content. Not only can we provide a verbatim transcript of what is being said, but we then use natural language processing to figure out who is doing the talking and what the topic and context are. For example, our Trance application uses multiple technologies, including automatic speech-to-text and an automated translation engine. Our goal is to make sure those engines keep getting better and better.

It has not yet become mainstream technology, but there are already many developments and pilot projects. Which ones would you highlight as the most challenging and interesting?

One pilot project we've been working on with a major U.S. broadcaster is automatic segmentation and summarization of incoming news feeds. Suppose the programming in the feed lasts 60-90 minutes and contains multiple segments on different topics. Today in production, we are generating real-time text of that content, but in the future, we'll automatically be able to figure out which people and places are being discussed in that feed, then provide a headline and summary of each of the segments. We'll also be able to detect changes in topics and categorize accordingly.

This is not an easy thing to do.

A similar use case relates to podcasts. Today, a well-designed podcast will have what we call chapter markers within an hour-long or 45-minute podcast. The chapter markers delineate the different segments, and there are show notes related to each chapter marker. Right now this process is done manually.

We foresee technology that will listen to a podcast and automatically generate chapter markers along with a summary of each chapter.
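To give a feel for the problem - this is not Digital Nirvana's method, just a deliberately naive sketch - chapter boundaries can be approximated by starting a new chapter whenever the vocabulary of a transcript block barely overlaps with the previous one:

# Naive chapter-marker sketch: start a new chapter whenever the word overlap
# between consecutive transcript blocks drops below a threshold. Real systems
# use trained topic-segmentation models; this only illustrates the idea.
def words(text: str) -> set:
    return {w.lower().strip(".,") for w in text.split() if len(w) > 3}

def chapter_markers(blocks, threshold=0.2):
    """blocks: (start_time_seconds, text) pairs; returns chapter start times."""
    markers = [blocks[0][0]]
    for (_, prev), (start, curr) in zip(blocks, blocks[1:]):
        prev_w, curr_w = words(prev), words(curr)
        overlap = len(prev_w & curr_w) / max(1, len(prev_w | curr_w))
        if overlap < threshold:               # topic shifted -> new chapter
            markers.append(start)
    return markers

podcast = [
    (0.0,   "today we talk about camera tracking inside led volumes"),
    (310.0, "led volumes need camera tracking and bright panels"),
    (620.0, "now for sports rights and the economics of streaming deals"),
]
print(chapter_markers(podcast))   # [0.0, 620.0] -> two chapters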

Finally, Digital Nirvana is developing an advertising intelligence capability for a large ad intelligence provider that needs to analyze advertisements at scale. This provider must process close to 20 million advertisements per year, and there is no way to do it manually. They have to use technology.

The technology we're developing will look at an advertisement — whether it be outdoor creative, a six-second social media advertisement, or a 30-second broadcast commercial — and determine the product, the brand, and the category (e.g., alcohol ad, political ad, automobile commercial). That kind of analysis is a challenge, and being able to do it automatically will significantly improve this company's workflow.
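As a toy illustration of the classification task being described - a production system would use trained models over audio, video and text, whereas the keyword table below is purely invented - categorising an ad from its transcript or on-screen text might start like this:

# Toy ad-categorisation sketch based on keyword matching against a transcript
# or OCR text. Categories and keywords are invented assumptions; a real system
# would rely on trained audio/video/text classifiers.
CATEGORY_KEYWORDS = {
    "automobile": {"horsepower", "mpg", "sedan", "lease", "dealer"},
    "alcohol":    {"brewed", "hops", "vineyard", "drink responsibly"},
    "political":  {"vote", "candidate", "campaign", "approve this message"},
}

def categorise(ad_text: str) -> str:
    text = ad_text.lower()
    scores = {cat: sum(kw in text for kw in kws)
              for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(categorise("Lease the new sedan from your local dealer today"))   # automobile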

What future developments is Digital Nirvana involved in regarding the capabilities of this technology?

Digital Nirvana already processes media in a multitude of languages. Our goal is to keep evolving so that we not only improve the accuracy of our existing languages but continually add new ones.

Also, we are looking at ways to apply generative AI — AI that helps generate content — to the media and entertainment space. 

Answers supplied by Ian Hamilton, Chief Technology Officer, Signiant

What does Big Data bring to this sector?

Signiant participates in the upstream end of the media supply chain (B2B vs B2C), and while most people think about Big Data in media in the context of analysing information about consumer behaviour (because inherently there are a lot more consumers than producers), there are also large and diverse data sets in the B2B media world that are challenging to collect, store, share and analyse. These data sets can provide useful insight into optimizing modern media businesses. As such, to the same extent that Big Data can be used to create a more engaging experience for consumers, it can also be used to optimize the technical processes involved in creating that media.


What are the possibilities of artificial intelligence in the broadcast industry?

This is a broad question. The possibilities of AI are almost endless in every industry, but with a near-term focus, the biggest opportunity I've seen on the B2B side of broadcast (excluding some of the ways Signiant employs AI, covered later) is allowing humans to perform tasks more efficiently. This could be labelling data leveraging image recognition (enabling more efficient search) or scene detection (enabling more efficient time-based mark-up).

In many instances the work of an AI must be verified, so people are still involved, but AI allows these people to perform tasks much more efficiently.

How can all these technologies enrich the content consumer experience?

In the B2B media space, applying AI to optimizing the manufacturing and processing of digital content allows more and higher-quality content to be produced and delivered at a lower cost. Obviously, there are lots of opportunities in the B2C space as well, like better recommendations based on consumer viewing history and ratings, but Signiant isn't directly involved in this end of the supply chain (other than to provide transport of the Big Data sets produced by aggregating consumer data).

How should traditional broadcasting adapt to these technologies to get the most out of them?

As noted above, the biggest near-term opportunities I've seen are in augmenting routine tasks with AI to help the humans performing those tasks work more efficiently.


One way broadcasters can easily benefit from AI is by using widely adopted multi-tenant SaaS solutions, because by our nature we have access to usage data that can be analysed in an anonymous and aggregate manner to make better decisions. And, the effectiveness of machine learning systems increases as the quality and size of the training data increases.

From what perspective does Signiant approach the use of technology associated with Big Data?

Signiant approaches the technology surrounding Big Data and AI using Big Data and AI best practices that aren’t specific to the media domain. As a widely adopted multi-tenant SaaS, Signiant is in a unique position to use anonymized data inherent in our system to benefit the whole media industry.

From what perspective does Signiant approach the possibilities of artificial intelligence?

Signiant approaches the possibilities enabled by AI and ML in two ways. First, we apply ML to optimize media transport (a foundational element of our platform … more on this later); second, we provide connections from our SaaS platform to third-party AI systems (like labelling and speech recognition) to augment the metadata available to our system and optimize media manufacturing.

While the Signiant SaaS platform is not a MAM, metadata is still key to what we do. The platform acts as connective tissue between the many authoritative sources of structured information about media to optimize the modern media factory. These sources of information include: information we can extract from the file system through indexing; information we can get from MAMs and other systems via APIs; and information produced by AI.

By cleanly addressing inventory and manufacturing as separate concerns, brittle hard coded legacy workflows can be retired and replaced with systems that leverage more modern approaches (like low-code/no-code).

In the Signiant world, AI is primarily a source of metadata.

How does Signiant intend to take advantage of the confluence of both technologies?

One specific way Signiant takes advantage of Big Data and AI/ML is in providing efficient global access to media. Network optimization is a foundational element of the Signiant SaaS platform. Every time Signiant transports any media, we collect anonymized information about the environment and the results achieved.

Environmental parameters include information about the network, storage and systems involved. Changeable parameters include the transport protocol, the size of data blocks transferred and the number of data blocks transferred in parallel. The main observable result is the achieved goodput (rate of transfer excluding network overhead). This creates a giant anonymous data set that can be used as ML training data to create an AI model for picking the best settings for future media transport. By injecting some entropy into our decision-making process, we also get data that allows us to train the AI on an ongoing basis.
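As an illustration of the general idea - this is not Signiant's actual model; the settings, the environment label and the simple epsilon-greedy policy are assumptions made purely for the sketch - a system can exploit the historically best transfer settings while occasionally exploring others so that fresh training data keeps arriving:

# Toy epsilon-greedy sketch: pick the transfer settings with the best average
# goodput seen so far for an environment, but explore random settings some of
# the time so new training data keeps being generated. Purely illustrative.
import random
from collections import defaultdict

SETTINGS = [
    {"block_kb": 256,  "parallel": 4},
    {"block_kb": 1024, "parallel": 8},
    {"block_kb": 4096, "parallel": 16},
]

# running average goodput (Mbps) per (environment, setting index)
history = defaultdict(lambda: {"avg": 0.0, "n": 0})

def choose(env: str, epsilon: float = 0.1) -> int:
    """Usually exploit the best-known setting, sometimes explore a random one."""
    untried = all(history[(env, i)]["n"] == 0 for i in range(len(SETTINGS)))
    if untried or random.random() < epsilon:
        return random.randrange(len(SETTINGS))
    return max(range(len(SETTINGS)), key=lambda i: history[(env, i)]["avg"])

def record(env: str, idx: int, goodput_mbps: float) -> None:
    h = history[(env, idx)]
    h["n"] += 1
    h["avg"] += (goodput_mbps - h["avg"]) / h["n"]   # incremental mean

# Simulated transfers on a fictional "high-latency WAN" environment
for _ in range(200):
    idx = choose("high-latency-wan")
    goodput = {0: 90, 1: 240, 2: 310}[idx] + random.gauss(0, 20)
    record("high-latency-wan", idx, goodput)

best = max(range(len(SETTINGS)), key=lambda i: history[("high-latency-wan", i)]["avg"])
print("learned best setting:", SETTINGS[best])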

What future developments is Signiant involved in regarding the capabilities of this technology?

As an innovator in media technology, Signiant will continue to look for interesting ways to apply AI on our SaaS platform, but it's most likely that future AI-based innovations will involve connectors to AI-based sources of metadata that enable optimization of the modern media factory.
CINEMA MAQUINA

Post-production is a stage in the production chain of multimedia content as important as script creation or shooting. In the editing suite, the film is rewritten in a much more plausible way than when the screenwriter captures it on paper by drawing on imagination. In this process, work is carried out on fixed material: content that, in a disorderly and "raw" way - as it is often said in slang - tells a story that has mobilized so much money and so many professionals.

This phase includes, in addition to editing, all the tasks that give the final finish to the content. Among them, color grading and VFX production are worth highlighting. Cinema Maquina, a Mexican company that has managed to position itself as one of the most reputable firms in the North American country, is a major representative of such work thanks to the combination of a great workforce and the adoption of cutting-edge technology. In this interview with Ariel Gordon, the firm's CEO, you will learn the details of the technological infrastructure that has led them to achieve this recognition, and which will also lead them to expand their influence beyond their national borders.

How was Cinema Maquina born?

Cinema Maquina was born from the production of a film. I myself, Ariel Gordon, was directing while producing. At a certain point I decided that I was going to postproduce it too. It all started out as a joke. We called the post-production company In My House. We made a multi-format film that wasn’t very much acclaimed by the critics. However, debate arose around post-production. This happened in a context where it would practically cost an arm and a leg to undertake this stage of the production chain. For these two reasons, they began to call us to develop the post-production of different contents.

The company began to grow very organically. Today we are at a very different point compared to our beginnings: we have five HDR color grading suites, a DI theater with a Christie 4K projector, several racks hosting central storage with a Quantum solution offering a capacity of 5 PB, twelve online editorial suites, offline editorial suites, forty VFX artists, etc. We have managed to become a company whose team consists of more than 100 people, with a growing capacity of 15 series and 12 films per year.

The production specialization from which Cinema Maquina started has been segregated from the current company, right?

It is true that on some occasions we have co-produced, but always from the perspective of post-production. Our previous experience in production helped us to be clear about the needs of the projects, but we have focused on the core business: the completion of films, series and premium content. We have had the pleasure of participating in projects that have made it to prestigious festivals such as Berlin, Cannes, Venice, Sundance, etc. Regarding awards and prizes, we have also had the opportunity to win them with series such as "Falco", which got the International Emmy. We have also entered the platform market by post-producing content that has been released on Netflix, Amazon, HBO Max, Vix, etc.

All this we have achieved thanks to constant growth, partly organic, but also thanks to the fact that Mexico has become an important hub for content creation. In this new paradigm, we managed to adapt by taking the film workflows in which we were experts to TV series. The key to achieving this was related to technology.

What is this key?

As you can imagine, series workflows must be highly focused on time optimization. Our contribution was based on buying time with technology. The differential moment that made us advance in this task came in 2017, when we became involved in large projects for the first time. This event, coupled with the demands associated with working with international studios, forced us to up our standards by focusing on precision and implementing quality control procedures. The technological investment that I will now comment on was what distinguished us from other post-production companies, which usually have a more monolithic, less flexible structure.

In this specific case, although people do not usually appreciate it the way it deserves, the ability to deliver any final file quickly and accurately, as well as to solve any adjustment, has a definite impact on workflows for the entire project.

As for the technology on which we laid these foundations, I would say that we could really talk about a whole range. The key for a post-production studio is to integrate the different infrastructures and cherry-pick different software and systems to achieve a single ecosystem. We decided to bet on the Blackmagic DaVinci Resolve platform early on. We understood that, although there were many options for color grading systems with more pedigree at that time, what we needed was to avoid bottlenecks. Today we have five Sony HDR monitors of the best class and a Christie 4220 4K projector, which is the standard for post-production. We also decided to get a centralized storage system. This was something that many post-production houses did not do, because they were selling another working model based on individual systems rather than on a studio-wide ecosystem. Thanks to all this, today we have fifty Resolve stations.

A 16 Gbit fiber network linking all our facilities has also helped to make our work model more efficient. Basically, any of our artists can access any project from any terminal at the facility by using the appropriate credentials.

How has Signiant’s Media Shuttle tool helped you achieve this flexibility?

This solution has provided us with an efficient way to connect our infrastructure with the outside world. We started from the need for a channel to receive and send remote material to and from our system. Signiant provided us with a simple, friendly and secure tool with which we could meet our needs. Media Shuttle gives us the ability to transport raw or edited material via download and upload links. In addition, its platform has accelerators in place to gain even more time. The first time we used it was in the post-production of the Netflix series "Selena". This happened two years ago, and at that time we were able, thanks to this tool, to overcome the traditional workflows associated with the runner, a member of the production team who transports the footage from the set to the post-production facility located 2,777 kilometers away. This way of working set a precedent in our system and we have put it back into practice on many occasions for projects shot around the country. In fact, I believe that the traditional mode of communication is becoming increasingly obsolete. Our plan to strengthen this process is to grow to a 20 Gb connection with two Internet connection routes.

What is the obstacle to this process becoming a standard?

The Internet. With the evolution of this network and the growth of mobile networks such as 5G, I do not doubt that in a few years we will be uploading all the material from the set to the post-production storage using tools such as Media Shuttle. But that day has not yet come.

How can cloud technology affect your processes?

It certainly will, but for the moment what is really playing a very decisive role is the hybridization between local systems and cloud-based systems. We have achieved this by deploying our own connected storage, protected with the relevant security measures. All our VFX artists and colorists can access the same centralized storage via online stations. Still, today it is very expensive to have a cloud ensuring concurrent access by many clients with guaranteed speed and support, that is, hot storage. The focus of the cloud business, at this time, is on companies that use small files. In the case of post-production, it is not that the infrastructure does not exist, but it is not profitable yet. However, we have already done hybrid projects with Netflix and Amazon, for example. We enjoy being part of the evolution of this process, but it is still in its early stages.

How did you react when the pandemic started?

At that time we didn't have everything as clear as we do now. We managed to move our entire operation to our team members' homes using technology, and in record time. We achieved this thanks to the collaboration of partners such as Signiant. We feel indebted to them, because they allowed unlimited access to their services by unlocking the limits on their file transfer systems. Today we know that there are other, more efficient workflows than sending the material between one location and another through the Internet, but when the lockdown began due to the pandemic, this method, provided by Signiant's technology, was our lifeline.

You remarked that you have a system for VFX. What hardware and software do you have in Cinema Maquina to perform these functions?

As for VFX streams, we have integrated state-of-the-art Dell servers. The interesting thing is that, through them, we have virtualized all the VFX machines.

Machine virtualization is the future, no doubt, and our efforts are all going in that direction.

As far as software is concerned, our main pipeline is based on Nuke from The Foundry. We have twenty NukeX, ten standard Nuke and five Nuke Studio licenses that allow us to see all the VFX in their final state. To get to this stage we use Hiero, from the same company. Then we have Maya, 3ds Max and Houdini to do simulations, 3D, etc.

One of the great revolutions that the multimedia content creation industry is experiencing is virtual production, LED volumes, etc. How does this technology affect your work?

We have been involved in projects where volumes are being considered. Honestly, I think this technology seems easier than it actually is. The way of implementing and creating virtual worlds still remains complex.

There is a great deployment of predefined templates, it’s true, but the point is that when you want to see something totally cinematic, the templates don’t work. Much progress has been made in tracking technologies, but as long as modelling does not evolve, we will not be able to achieve the full potential of this technology. Therein lies the challenge, in the creation of virtual worlds.

In addition, LED volumes are not always necessary or suitable for all situations. For example, it does not make much sense to recreate a floor with LEDs, because you will still have to create all the scenery physically. I anticipate that it will evolve to give us many more possibilities. Everything is moving very fast, so we will probably have overcome these challenges in the next couple of years.

What will be the next step in the evolution of your workflows?

Our goal this year is to become 100% hybrid. We believe it is necessary to build up the community by going to the office, but we also believe it is important to maintain a high standard of private life and a good work-family balance.

To achieve this we are now deploying remote desktops and hybridizing systems. We do not just want to access remote systems via VPN, as this technology generates a lot of latency; we believe there are far more advanced ways to work remotely today. For example, for the VFX virtual systems, we have used the Teradici protocol to implement remote work solutions.

Likewise, we want to remotely access all the desktops in our infrastructure using the Parsec protocol, which comes from the video game industry and is specialized in peer-to-peer communication. We chose it because we believe it offers a lot of speed and accurate color playout.

As I was saying, we will also continue to grow in system virtualization. Our goal is to virtualize the company 100%. This implies, in addition to virtualizing the machines, also virtualizing the administration side of the business.


We want to introduce audio post-production in order to be able to offer 360° solutions to our projects. We are now working on a pilot project for which we will create a label specialized in fast, quality post-production for budget projects. This division of the company will be called Sketchbook Media.

The company, in these 12 years, has become much stronger and has accumulated a lot of knowledge, and we are interested in expanding to other markets. We want to work throughout Latin America through alliances and/or acquisitions.

I believe that the success of Cinema Maquina is based on the ability to modify the workflow and on the fact that, by definition, we do it based on the needs of each project. We have not settled on a default path to follow, a work path from which we do not deviate one iota. In fact, we believe that that way of working and of conceptualizing a company is wrong. As technology advances, we re-examine the possibilities and adapt them to the new needs required by the projects. We believe that understanding the market and the technology makes us flexible. And the other key is the creativity of the team that executes the workflows. Without talent, there is no such thing as a post-production studio. However, it is important to remember one crucial detail about our business that we should never forget: technology and creativity must remain at the service of telling a story. Therefore, we strive to understand each project on which we work, analyze it and apply what is most appropriate in each given situation.


Ensuring safe video streaming

The SRT Alliance is an organization created by the company Haivision whose goal is to overcome the challenges associated with streaming video transmission. The basis of this task is the SRT (Secure Reliable Transport) transport protocol. One of the particularities of this protocol is that it is free and open source.

Thanks to this feature, its evolution is enriched by constant, extensive collaboration. Haivision has named this project for growing the capabilities of its SRT protocol the Open Source Project.

On the other hand, in addition to ensuring efficient transport in streaming, the SRT Alliance also aims to involve as much of the global multimedia content production industry as possible in the adoption of this protocol. One of the key events for achieving this is PlugFest, promoted by the SRT Alliance. In fact, as they anticipate in this interview, the 2023 edition will be very special.

On how to achieve these two crucial objectives, we had a conversation with Pablo Hesse, Director, SRT Alliance. In this discussion, we learned about their methods, their past, present and future, as well as the advantages of being involved. Through his answers you will understand why SRT is one of the best options for connecting facilities with remote transmission points.


How was the SRT Alliance born?

For over 18 years, Haivision has been dedicated to video networking technology. As video streaming becomes ubiquitous, Haivision recognizes the need to ensure broader ecosystem collaboration, given that real-world custom workflows inevitably combine technology from multiple vendors. To ensure interoperability in these complex ecosystems, Haivision believes that SRT can serve as the low-latency protocol that bonds video streaming technology together.

Haivision recognizes that no matter how important SRT becomes, adoption is unlikely if the technology is proprietary. Technology providers need reassurance that they have control of their own technology roadmaps. Thus, the open-source initiative.

By opening up SRT technology to the world, everybody, including Haivision customers, can benefit from an open ecosystem where they can integrate their video streaming technologies with other video solutions, network infrastructures, and systems.

In 2017 Haivision open sourced the SRT protocol and supporting technology stack, and formed the SRT Alliance to support its adoption. Since then, major streaming services, cloud platforms, and broadcast solution providers have supported and adopted SRT, including AWS, Google Cloud, and Microsoft.

What are the objectives of the association?

The mission of the SRT Alliance is to support the free availability of the open source SRT video transport protocol to accelerate innovation through collaborative development.

The SRT Alliance promotes industry-wide recognition and adoption of SRT as the common and de facto standard for all low latency Internet streaming.

An important goal of the SRT Alliance is to make new features available to the open-source community. Community-contributed open source SRT functionality is available to any developer, and new developments by SRT Alliance members will migrate back into open source SRT on a regular basis.

What are the lines of action being followed by the association to achieve its objectives?

Joining the SRT Alliance is free and provides its members with several benefits, including access to the latest information about SRT solutions, partnership and marketing opportunities, and networking opportunities with SRT Alliance executives and other members.


Under what conditions is SRT an open source protocol?

SRT is an open-source protocol distributed under the Mozilla Public License 2.0 (MPL-2.0). The Mozilla Public License was chosen to strike a balance between driving adoption of open-source SRT and encouraging contributions to improve upon it by the community of adopters. This way, any third party is free to use the SRT source in a larger work, regardless of how that work is compiled. And, if any source code changes are made by a third party, they are obligated to make those changes available to the community.

Because SRT is an open-source protocol, vendors are secure in knowing that any investment they make in adopting and supporting SRT is protected in the open-source community forever. Open source also allows for increased client adoption and a clear path to standardization within the community.

What is Haivision’s business model for SRT?

Our goal is to overcome the challenges of low latency video streaming, and ultimately change the way the world streams. The best way to do this is by making the SRT protocol freely available so we can accelerate innovation through collaborative development. All SRT Alliance members will benefit from being a part of a growing network of SRT endpoints and have the opportunity for collaboration in business, technology, and workflow improvement.

What distinguishes the SRT protocol from other options in the industry?

The SRT protocol is widely used in the streaming world, and the SRT Alliance includes more than 592 members offering thousands of SRT-ready solutions. In fact, Haivision's 2022 Broadcast IP Transformation Report indicated that SRT is the most widely used transport protocol for broadcast video. Additionally, the SRT protocol offers built-in statistics to help with troubleshooting and monitoring. It also provides live information on network transmission quality that can be used as real-time feedback to observe video encoding processes and adjust video bandwidth.

As a UDP-based protocol, SRT has the added advantage of addressing limitations caused by network congestion control and experiences lower latency compared to segment-based protocols. Whenever there are situations in which low latency, security, or network reliability come into play, SRT is needed. Since SRT is content agnostic, it offers flexibility in choosing and enhancing audio and video codecs and enables new trends at the forefront of technology.

What is the penetration of the SRT protocol in broadcast?

The transition from SDI to IP has been a hot topic for some time, but in the last two years it has become more critical than ever. From distributed workforces to remote production, the pandemic continues to present both significant challenges and opportunities for the industry, forcing broadcasters to rethink their approach to creating and delivering quality content.

With the technologies and standards now in place, broadcasters are finally realizing the benefits of migrating their production workflows to IP. Transporting video across the internet, cloud, mobile, and local area networks allows for greater flexibility, scalability, and the ability to create new and unique services previously unavailable with fixed SDI infrastructure.

While the internal high-speed networks at broadcasters are dominated by IP networking technologies like NDI and SMPTE ST 2110, SRT bridges the gap to remote facilities, field reporters and the cloud.

The trend towards remote production, with limited equipment and staff sent on location, continues.

At the same time, live production workflows are themselves being decentralized. Broadcasters are leveraging low latency streaming technologies to enable real-time collaboration among talent and staff over long distances and even to remotely operate broadcast equipment.

While the broadcast industry is clearly undergoing a transformation towards IP and cloud technology, transition paths vary. Most broadcasters are mixing different technologies, both on-prem and in the cloud, while leveraging existing investments.

SRT, SMPTE ST 2110, NDI and even 5G are among the many options for IP-enabled workflows, as are the growing number of cloud-based solutions for fleet management, stream routing, and broadcast production.

As with other industries, broadcasters are also accepting that many of the recent changes to the way we work and produce content are here to stay. The new normal is hybrid, whether it's how broadcasters work or what technologies they use. Making broadcast workflows as flexible and agile as possible will be key to their success.

Where is this protocol a good fit, and where is it not?

The SRT protocol was specifically developed to maintain low latency and high video quality when streaming over networks dealing with packet loss, jitter, and fluctuating latency and available bandwidth. Even if latency is not the main criterion for a certain use case, SRT is often chosen due to the robustness of its streams.

The SRT protocol helps to overcome firewall connectivity restrictions and offers AES encryption to secure the content. Multiple network connections can be bonded for redundancy and failover scenarios, ensuring packets arrive at the receiver via multiple network paths.


An area we are currently exploring, which excites the team here, is the ability to bond multiple network interfaces in order to provide more bandwidth than a single link. This can be extremely useful when streaming in situations where there is a high concurrency of users over the same network.

How far can the SRT protocol reach? Is it capable of sending a 4K 50p HDR WCG signal with 5.1 surround sound?

The limits of the SRT protocol are mostly defined by available network bandwidth or concurrent connections. In terms of resolution, frame rate, wider color gamut, number of audio/video channels, and codecs used, SRT is very flexible. It’s content agnostic and transports packets, no matter what they contain, making it a great future-proof choice.

How are you preparing for the cloud?

The SRT protocol has enabled cloud-based workflows from the beginning, in 2012, and especially since the open-source release in 2017. To overcome the challenges of bringing content to the cloud from the field or securely moving content between cloud instances, the SRT protocol is one of the main options. In the early days, this was mostly due to a lack of alternatives; today, the reason is that SRT is recognized as a reliable, mature and easy-to-implement protocol. SRT development maintains a focus on cloud trends and is currently active in the Media over QUIC working group at the IETF. SRT Alliance members also give valuable feedback on feature needs to adapt to quickly emerging cloud workflows.

Many video contributions over telecommunication networks are made using the SRT protocol when there are other alternatives. Why is there so much reliance on it?

The reasons for choosing the SRT protocol are manifold. Above all, SRT solves problems that users face when streaming in the real world and has earned a reputation as a reliable protocol. Seeing is believing, and it's easy to get started on your own with SRT. Using a smartphone camera as an SRT sender or monitoring SRT streams with mobile devices and free apps is as simple as watching an SRT stream in VLC media player. On the other hand, you can easily find broadcast-grade equipment supporting SRT. This great flexibility and interoperability across a wide range of devices, software, and cloud solutions from many manufacturers and open-source initiatives is nearly as important as the capability of compensating for packet loss, jitter, and fluctuating bandwidth.
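To give a sense of how low the barrier to entry is - this is only an illustrative sketch, assuming local ffmpeg and ffplay builds that include libsrt (check with "ffmpeg -protocols"), and the file name, port and passphrase are arbitrary example values - a local SRT listener and sender can be wired up in a few lines:

# Illustrative sketch of a local SRT test, assuming ffmpeg/ffplay built with
# libsrt. "input.mp4", the port and the passphrase are example values only.
import subprocess
import time

PORT = 9000
PASSPHRASE = "0123456789srtdemo"   # libsrt requires 10-79 characters

# Receiver: listen for an incoming SRT stream and play it.
receiver = subprocess.Popen([
    "ffplay", "-fflags", "nobuffer",
    f"srt://0.0.0.0:{PORT}?mode=listener&passphrase={PASSPHRASE}",
])

time.sleep(2)   # give the listener a moment to come up

# Sender: push a local file as an MPEG-TS stream over SRT to the receiver.
sender = subprocess.Popen([
    "ffmpeg", "-re", "-i", "input.mp4",
    "-c", "copy", "-f", "mpegts",
    f"srt://127.0.0.1:{PORT}?mode=caller&passphrase={PASSPHRASE}",
])

sender.wait()
receiver.terminate()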

In which other industries and markets is the SRT protocol being introduced?

The network challenges that SRT helps to overcome are not specific to a single market or industry. This is why SRT is chosen by broadcasters as well as those within the defense, enterprise and medical industries, sports and entertainment, and science and public institutions. The needs of broadcasters will continue to be a focus in future development, as well as processing feedback from the SRT community from various markets. SRT is about making streaming more reliable and secure, regardless of whether you're in a TV studio, helicopter or ship, in the field with an OB van or just a smartphone, in cooperation with a complete film crew or on your own in your home office.

What is the future of the SRT Alliance?

We will continue to support all our members through the multiple channels we have in place. One of those vehicles, as we mentioned, is PlugFest, and we are yet to confirm a very special edition for next year, so if you want to participate do not hesitate to submit your application to the SRT Alliance or send us an email to info@srtalliance.org.

87 SRT

OB1

The great cinematic bet of Procam Take 2


Procam Take 2 plans, supports and delivers cinematic multicamera solutions for a wealth of projects, from large-scale television productions to live-streamed concerts and festivals.

The PT2 Projects division was established in 2016 with the appointment of Vicky Holden (Director of Projects) and Dan Studley (Technical Director) in response to the market’s growing demand for multicamera, fixed rig, and portable production units.

In the following lines you will find everything you need to know about Procam Take 2, explained by Dan Studley, who has managed some of the UK's most well-known prime-time television shows and is renowned for developing and integrating pioneering workflows across many entertainment, reality, fixed-rig and live music productions.

Dan Studley, Technical Director

What sets OB1 apart from other companies' offerings?

Rather than following the trend of a traditional broadcast truck, OB1 is a cinematic-facing OB truck. This means it can bring in any cinematic HD frame rate, such as 23.98P and 25P, and paint, vision mix and transmit those signals without having to change to an interlaced workflow. OB1 also offers 'any camera, any lens', allowing the client to use any geared iris lens on any camera it fits. In the cinema world, this gives producers, directors and DoPs a huge range of options for their camera choice.

Why is OB1 the largest OB Truck in the UK?

It isn't. With capacity for up to 20 cinema camera channels, it is the largest cinematic-facing OB truck in the UK.

How did you manage to introduce 40 UHD signals?

We offer 40 HD inputs, and realistically we can do this in UHD if required; however, there will be limitations. The truck is designed to meet the needs of the live music and special events world, whose workflow is generally a UHD LOG ISO record on every camera, with a painted HD line cut and TX. Therefore, OB1 is built as an HD truck. The demands of UHD have been built in, just not throughout the complete TX chain. We hope to upgrade this over the next 12 months if the requirement presents itself.
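The gap between an HD TX chain and a complete UHD one is easy to see in the raw link rates. The sketch below uses the nominal SMPTE SDI figures per link; the aggregate numbers, which simply multiply by the truck's 40 inputs, are a back-of-the-envelope illustration, not OB1's actual wiring plan.

SDI_RATES_GBPS = {
    "1080i50 over HD-SDI": 1.485,
    "1080p50 over 3G-SDI": 2.97,
    "2160p50 over 12G-SDI (or 4x 3G-SDI quad link)": 11.88,
}

for fmt, gbps in SDI_RATES_GBPS.items():
    print(f"{fmt}: ~{gbps} Gbit/s per feed")

print(f"40 HD feeds at 3G-SDI : ~{40 * 2.97:.0f} Gbit/s")
print(f"40 UHD feeds at 12G-SDI: ~{40 * 11.88:.0f} Gbit/s")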

How does the signal transmission work, is it SDI or fibre? What protocols do you work on?

OB1 is built in SDI throughout. It also has multiple comprehensive fibre satellite systems built in for off-board monitoring.

Which manufacturers have you relied on for the cameras?

We have built OB1 around the Sony Venice, Arri Alexa Mini and Arri Amira workflows, but it is also wired for Sony CCUs for the more traditional OB cameras. We are looking to expand this workflow to the Arri Mini LF, Arri 35 and other large-sensor cameras in 2023.

What video and audio mixing solutions are in the truck?

Our default vision mixer is a Ross Carbonite, and the audio desk is a Calrec Omega.

Regarding graphics, which manufacturers' solutions are included in OB1? Do you have the capability to generate augmented reality graphics?

Our Ross Carbonite vision mixer has the XPression license for GFX; however, this is limited by the license and is not a comprehensive solution. OB1 does have the option for external GFX machines if supplied separately. We don't have the capacity for augmented reality graphics.

Is there really a market need for so much signal-handling capacity?

In short, yes. We recently worked on an event with 20 outside sources, in addition to the 24 internal camera feeds we were looking after. Signal capacity is so important when handling big jobs.

In the age of cloud and remote production, what does the future hold for OB units?

We can see the appeal of remote productions in sports and other live broadcasts; however, in the world of special events and music, the directors and producers want to be on the edge of the action. It’s this area that we are focused on, so remote productions are currently not on our roadmap.

How will these expensive and voluminous units adapt to a future dominated by software located in data centres and accessed through high-capacity networks such as 5G or fibre?

So long as directors and producers want to be on location, so will the trucks. As technology, reliability and latencies improve, equipment will ultimately move to more cloud-based productions, allowing more spacious production areas and OBs. In turn, this will make way for smaller, quieter and more energy-efficient units.

What will the OB Vehicles of the future look like and how will they be used?

I suspect that the size of OB trucks will shrink as more equipment is put into the cloud. This will allow a lower power draw and a quieter set-up. I also suspect the push towards more sustainable vehicles with self-powered devices will be at the forefront.
