TM Broadcast International #124, December 2023




EDITORIAL

With the aim of bridging the technological horizon, TM Broadcast International includes an exclusive interview with Fadi Radi, Chief Creative Officer of BLINX, offering readers an in-depth look into the transformative world of connectivity. Radi shares his vision for the future, exploring how BLINX is reshaping the broadcasting landscape with its innovative news platform aimed at Gen Z.

Keeping an eye on sports productions, TM Broadcast steps into the dynamic world of basketball broadcasting with Stephanie Mignot, Chief Operating Officer of FIBA Media. In our interview, Mignot delves into the peculiarities of FIBA and basketball game transmissions, shedding light on the challenges and strategies that make FIBA broadcasts a truly immersive spectacle.

Besides, readers will be able to explore the groundbreaking intersection of Neural Radiance Fields (NeRF), Automated Production, and Artificial Intelligence in our featured articles. Uncover how these transformative technologies are reshaping content creation, production workflows, and the overall broadcasting landscape. From AI-driven automation to the immersive capabilities of NeRF, these articles provide a roadmap to the future of media production.

Towards the end of this issue, TM Broadcast steps into the Blackmagic lab with our feature on testing its cutting-edge tools. Discover firsthand accounts and insights from the lab, where our expert Luis Pavía rigorously tests and experiments with Blackmagic tools, ensuring they meet the highest standards of performance and innovation.

Come with us and discover the possibilities of broadcasting and content production in a new TM Broadcast International issue, finding new horizons in the ever-evolving landscape of broadcast and media technologies.

Editor in chief: Javier de Martín (editor@tmbroadcast.com)
Creative Direction: Mercedes González (mercedes.gonzalez@tmbroadcast.com)
Key account manager: Patricia Pérez (ppt@tmbroadcast.com)
Editorial staff: press@tmbroadcast.com
Administration: Laura de Diego (administration@tmbroadcast.com)

Published in Spain
ISSN: 2659-5966

TM Broadcast International is a magazine published by Daró Media Group SL, Centro Empresarial Tartessos, Calle Pollensa 2, oficina 14, 28290 Las Rozas (Madrid), Spain. Phone: +34 91 640 46 43


SUMMARY

6

News

24

FIBA, setting a course for the future
As we explore sports broadcasting, TM Broadcast introduces to its readers an exclusive interview with Stephanie Mignot, FIBA Media’s Chief Operating Officer. During this conversation we delve into the dynamics of FIBA’s broadcasting processes and the cutting-edge technology that brings the thrilling game of basketball to audiences worldwide. As the global governing body for basketball, FIBA continuously strives to enhance the viewer experience, seamlessly blending the excitement of the sport with state-of-the-art broadcasting techniques. We invite you all to join us in gaining insight from a sports broadcast expert who plays a pivotal role in orchestrating the intricate dance between technology and sports, shedding light on the innovative strategies and advancements that make FIBA broadcasts a truly immersive spectacle.

34

A revolution in digital broadcasting: Blinx
In an exclusive interview with TM Broadcast, Fadi Radi, Chief Creative Officer of Blinx, delves into his remarkable journey in the broadcast industry spanning over 25 years.



42

NeRF: an innovative approach in the field of computer vision and 3D graphics

48

The Power of AI in Broadcast, by Magnifi

52

Test Zone: Blackmagic Design Studio Equipment (II)



NEWS | PRODUCTS

TAG Video Systems integrates DreamCatcher to enhance replay creation

TAG Video Systems, in a move set to redefine the landscape of replay technology, has announced the integration of Evertz DreamCatcher™ replay technology into its Realtime Media Platform. This development also provides a game-changing solution for the early adopter, Texas A&M University.

DreamCatcher, an IP-based replay system designed for stadiums and fixed sports venues, is now seamlessly incorporated into TAG’s platform. This integration not only broadens the spectrum of choices available to replay users but also unleashes a host of functionalities.

The integration, launched in November 2023, has already been adopted by Texas A&M and demonstrates the interoperability of TAG’s platform. The IP- and COTS-based open architecture enables compatibility not only with DreamCatcher but also with systems from other industry players such as EVS, Nevion, and Imagine. Users now have access to a diverse array of tools. The combination of TAG’s multiviewer and Evertz’s DreamCatcher allows for the display of six different metadata labels, including current time, video timestamp, clip name, clip ID, play status, playlist name, and playtime.

The inaugural adopter of this integrated solution, Texas A&M, attests to the benefits it brings. Jonathan Kerr, Chief Broadcast Engineer at 12th Man Productions, notes: “At Texas A&M, we have one of the largest football stadiums in the country. We’ve been using TAG’s multiviewer with DreamCatcher for several weeks now, and it’s been great. Being able to give our producers the ability to see the DreamCatcher data on our TAG multiviewer allows them to see when a playlist is going to end, when a clip is going to end, and how long the video has been going; all invaluable information that enhances our production values and elevates quality.” 

Robert Erickson, VP Sales Americas, emphasizes TAG’s commitment to customer needs, stating, “TAG always has customers’ needs top-of-mind. We’ve integrated Evertz DreamCatcher technology into our multiviewer, providing our customers with the opportunity to enjoy TAG’s best-in-breed multiviewer alongside the industry’s widely used replay system.”


Avid unleashes upgrades with Avid | Stream IO to boost broadcast production

Avid has revealed the latest upgrades to its Avid | Stream IO™ software subscription platform tailored for broadcast production ingest and playout. The enhancements include additional channel capabilities, playlist construction, closed captioning, and extended codec support. In a bid to elevate performance, Avid | Stream IO’s channel count has now doubled, allowing support for up to eight channels within a single system. This enhancement provides content producers with the flexibility to configure inputs and outputs freely, building on the initial four-channel configuration available since launch. The expansion to eight channels in one solution enhances functionality and also brings added value, according to the company.

Accompanying the augmented channel count is the ability for content teams to construct media playlists directly from the Avid | Stream IO web remote console, facilitating seamless playback during production. This feature offers a solution for manual operation and acts as a backup for business continuity. Avid | Stream IO now supports closed captioning during playout, and this release also introduces increased codec support with the addition of AVC-LongG 12 and 25 formats. Operating on Commercial Off-The-Shelf (COTS) hardware, Avid | Stream IO is available through subscription and, in conjunction with the Avid FastServe® family of video server solutions, offers a wider range of options.

Featuring a flexible architecture that supports SDI streams today and enables future integration with various IP-based formats, the new Avid-optimized platform allows production teams to transition from legacy workflows and on-premises deployment to cloud and IP workflows at their own pace, as the company says. Tailored for live entertainment and multi-camera productions, Avid | Stream IO supports all common production formats, including SDI, with upcoming support for compressed IP streams (SRT/RTMP), followed by NDI and SMPTE 2110 later next year.


FilmLight releases Baselight 6.0

FilmLight celebrates the official release of Baselight 6.0 and introduces Face Track, a new tool based on machine learning (ML).

FilmLight has announced the release of the latest version of its highly regarded Baselight grading software, Baselight 6.0. Baselight 6.0 benefits from several years of ML development and new features designed to bring major gains in productivity and creativity to today’s increasingly demanding and competitive post-production environment. Many of the beta testers are already relying on the new tools, which include an improved and modernised timeline, a new primary grading tool, X Grade, a unique new look development tool, Chromogen, plus a new and ground-breaking ML-based tool, Face Track.

“We’re extremely excited to introduce Baselight 6.0 to the global colourist community,” comments Martin Tlaskal, Head of Development at FilmLight. “We believe this release moves our technology forward significantly, and provides a platform that works in synchronicity with today’s colourist. Face Track, for example, has been proven to greatly reduce the considerable amount of time colourists spend in the grading suite on beauty work – typically the need to repeat processes and corrections for every appearance of each character. By harnessing cutting-edge advances in machine learning Face Track will revolutionise this – allowing colourists to track faces, make corrections or enhancements and apply them across multiple scenes.”

Using an underlying ML model, Face Track finds and tracks faces in a scene, adjusting smoothly as each face moves and turns. It attaches a polygon mesh to each face, which allows perspective-aware tools such as Paint and Shapes to distort with the mesh. The colourist’s productivity is further boosted by the ability to copy corrections and enhancements made in Face Track to the entire timeline – with a simple copy and paste, these previously repetitive corrections can be applied to the same face across an entire sequence or episode.

FilmLight’s Face Track technology has been developed from scratch internally, but the company has also ensured Baselight 6.0 is fully ML-ready. By developing a framework, which it calls Flexi, FilmLight is now in a position to integrate future ML-based tools into Baselight quickly and easily – allowing FilmLight and its customers to access the best grading tools and stay on top of cutting-edge technologies now and in the future.

BASELIGHT 6.0 CHROMOGEN

BASELIGHT 6.0 X GRADE

Norway-based freelance colourist Cem Ozkilicci, who recently won the Spotlight award in the 2023 FilmLight Colour Awards, was part of the Face Track beta programme. “I’ve had the pleasure of using Face Track and it is an absolute game changer,” he says. “It is amazing and cuts down time spent on tedious keys and shapes – not just for brightening/darkening faces, but for lifting eyes, beauty and skin work. Paired with paint tools, Face Track becomes even more powerful. Colourists can spend more time grading and less time drawing shapes on faces and tracking.”

Madrid-based freelance colourist Raúl Lavado adds: “One of the wonders of Baselight 6.0 is the Face Track tool, which has revealed itself as a shining beacon in the world of digital colour. This innovation revolutionises the way we can perfect skin tones, delivering unparalleled precision in facial tracking. The ability to dynamically adjust the colour and texture of moving faces in a scene takes the quality and efficiency of colour correction to new heights. Baselight 6.0 is not just a tool, it is an experience that redefines the art of colour grading.”

Seamus O’Kane, senior colourist at The Post Arm in London, comments: “The new tools in Baselight 6.0 offer a creative jump that is inspirational. It feels like suddenly discovering you can breathe underwater. You thought it would be interesting, but the reality of it is astounding.”

In Baselight 6.0, colourists will also find the RIFE ML-based retimer, a reworked Curve Grade, integrated alpha for easy compositing, a new sophisticated Gallery for improved searching and sorting, plus a wealth of new and enhanced colour tools such as Sharpen Luma, a built-in Lens Flare tool, Bokeh for out-of-focus camera effects, Loupe magnifying for fast and accurate adjustments, an upgraded Hue Angle, and much more.

Enge Gray, senior colourist at Racoon in London, concludes: “Having spent some time on Baselight 6.0, I can honestly say I’m very excited about how the new tools like X Grade and Chromogen, along with the other upgrades and additions, will help us deliver and realise the vision. It really is next level.”


Disguise and Cuebric launch AI-driven photorealistic content generator for virtual production

As part of the partnership, Cuebric has been integrated with the Disguise platform. This means creatives can now use AI to create the shape and depth of 2.5D environments, then import them into Disguise. The result is a plug-and-play scene that can be executed on an LED stage in only two minutes. “Real-time environments look spectacular on-camera, yet often require many hours of artistic and technical build,” said Addy Ghani, VP of Virtual Production at Disguise. “Thanks to our partnership with Cuebric, there’s now another option. Using generative AI, artists can build 2.5D plates, helping them go from concept to camera in minutes so they can tell unique stories in a way that works for them.”

To create 2.5D scenes, users can either add purely generative content or import images from elsewhere into the Cuebric platform. Cuebric then leverages AI rotoscoping and inpainting to segment the images into layers, transforming them from 2D to 2.5D based on the depth of objects. Once the Cuebric 2.5D scenes are imported into Disguise’s Designer software, each individual layer is depth-mapped with an auto-generated mesh. This means that the individual plates are not limited to flat planes, as 3D shapes can be built into each of the plates — resulting in a more realistic parallax effect that works no matter how you move the camera on set.

“Cuebric democratizes filmmaking, removing the cost barrier to creating gorgeous, immersive sets and backgrounds with its plug-and-play 3D effect solution. Leveraging AI to minimize the tedious and maximize the extraordinary in the creative process will breathe new life into the production industry,” said Cuebric co-founders, Pinar Seyhan Demirdag and Gary Lee Koepke. “Desire and demand for extraordinary content has never been greater, so there is no better time to be partnering with Disguise to put the full power of generative AI into the hands of the world’s greatest creators and filmmakers.”



Videosys Broadcast announces new features for its camera control products

Videosys Broadcast is celebrating the season of goodwill by announcing a range of new features for its camera control products, including its flagship Epsilon 4K base station Camera Control Unit (CCU).

“We pay close attention to customer feedback,” says Videosys Broadcast’s Managing Director Colin Tomlin. “We often implement customer suggested features and improvements, dramatically speeding up their production workflows. Customer demand was a key reason why Videosys Broadcast integrated TSL TallyMan protocols into its camera control systems, supporting both tally and UMD data.”

One such customer is EMG Connectivity, whose Technical Director Martin Brosthaus welcomes this innovation, adding that it is especially useful for more complex camera setups or for remote production where equipment is often distributed over different areas, or even countries.

Videosys has also introduced remote control of the AEON-CC and STX-CC transmitters via the web GUI on the IDU and the Epsilon Radio Camera Systems. This innovation significantly expands Videosys’ remote control capabilities and ease of use. Users can remotely control key functions such as transmit frequencies from the comfort of their Master Control Room or Outside Broadcast van.

ARNOLD ROOZENBURG WITH VIDEOSYS BROADCAST CAMERA CONTROL PRODUCTS

“TX control gives users the ability to access status information directly from the transmitter on the IDU or Epsilon control panel and change features such as video-link frequency and power output control remotely,” says Arnold Roozenburg, Sales and Marketing Director for Videosys Broadcast.

Finally for 2023, Videosys now offers uni- and bi-directional Ethernet control for Sony’s FX9 cameras.



NEWS | STORIES OF SUCCESS

Reporter TV Kerala adopts LiveU’s IP-video EcoSystem for nationwide news expansion ahead of 2024 Indian election

Reporter TV Kerala is broadening its newsgathering capabilities across India ahead of the 2024 Indian election, opting for LiveU’s IP-video EcoSystem. The regional channel has outfitted its news teams with LiveU’s LU800 multicam units and LU-Smart apps. The portable IP-video solutions will enable the Malayalam-language news channel to deliver content in nine regional languages on both linear TV and digital platforms. Lamhas Satellite Services, LiveU’s local partner, is handling service and support. Reporters nationwide will utilize LU800 multi-camera encoders and LU-Smart mobile app licenses to provide daily live news coverage. LiveU’s resilience, powered by the LRT™ protocol, played a crucial role in the decision-making process, along with the LU800’s multi-camera capabilities allowing for coverage from different angles using a single unit.

Anto Augustine, Managing Director of Reporter TV, emphasized the channel’s investment in top-tier broadcast technology to expand reach throughout Kerala and across India. The decision to choose LiveU was based on its quality, reliability, and flexibility, allowing the channel to offer diverse programming by covering events with multiple feeds from the same location using a single unit.

Anil Ayroor, President of Reporter TV, highlighted LiveU’s ability to cater to viewers who prefer news in their regional languages, stating that it provides an opportunity to add stories interesting to millions with strong connections to the Kerala region.

Manoj Shah, Director of Lamhas Satellite Services, expressed delight at Reporter TV’s choice of LiveU technology, committing to support the channel as it expands operations across the country.

Ranjit Bhatti, Senior Director of South Asia at LiveU, underscored the significance of 2024 in Indian news and welcomed Reporter TV to LiveU’s customer roster, noting the channel’s role in solidifying business growth during this crucial year.



IMAGE CREDIT: CARLO BORLENGHI | ROLEX

Gravity Media partners with CYCA and Seven Network for Rolex Sydney Hobart Yacht Race broadcast

Gravity Media, a company specialized in complex live creative production and media services, has announced its collaboration with the Cruising Yacht Club of Australia and the Seven Network as the broadcast technology and television production partner for the upcoming Rolex Sydney Hobart Yacht Race. The event, set to commence on Boxing Day from Sydney Harbour, will feature Gravity Media Australia’s comprehensive television broadcast production. This marks Gravity Media Australia’s 18th year as the broadcast and technology partner for the prestigious Rolex Sydney Hobart Yacht Race.

The 78th edition of the race will see a diverse fleet, including multiple former winners, tackling the 628 nautical mile course. A notable competition is expected among four 100-foot maxis: Andoo Comanche, the four-time winner and current record holder; SHK Scallywag; Law Connect; and Wild Thing 100. Gravity Media Australia will provide complete broadcast coverage, incorporating on-board cameras on selected maxis, on-water chase boat cameras, and live helicopter and drone camera footage.

Hosted by Mark Beretta of Sunrise, the commentary team on Seven will include veteran Peter Shipway, America’s Cup and Rolex Sydney Hobart Line Honors winning skipper Jimmy Spithill, and Olympic Silver Medallist Lisa Darmanin. The Rolex Sydney Hobart Yacht Race coverage will air on 7mate, Seven, and 7plus starting at 12:30 pm AEDT on December 26.


The Media Production & Technology Show and SMPTE UK Announce the Launch of the Media Technology Conference

The Media Production & Technology Show (MPTS), the UK’s largest event for the broadcast media, production and technology industry, and SMPTE®, the home of media professionals, technologists and engineers, have announced the launch of a new technology conference for broadcast leaders that will take place on 15-16 May 2024. The Media Technology Conference will bring together senior technology leaders and experts in the field of broadcasting and innovation for two days of networking and knowledge-sharing. This new conference will be co-produced by SMPTE and Media Business Insight (MBI) Ltd, publishers of market-leading publications and events.

At the heart of the event will be an expertly curated conference programme that will feature a mix of insightful keynote presentations, interactive panel discussions, hands-on workshops and technology business insights. Topics will include emerging technologies, AI in media, cloud for live production, IP workflows, virtual production, security, skills, sustainability, and many more. The conference will be free to attend, but spaces will be reserved for senior technology leaders only. Registration for the event will launch at the beginning of January 2024.

Charlotte Wheeler, MPTS event director, said, “The broadcast industry is at the forefront of technological innovation, whether that is in live sports broadcasting, the use of AI, remote production, virtual studios, post-production editing or sound remixing. As the UK’s largest industry trade show, we want to provide a platform to bring together the most senior technology professionals who are leading this change to explore the latest trends and innovations, as well as plan for the future. Partnering with SMPTE UK will ensure the programme focuses on the most cutting-edge technology topics and that the event attracts senior attendees from around the world.”

John Ellerton, chair of SMPTE UK, added, “The Society of Motion Picture and Television Engineers is delighted to partner again with MBI for what promises to be a very exciting 2024 Media Technology Conference. We’re bringing the best of the global SMPTE community to London to share knowledge and insight with senior leaders in the UK and the European media industry. And by hosting this as part of MPTS 2024, this is certain to be the most successful SMPTE UK conference ever.”

The event will take place alongside the MPTS exhibition and content programme, which will continue to provide over 250 hours of exclusive and free-to-attend sessions across six theatres.



ETC equipment enhances Al Hadath TV studios in Riyadh

In a strategic move towards enhancing its broadcast capabilities, Al Hadath TV has partnered with ETC dealer Oasis Enterprises to outfit its newly launched studios in Riyadh, Saudi Arabia, with state-of-the-art ETC control, networking, and fixtures.

As a news channel specializing in both regional and global current affairs, Al Hadath has embarked on an extensive expansion, culminating in the establishment of its studio. Oasis Enterprises collaborated closely with Al Hadath to realize the vision of a modern, dynamic space equipped with advancements in LED technology.

The chosen fixtures for the project were ETC’s versatile ColorSource Spot jr’s, seamlessly controlled by the Eos Apex console, renowned for its powerful color tools and award-winning software. To facilitate networking and robust data distribution in the studios, Response Mk2 DMX Gateways were selected after a meticulous year-long planning process for the modern facility.

ETC’s latest line of Eos consoles offers a powerful programming surface packed with advanced technology, enabling easy manipulation of on-camera colors. The ColorSource Spot jr’s, with their small form factor and stylish design, proved to be a flexible fixture for any-sized space, seamlessly integrating into the newsroom ceiling. The flicker-free mode ensures impeccable UHD content. The entire system operates on a backbone of Response Mk2 DMX Gateways for data distribution, allowing for effortless configuration and error reporting over RDM directly from the console.

Karim Abdel Massih, Oasis Lighting System Manager, shared insights into the project, stating, “Our goal was to enhance the visual experience and flexibility of the studio, and we achieved this using the superior capabilities of ETC’s products. The Eos Apex console, the latest in the field, brought tremendous value to the project, meeting the operator’s requirements seamlessly.”

Karim further emphasized the project’s significance, stating, “The Al Hadath studio project reflects our commitment to pushing the boundaries of technology, creating an immersive and dynamic space for unparalleled visual experiences.”


Leyard Europe creates state-of-the-art working environment for students and researchers at the Joint Research Center Zeeland

Five years on from the signing of a letter of intent in early 2018, the Joint Research Center Zeeland (JRCZ) officially opened its doors this summer. The brand-new research institute, adjacent to the HZ University of Applied Sciences in Middelburg, the Netherlands, has been created for technical study programmes. Offering modern laboratories, JRCZ includes a data lab with a Google-like vibe, state-of-the-art chemistry and engineering labs, and an ecology lab for saltwater research, all equipped with the latest technologies. Playing a major role in the technical infrastructure, Leyard Europe has supplied multiple visualisation solutions for the facility, installed by integration company Ocular. An innovative curved LED solution is utilised both for regular meetings and interactive group VR sessions using active 3D glasses.

JRCZ was established to analyse big data from varied sources, with a special focus on the Zeeland area in the southern part of the Netherlands, with its locks and waterways informing its research. To achieve this goal, a state-of-the-art, multi-zoned auditorium was designed to provide visitors and students with an immersive experience of the data and its organisational parts. The auditorium boasts a large Leyard MGP Complete™ 163” screen, which serves as an impressive presentation area. Once the presentation is over, visitors can then move to the next room, where they will be amazed by the pièce de résistance created by the Ocular team – a large, curved LED canvas solution that provides a stunning visual display. Arne Van Vyve, Project engineer at Ocular since 2011, collaborated with JRCZ and Leyard Europe to design the innovative LED solution for the research centre.

“As with most great ideas, this one didn’t happen overnight,” Arne Van Vyve explains. “It involved a long preparatory process in which we worked towards one goal - to create the ultimate immersive experience for co-creation. The idea of constructing a curved LED wall came gradually, born out of several brainstorming sessions with our general manager Nicolas Vanden Avenne and the Leyard Europe team. The idea immediately appealed to me on paper, but of course the actual implementation was a bit more intricate and required a great deal of planning and execution.”

“Initially, the tech team at the institute looked at a projection solution,” says Cris Tanghe, Vice President Product Europe at Leyard Europe. “However, we suggested they consider an LED system instead as it offers a reduction in the depth required for installation. We also felt it would be more appropriate for a future- and research-oriented facility like JRCZ.”

Arne Van Vyve explains that Ocular had already established a long-standing relationship with Leyard Europe, a leader in high-end LED solutions, when designing a specific LED solution for this project. Together with Leyard’s main point of contact, Cris Tanghe, the team explored various possibilities for creating immersive 2D and 3D experiences. “Eventually, we settled on the idea of a 5.4-meter-wide, 2.4-meter-high curved LED wall, with a radius of 2,377mm. The smaller the radius, the more pronounced the curve,” he adds. The curved LED wall has a total resolution of 4320 by 1620 pixels, with a pixel pitch of just 1.2 mm. “This means that you can barely tell it’s an LED wall if you stand just a metre and a half away. The quality of the display is exceptional, making the content look remarkable. The configuration provides visitors with a fully immersive and interactive experience,” continues Arne Van Vyve.



“By using LED technology, the visual experience is fully optimised,” states Tanghe. “As there is no reduced contrast due to ambient light, the colour space is also maximised, producing bright and vibrant visuals. LED also has a lower footprint and higher return on investment compared to rear projection solutions. It can fit in a regular office space and has a very long operation lifetime.”

As the LED wall is also active 3D, VR glasses can be worn, and motion tracking allows visitors to be fully immersed in the experience, and the content and data being shown. This ‘VR studio’ is a seamless, curved, digital canvas giving participants the sensation of absolute immersion in the content being displayed. Tracking is available via OptiTrack, allowing the eyepoint of participants wearing 3D glasses to be precisely tracked and adjusted in real time to the content displayed.

The unique structure of the screen, combined with its impressive tracking solution, allows it to be used for training sessions and workshops, including the display of 3D models on the LED wall. “Furthermore, the JRCZ team uses the curved wall for 2D online 21:9 MSTFR video calls,” exclaims Arne Van Vyve.

Next to the LED wall, there are multiple 75” LCD touchscreens deployed. On top, a large interactive LCD video wall of 4m x 1.2m was deployed, allowing users to examine various types of documents, images, and videos. A camera facing the audience and another facing the wall allow remote participants to view the content and engage in conversations or brainstorming sessions from anywhere. This creates an interactive experience for everyone involved.

Nicolas Vanden Avenne notes that if they had chosen to partner with a different screen supplier, they would have had to work with multiple companies because of the variety of the products required. Another, and major, deciding factor was that Leyard Europe has offices and a factory in Europe, which was particularly advantageous due to global supply chain shortages. “The local European production, which guarantees us short lead times and premium quality, gave us and the client peace of mind and allowed us to meet the project timelines,” says Nicolas Vanden Avenne.


Hubei TV trusts in Ateme’s TITAN solution for its MCR resource pool project Chinese Hubei TV choses Ateme’s solution for its Master Control Room (MCR) resource pool Introducing a flexible hardware allocation model, the Ateme solution brings unprecedented efficiency to Hubei TV station’s broadcasting operations Ateme, a global company specialized in video compression, delivery, and streaming solutions with innovation at its core, today announced that Chinese TV station Hubei TV chose Ateme’s TITAN encoding and transcoding solutions for its Master Control Room (MCR) resource pool


project. Hubei TV faces the daily challenge of managing a variety of signals from numerous studios. Ateme’s solution for the MCR resource pool project enables efficient encoding and transcoding of 4K and HD channels into different bitrates for diverse destinations. Beyond any-input and any-output flexibility, the MCR resource pool also introduces a flexible hardware allocation model, ensuring optimized functionality and significant cost savings for Hubei TV station. This innovative approach allows the

station to allocate licenses to different hardware resources in various locations, eliminating the need for constant server transportation, thereby reducing operational costs. “The last few years have been tough for the broadcast industry,” said Xiao Zhang, Sales Director at Ateme. “Competition is intensifying and it’s essential to have lean operations. We are excited to support Hubei in implementing their innovative MCR resource pool, addressing the dynamic needs of the broadcast industry.” 



Sony XVS-G1 live video mixer enhances production at Hessischer Rundfunk’s State Parliament Studio In the heart of Wiesbaden’s state parliament studio, Hessischer Rundfunk (hr) has embraced the future of live video mixing with the integration of Sony’s XVS-G1 live video mixer in its control room. The XVS-G1 serves as the central hub, managing all incoming signals and distributing them to two cross-media control rooms. At the rear, a meticulously organized array of green signal cables connects to the mixer’s 44 inputs and 24 outputs. Heiko Herre, the mastermind behind the system and project planning, explains, “The XVS-G1 seamlessly handles signals from 11 cameras and other studios above, ensuring a smooth production workflow.”

Crafted to meet the dynamic demands of live productions, ranging from sports and news to entertainment programs, the XVS-G1 proves its mettle in small to medium-sized studios, outside broadcast vehicles, and mobile production units, as the company states. Reaching up to 4 M/E, 44 inputs, and 24 outputs in HD/1080p mode or 2 M/E, 24 inputs, and 12 outputs in UHD mode, the mixer flexibly handles 1.5G, 3G, or 12G signals with standard format converters for inputs and outputs.

Andreas Berghaus, Product Specialist at Sony Europe, underscores the significance of the XVS-G1 in live productions. The XVS-G1 introduces new features, including an internal clip player and the ability to convert SDR/HDR signals at both input and output. According to the company, the innovation behind the XVS-G1 lies in its hybrid structure, integrating a central processor (CPU) and high-quality FPGAs alongside an optional graphics processor. This synergy ensures performance in signal and image processing with minimal latency. The mixer retains established features found in larger XVS systems, such as the 2.5D resizer, chroma key, internal image memory, ME split, multi-PGM, or macro, snapshot, and keyframe registers.




Mo-Sys’ NearTime® delivers cost-efficient VFX solution for Dr Who 60th Anniversary Special

Mo-Sys Engineering has announced that NearTime, its automated re-rendering service for on-set virtual production, has been utilised to deliver stunning VFX scenes for the Dr Who 60th Anniversary Special at a fraction of the cost. Painting Practice, an award-winning boutique studio known for giving filmmakers the freedom to develop their ideas and extend creative possibilities, together with leading CGI, animation and VFX specialists RealTime, approached Mo-Sys in 2022 to form a new cost-efficient workflow for a particularly VFX-heavy special episode of Dr Who. James Uren, Mo-Sys Technical Director, said: “This ambitious diamond anniversary Dr Who special was set to have more than 250 VFX shots, meaning traditional VFX approaches would be cost-prohibitive. So, we had to think differently.”

Painting Practice use Unreal Engine to create animated pre-visualisations of complex VFX sequences. Could this be extended to whole scenes or a whole episode? Could we take those pre-visualisations and bring them onto set with the real camera? Could we use Unreal Engine right up to the ‘Final Pixel’, automatically re-rendering what had been filmed on set?

RealTime and Mo-Sys had recently collaborated on Netflix production Dance Monsters, where 6 cameras and 8 monster characters were combined in real time, also using the Unreal Engine and Mo-Sys VP Pro, to film it as a live light-entertainment show. As part of this, Mo-Sys and RealTime developed a pipeline for transferring the precision camera and lens tracking data from Mo-Sys StarTracker through to post-production, to help automate and dramatically speed up the VFX workload.

In parallel and in partnership with AWS, Mo-Sys had built and patented its NearTime solution. NearTime offers a dual workflow which enables automated Unreal re-rendering in the cloud. Tracking data is re-rendered with background plates and delivered back with increased quality and/or resolution in the same VFX delivery window, while making on-set renders available for real-time feedback.

Putting pre-visualisation, on-set camera tracking, real-time pre-viz, NearTime rendering and automated VFX pipelines together meant this VFX-heavy special could be completed with stunning visuals throughout at a fraction of the cost of traditional VFX.



NEWS | BUSINESS & PEOPLE

Riedel Communications opens new state-of-the-art tech hub in Berlin

RIEDEL COMMUNICATIONS EXPANDS GLOBAL PRESENCE WITH NEW TECHNOLOGY HUB IN THE HEART OF BERLIN.

Riedel Communications has unveiled its latest Engineering Hub, the “Technology Hub Berlin,” strategically located at Checkpoint Charlie. This expansion, complementing existing hubs in Wuppertal, Eisenberg, Vienna, Zurich, Montreal, and Porto, underscores Riedel’s unwavering commitment to advancing R&D activities tailored for the broadcast and event industry. Each hub zeroes in on specific facets of engineering and innovation, with a strong focus on adapting to industry dynamics, including the shift to IP-based media signal transport in broadcasting. The Technology Hub Berlin is poised to lead the charge in developing forward-thinking, cutting-edge audio systems, leveraging the expertise of former employees from Jünger

Audio. This team, known for their extensive experience in crafting technologically advanced digital audio processors and algorithms, joined Riedel in 2019. Their expertise spans live production, television, radio stations, and streaming services. The hub’s Innovation Lab, powered by creative talents, spearheads product innovation in software-based media processing, networking, virtualization, and deployment through inventive ideas and preliminary developments. In addition, the Technology Hub Berlin is home to a dedicated system engineering team that focuses on defining, simulating, and developing next-generation wireless communication solutions. With a current team of eight professionals, the hub actively seeks international

talents to join its ranks in the vibrant and culturally rich capital of Germany. The office, designed to accommodate up to 15 people, aims to become a focal point for talent across various specialties, propelling product innovation at Riedel. “Berlin provides the ideal backdrop for our Technology Hub, offering a diverse pool of talent and fostering an environment conducive to innovation,” commented Peter Glättli, Executive Director R&D at Riedel Communications. “We are excited about the current team of experts and look forward to welcoming additional international talents to contribute to our continued success in developing groundbreaking solutions for the broadcast and event industry.”




TVNZ boosts personalized advertising while protecting viewer data and privacy with AWS Clean Rooms

TVNZ is the first media company in New Zealand to adopt the AWS data collaboration service to power personalized advertising.

At AWS re:Invent, Amazon Web Services (AWS) announced that Television New Zealand (TVNZ), the state-owned and commercially funded television broadcaster in New Zealand, is the first media company in Aotearoa to adopt AWS Clean Rooms to transform advertising management. AWS Clean Rooms is an analytics service that helps companies in industries such as advertising, media and financial services collaborate and jointly analyze their customer data sets without sharing or copying the underlying data.

In the global media and entertainment industry, 71% of consumers expect personalized interactions. Data collected directly by a content provider about its viewers and audience, such as audience metrics, engagement data and viewing preferences, is increasingly valuable to advertisers and critical to delivering a personalized viewing experience. The privacy of this data is paramount to TVNZ, and AWS Clean Rooms helps the broadcaster achieve this. With viewer consent, TVNZ collects audience data, such as program preferences and viewing trends, and collaborates with its advertising partners, such as McDonald’s and media company OMD, to analyze these collective, anonymized data sets in a protected environment. This generates unique customer insights, such as identifying the types of programs McDonald’s customers watch on the TVNZ+ mobile app. These insights help TVNZ and its advertisers improve their advertising and media strategy, driving audience engagement and more personalized loyalty programs.

TVNZ also uses Amazon Personalize, a machine learning (ML) service that enables companies to create personalized user experiences in real time at scale, to recommend content based on individual viewing behaviors. For example, a viewer watching a rugby documentary may receive recommendations to watch other sports programs or programs popular among other sports fans.

TVNZ worked with Slalom, an AWS Advanced Consulting Partner and global technology consulting firm, to implement AWS Clean Rooms. In just six weeks, Slalom understood the broadcaster’s requirements and deployed AWS Clean Rooms to enable TVNZ and its advertisers to deliver personalized content more securely and cost-effectively. Key was the ability to share data in a secure, modern data sharing environment that complies with TVNZ’s robust data governance and privacy policy, which prevents the broadcaster from accessing advertisers’ private consumer data.




Grass Valley elevates financial leadership with Michael Prinn as new CFO

Grass Valley® has recently announced the appointment of Michael Prinn as the company’s Chief Financial Officer (CFO), effective December 4, 2023. Prinn brings a wealth of financial acumen and executive experience cultivated from notable roles as CFO for technology powerhouses such as Poplar Homes and SeaChange International. Bringing a track record of success in executive leadership and team building, particularly in the realms of automation and process efficiencies, Prinn’s addition to the Grass Valley team signifies a significant milestone as the company propels into the growth phase outlined in its strategic plan.

Jon Wilson, Chief Operating Officer at Grass Valley, remarked, “Mike’s appointment as CFO represents the final piece in our forward-looking leadership team. His proven track record in steering companies through transformative and growth phases aligns seamlessly with our strategic goals. His joining reinforces our commitment to enhancing operational capabilities and driving our growth strategy forward.”

Prinn’s extensive background in steering financial strategies, coupled with a knack for driving operational effectiveness, aligns seamlessly with Grass Valley’s vision for future growth and success.

Net Insight and Teracom forge framework deal for critical mobile network time synchronization

Net Insight has recently secured a significant framework agreement with Teracom, focusing on the deployment of its time synchronization solution Zyntai. The agreement, valued at over SEK 10 million, is dedicated to synchronizing Teracom’s mobile network for critical services and spans the years 2023 and 2024. Teracom Group, a state-owned entity providing secure communication services, has chosen Net Insight as its supplier

after a meticulous evaluation process. The requirement is for a robust and secure time synchronization solution independent of GPS/GNSS for Teracom’s mobile network dedicated to critical services. Net Insight’s Zyntai, a new synchronization platform, aligns with Teracom’s high-performance and security standards for time synchronization in its critical mobile network. This strategic agreement marks a significant step forward for Net Insight

in expanding its presence in time synchronization for critical networks. The initial phase, with a project value exceeding SEK 10 million, is slated for delivery throughout 2023, with the majority expected in the first half of 2024. “We need a GPS/GNSS-independent solution that is robust and easy to implement over our existing infrastructure, which Zyntai can provide,” says Gunnar Bergman, head of networks at Teracom.



SPORTS | FIBA




As we explore sports broadcasting, TM Broadcast introduces to its readers an exclusive interview with Stephanie Mignot, FIBA Media’s Chief Operating Officer. During this conversation we delve into the dynamics of FIBA’s broadcasting processes and the cutting-edge technology that brings the thrilling game of basketball to audiences worldwide. As the global governing body for basketball, FIBA continuously strives to enhance the viewer experience, seamlessly blending the excitement of the sport with state-of-the-art broadcasting techniques. We invite you all to join us in gaining insight from a sports broadcast expert who plays a pivotal role in orchestrating the intricate dance between technology and sports, shedding light on the innovative strategies and advancements that make FIBA broadcasts a truly immersive spectacle.




What are the peculiarities of FIBA and basketball game transmissions? We could say that there are many different peculiarities. Of course, to start with, basketball is a very different sport to football. With international basketball, we are very often playing


in general entertainment arenas that are not dedicated to the sport, so they have to be specifically set up for basketball broadcasting. In terms of the game format, it is also less predictable. For instance, teams can call time-outs to discuss strategy, during which there will usually be entertainment for the crowd in the arena, so we have to find ways to deliver a compelling broadcast product during these times. Amongst other activities, FIBA organises the national team basketball competitions. National pride brings emotions like nothing else. All basketball fans and followers know how fast and spectacular the game of basketball is as well. We are working hard on the storytelling, always emphasizing those three key elements.


Also, if spectators don’t understand how basketball works, journalists have to guide them through the game. So, everything is about the storytelling, as you said.

Oh, every game is different, and storytelling is key. Another thing which is still very unique to FIBA is our online Broadcast Academy. We want to keep the coverage as consistent as possible, which is a challenge when working with so many different host broadcasters across the world. This is why the FIBA Broadcast Academy is so important. It is a revolutionary, innovative and cutting-edge online educational tool, which has been developed in order to guide and educate users in the best-practice production of a basketball game.

Basketball is a truly global game and, as with any global sport, the styles and ways of broadcasting the games vary significantly around the world. The FIBA Broadcast Academy is the destination for basketball directors to visit and understand the basic fundamental principles and skills required for the optimum coverage of the sport of basketball. The site content has been developed through the analysis of the production of basketball games from around the world and the input of basketball directors that have been responsible for FIBA World and Continental Championships, Olympic Games as well as the NBA.

Right, then it’s something like guidelines for best practices.

Exactly. It is a very practical tool, with a lot of examples, and very visual. Directors and production crews around the world can freely access it online, anywhere, anytime.

What are the human and technical resources that FIBA usually deploys for covering the games? Could you detail in which solutions FIBA trusts for image and audio capturing






and managing? Does FIBA deliver video in HDR 4K?

We have very different production standards depending on the game we have to cover. We have a basic standard production format, which is five main cameras, two fixed cameras on the backboards, one beauty shot, replays with a minimum of two replay operators and, of course, sound assistance and a sound mixer. And then we go step by step to a premium standard production, which includes as well two super slow-mo cameras and a team bench camera. We finally go up to the FIBA Basketball World Cup, which is the top-tier event for FIBA, where we produce the live games with up to 36 cameras. If readers had the opportunity to watch the last FIBA Basketball World Cup that took place last September in Manila, they could see many different and unseen angles using the 36 cameras with 12 replay operators.

What kind of cameras?

They are usual broadcast cameras, but also a lot of specialty cameras. We have aerial cameras, crane cameras and rail cameras. We have a lot of behind-the-scenes cameras as well as robotic cameras in the corridors. We had a few innovations for the FIBA Basketball World Cup where we had a camera in the backstop unit to have shots from the court level. We have something that I think is now




being used more and more but was quite unique at least when we started in 2019: it’s a rail camera on the opposite side to the bench, that is following the game, and it has

a very particular inside view of the game. It’s at the same level as the court, so it has a very specific angle. We can show how fast and how precise the flow of the team is coming from one end to the other end of the court. It gives very spectacular super slow-mo clips. We are using cinematic cameras as well. As we believe emotion is the one key element of the live coverage, such cinematic cameras can definitely add value, as well as the ultra-super slow-mo cameras. They add very emotional shots that you can use as a replay element.

You said the emotion is key. Nowadays, there’s a lot of younger audience that uses TikTok or other kinds of devices for watching sports competitions. Is FIBA working on developing different formats for attracting younger audiences?

In general, FIBA has an extremely strong presence across the primary social media platforms, which helped deliver record levels of engagement during FIBA Basketball World Cup 2023. A lot of that short-form game content is created and delivered by FIBA Media via WSC’s clipping technology, both during and post-game, to ensure that the best assets are available. In addition, we have a dedicated Digital Marketing team who creates a lot of digital short-form content to promote our broadcasters. For instance, there was a specific team in the arenas at FIBA Basketball World Cup 2023 who produced tailored highlights packages for star players from unique angles and with an assortment of interesting graphics. These were provided directly to the individual players via our platform partner Greenfly, who posted them to their significant number of social followers.




Does FIBA work in partnership with broadcasters and media groups when covering the games? Yes, we do. Depending on the competitions and the rights agreements in place, we work with broadcast partners and/or different production companies to cover the games around the world. As there are very many different parties involved, the FIBA Broadcast Academy is still the unique tool to make sure that everyone is briefed and coordinated and has a basic understanding on how we at FIBA like to deliver the games. How is the signal distributed? On which network infrastructure does FIBA rely? Does this change according to the circumstances of what you were saying, according to the place? Most of the games are still delivered on satellite as it is still offering the widest coverage to our many takers around the world. And because we are as well producing the games from very many different venues that are not always connected to a big fibre pipeline, we are still relying most of the time on a satellite distribution.


Nevertheless, as much as we can, we are also distributing the feeds via SRT, a technology that we have now been using for the last couple of years. For example, for the FIBA Basketball World Cup, we have, as well as satellite, used the SRT as a main path for distributing the international feed of the games that we are producing, as well as the superfeed, which is a second simultaneous feed that is produced to cover all the backstage and all the elements that are not used during the live coverage.

Because we have so many cameras and so many moments that cannot fit within the live game, we have created this second simultaneous feed for the major FIBA events and this was very successfully delivered via SRT. How do FIBA mostly produce their games? Remotely, locally, hybrid? Which is the best option according to your point of view? We usually try to produce locally. Whenever we can, we are going remotely as well.



This is because sometimes it is the only possibility we have due to the lack of local activities or local broadcast equipment and knowledge. We are very much agnostic, meaning that we are always trying to find the best solution, the one that warrants the highest quality. Obviously, we have a budget to manage, so it’s very much an à la carte solution. We don’t have a set way to do things. Whenever and wherever we are, we are always trying to find the best possible solution for the best possible price.


So according to your point of view, there’s no better option. It’s just according to the circumstances, the resources and the place. Which production or achievement in broadcast is FIBA most proud of?

We could say right now, because we have just finished the event, it would be the FIBA Basketball World Cup that was delivered in Japan, Indonesia and the Philippines. We are very proud of having been able to deliver 92 live games very successfully within 16 days of competition and with such a huge production planning. We had four different venues for the group phase with 24 cameras in each of the venues, and then the final one was up to 36 cameras. So, I think we can all be very proud of the final result, which was appreciated by basketball fans all over the world.

We suppose the language and different cultures were an issue to coordinate all the broadcasts.

It was sometimes challenging, yes, because we put together a solution per venue, using different production companies with production crews that we had appointed. They were very good

international crews with different technical solutions. However, we still have to manage and work with local people as well, because they deliver the event for us. So yes, there are a lot of different challenges. At the end of the day, it is a team effort with great team spirit, and we always manage to finally deliver a very good broadcast production that we can be proud of.

Does virtual production play any role in FIBA’s broadcasting style?

We are not using so much of it. But again, at the recent FIBA Basketball World Cup we developed an opening title that used some virtual production. We had 92 different opening sequences, one for each game, so everything was totally customised to each game. Thanks to the technology and a big team effort, we could deliver this amazing dedicated opening sequence.

So, what kind of solution does FIBA use for managing workflows efficiently?

We are using an auto-clipping system provided by WSC because, as I mentioned




earlier, we are producing a lot of additional non-live content from the games. A top-class auto-clipping solution is absolutely essential for us to deliver all of the required non-live additional content. The auto-clipping system allows you, once again, to customise the clip to whomever you have to deliver that piece of content. The system is very convenient and helps us a lot. We are also using virtual production for everything which is non-live content with a solution provided by NEP and the Media Bank. So those are the two technology providers, the two workflows that are helping.

What do you think about the cloud? About production in the cloud? Does FIBA work with cloud production?

We are very interested in such a solution but the challenge for us always remains the connectivity from the origin of the event. Because the venues change for every event and there are multiple venues even for one event, it’s very hard for us to guarantee that the appropriate connectivity is in place, at a reasonable cost, to be able to use a cloud


solution. It’s still a work in progress and we have not yet managed to use it so far but we are obviously very interested and always checking to see what we can use in the future. So how does FIBA treat live graphics? In which solutions are FIBA creating and managing them? The graphics are a very big element to the game, and they have to represent the

brand as well. The branding has to be consistent and also the information displayed has to be clear, concise and relevant to the viewer. We centrally create and manage the graphics. The creativity is usually through a tender process when we select a creative agency and develop a complete graphics package for all elements required. We then look after the onsite technical solution, which implements the graphics into



the live game. Although we are delegating the production to some broadcast partners from time to time in different events around the world, the graphics solution is always something that we aim to control centrally. We can then ensure that all the templates are implemented correctly and that we are connected to the live stats information in order to display the graphics in the correct way with the correct information. It’s very much centrally created and managed. Specifically for the recent FIBA Basketball World Cup, we developed some of our own statistics-based graphics in-house.

When you can, you manage it locally. It depends on the game. Okay, so... it’s very much more centrally than locally, I would say, right?

If it is local we will appoint different providers, but we still try to make sure that we are always in control.

That makes sense. If you could ask for a wish to a genie in a lamp for overcoming a recurring issue when covering basketball games, what would you ask for? What do you think technologies and manufacturers can do to improve the coverage?

Lighting is obviously a key element to basketball production. Or actually to any production. But particularly in the venues where we have our events, the lighting setup is still very much a key element to have a successful broadcast because it has such a massive impact on the coverage quality. If we had a wish, the first would probably be to have all the basketball venues around the world with a top-quality, basketball-friendly lighting system. This would improve the coverage massively. With the best lighting you could have a really good production with maybe only five cameras. But without good lighting, a really good production will still not look good to the viewer.

We are also always working on the best solution to access the accurate on-site scoreboard data in real time to make it immediately available throughout the world. That’s one area that could be helped by future development and technology.

What technological developments are you planning to deploy in the near future? Maybe you already gave us a hint with AI and managed data.

Yes, AI would be one. Also using some form of player tracking system for the viewers to get closer and understand what the players are doing on the court. For the last FIBA Basketball World Cup, we had access to statistical graphics that were derived from the StatsPerform player tracking system. The players are the stars of the game, so getting closer to them, to their emotion, to their audio, to whatever they do on the court would always be interesting to the viewers.

So, the last question: from your point of view, what will be the evolution of broadcast technologies and production models linked to the best basketball game productions?

We recently tested an LED basketball court at a junior event in Madrid. It is a great innovation and opens up enormous opportunities for the future, all of which will lead to a more enhanced basketball broadcast production for the fans to enjoy even more. Watch this space!



PLATFORMS | BLINX




A REVOLUTION IN DIGITAL BROADCASTING

In an exclusive interview with TM Broadcast, Fadi Radi, Chief Creative Officer of Blinx, delves into his remarkable journey in the broadcast industry spanning over 25 years. From his early days at Al Arabiya to founding Blinx, Radi discusses pivotal moments in his career, transformative initiatives, and the unique blend of creativity and technology that defines the new platform for young audiences in the MENA region. This interview unveils for our readers Radi’s strategic approach, the challenges he has faced, and the groundbreaking role Blinx plays as the world’s first digital-native storytelling hub. The Blinx CCO provides insights into the technological backbone of Blinx, including the central role of NDI technology, cloud-based newsrooms, and the integration of AI and virtual production. Come and read about a case of success on the first try, learning how experience and passion combined turn out a winning platform.




How and when did you start your career in broadcast? We have seen you have a “substantial background in leading and managing both broadcast and creative operations”, as you point out in your LinkedIn profile, and you came from Al Arabiya, before working for Middle East Broadcasting Corporation and finally founding Blinx. What do you highlight regarding your professional evolution? Which developments are you most proud of?

As a Strategic Creative Innovator with over 25 years of experience in the MENA region, my journey has been marked by a relentless pursuit of excellence, innovation, and transformation. Currently serving as the Chief Creative Officer, I’ve been instrumental in shaping the digital broadcasting landscape, most notably with the inception and success of Blinx.com, a platform that stands as a testament to the seamless amalgamation of creativity and technology. A pivotal aspect of my role has been to lead transformative corporate initiatives. One such initiative was the strategy with Blinx.com, where we integrated


FADI RADI, CHIEF CREATIVE OFFICER OF BLINX

traditional broadcasting with digital realms, leading to a significant uptick in audience engagement. Another notable initiative was the Content Localization Drive, which emphasized crafting content that resonated universally, striking a balance between global appeal and regional nuances. This approach saw a boost in viewership ratings, especially among younger demographics.

However, every journey has its challenges. Adapting to rapid technological evolution, navigating cultural intricacies in content, and ensuring team well-being during intense project phases were hurdles I had to navigate. Strategies such as comprehensive upskilling programs, establishing cultural review boards, and introducing flexible work schedules proved invaluable in addressing these challenges.

Feedback from clients and collaborators mirrors my commitment to excellence, citing the transformational impact of Blinx as a benchmark in digital broadcasting and describing our campaigns as a harmonious blend of art, culture, and branding. My leadership style, described as "Transformational and Collaborative," has been a cornerstone of my approach. Fostering a vision, empowering teams, promoting continuous learning, and leading by example are principles I've always championed. I believe in the synergy of strategy, creativity, and technological prowess. Understanding a brand's essence, conducting market research, collaborative ideation, and integrating emerging technologies form the bedrock of my method in building state-of-the-art brands.

In conclusion, my career has been a rich tapestry of experiences, from leading groundbreaking campaigns and steering innovative corporate initiatives to navigating challenges and setting industry benchmarks. With a blend of visionary thinking, technical expertise, and a human-centric approach, I've been privileged to shape narratives,


influence trends, and contribute meaningfully to the broadcasting and creative domain in the MENA region.

How did you come to create Blinx? What opportunities did you see in the market, and what is Blinx's approach to the broadcast industry? What are the main differences with traditional broadcasting?

Blinx was founded to address the media consumption

preferences of younger, digitally-savvy audiences. We saw the opportunity to serve this demographic with engaging news and content across diverse platforms like social media, web, TV, wearable devices, and the Metaverse. Blinx differentiates itself by focusing on innovative production technologies and formats that appeal to its target audience.

Blinx is claimed to be the world's first digital-native storytelling hub. What was the strategy to reach this achievement?

Blinx's strategy revolves around utilizing state-of-the-art facilities, including a metaverse studio and an XR studio, equipped with the latest production tools and AI enhancements. This approach enables Blinx to produce culturally relevant, engaging content tailored for its target audience, leveraging modern infrastructure and high-end production workflows.

What role does NDI play in developing and maintaining a fully IP infrastructure for Blinx? In what kind of converters and solutions does Blinx trust?

NDI technology is central to Blinx's operation. It facilitates an end-to-end 4K audio and video chain, allowing seamless transmission of video signals and efficient production processes. This modern infrastructure positions Blinx at the forefront of digital media technology. Utilizing cutting-edge technology, Blinx crafts unique experiences across various screens and platforms. This includes leveraging extended reality (XR) technology and Viz AI to introduce the world's pioneering 'digital twin' within a media hub in the metaverse. With Vizrt's support, our Metaverse studio offers an unparalleled immersive content experience.

Which solutions does Blinx use for developing and adapting content? What are the main differences in the way Blinx creates content compared with traditional content creation?

Blinx uses cloud, AI, and data technologies to create engaging content. This approach allows the platform to deliver news and media that resonates with a young, digital-native audience. Blinx's content is tailored for mobile viewing and is heavy on graphics, aligning with the consumption patterns of its audience. At Blinx, our collaboration with Vizrt began from the very conceptualization of our projects. Our shared vision was to shape the future of




digital TV and platforms. Vizrt joined forces with the startup Blinx, aiming to engage and empower Gen Z and Millennials through impactful storytelling that resonates with them. Moving away from conventional broadcasting, Blinx offers refreshing narratives, crafted by the younger generation for their peers.

Blinx is known for having built a cloud-based newsroom. What can you tell us about the challenges this development presented and how your team overcame them?

Developing a cloud-based newsroom for Blinx involved integrating various technological components, including a hybrid MAM and NRCS system (MIMIR and DiNA, from the Fonn Group) and cloud and AI services, and ensuring seamless operation across multiple platforms. The team overcame these challenges by collaborating with technology partners and leveraging advanced systems for content management and production.

What role does virtual production have at Blinx? And what role does AI play in Blinx's production?

Virtual production and AI play significant roles in Blinx's


operation. The company utilizes XR studio sets, LED screens, and advanced tracking for XR/AR production. AI is extensively used for content personalization, automatic tagging, metadata generation, and automated news writing, enhancing the overall production and viewer experience.

What's the importance of augmented reality for Blinx producers and viewers? How does Blinx use these technological developments to its advantage?

Blinx employs AR graphics and interactive elements, creating high-quality virtual sets and augmented reality graphics. This technology is pivotal in producing content that is both immersive and appealing to the younger generation, enhancing the storytelling experience. We make our audience live the story with our storytellers, not as in the one-to-many model

that has been used for so many years.

What kind of solutions does Blinx use to manage workflows? What's the secret to a streamlined broadcast with a cost-effective approach while delivering disruptive content?



Blinx's workflow is streamlined through the use of advanced tools like Viz Vectar, Vizrt panels, and Mosart automation. The integration of cloud-based infrastructure and NDI technology allows for cost-effective scaling and flexibility in content production and distribution, in addition to control and metadata-rich connectivity.

How does Blinx design and create live graphics? What's the importance of this kind of content for Blinx? With what solution for creating and managing live graphics does Blinx work?

Blinx uses Vizrt tools to create dynamic and engaging live graphics. These graphics are essential for Blinx's content, as they cater to the preferences of an audience that favors graphically-rich storytelling. The live graphics are optimized for different platforms and devices, ensuring the best viewer experience.

If you could ask for a technological development from the broadcast/audiovisual market's main players (or from the lamp's genie), which would it be?

If given a chance, a key wish for technological development in the broadcast industry would be for more advanced, integrated systems that further enhance the ability to deliver personalized, interactive content across various digital platforms, particularly catering to the preferences of younger audiences.

Could you give us a forecast for the near future of the broadcast industry? Where is the market going, in your view, and which technological advances do you think will affect the broadcast industry the most?

The future of broadcasting, as seen from Blinx's perspective, likely involves a greater emphasis on digital and interactive platforms, extensive use of AI and cloud technologies for content creation and delivery, and an increased focus on immersive experiences like AR and VR. The industry is expected to evolve rapidly to cater to a digitally native audience, with technology playing a crucial role in shaping media consumption habits.



TECHNOLOGY| NERF

By Javier Cantón Ferrero, Chief Technology Innovation Officer at Plain Concepts

What is NeRF technology about?

NeRF, or Neural Radiance Field, technology was introduced in 2020 and represents an innovative approach in the field of computer vision and 3D graphics that is used to create 3D representations of environments based on a limited set of photographs. This technology is a great advance over previous techniques, such as photogrammetry, which had been used to generate 3D models of an element or environment using only 2D images.

The core idea behind NeRF is, given a set of 2D images of an object or environment captured from various points of view, being able to infer any new, uncaptured point of view for that object or environment. Until now, to generate 3D environments, polygons have always been used to represent elements, with textures applied to give them colour. This means that, as you approach an element of a scene and need a photorealistic result, an immense amount of polygons and textures with a very high resolution is required, which implies dealing with huge files.



NeRF does not use such a strategy, but instead draws rays for each pixel from the input images (similar to how a magician’s swords do when traversing a box where their assistant is). With that information, mathematical functions are defined to code for each point in space the color that said point has from all angles of view. This means that a point in space can return a different colour depending on where you look from, which is useful for representing certain physical effects found in nature, such as reflections on metals, changes in lighting, or refraction through transparent objects.

All that colour-related information is not stored but encoded in a mathematical function, so for different values in the relevant function, which represent different viewing angles, a different colour is returned, thus achieving incredible information compression as compared to 3D generation based on textures and polygons. All these mathematical functions are encoded within an artificial intelligence (AI) model based on neural networks. Therefore, once the model has been trained on the set of photos from the original environment, we can move a 3D camera to points in said scene from which a photo was never taken and ask the AI model to generate a new image from that new point of view with a photorealistic result, one that also simulates all the physical lighting and reflection effects mentioned above.

It seems like magic, but it works, and companies like NVIDIA were among the first to launch a library, called Instant-NGP, that could be integrated into graphics engines so as to be able to use this type of technology in real time. At Plain Concepts we became interested in this technology at a very early stage and, taking advantage of the fact that we have been working for 10 years on our own graphics engine called Evergine (which we use to create products for companies), we decided to integrate NVIDIA's technology, achieving amazing results in real time, thus helping companies to accelerate the creation of industrial Digital Twins.
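To make the idea tangible, the core of NeRF can be sketched in a few lines of Python. This is only a toy illustration, not the actual algorithm or any real library API: the "radiance field" below is an invented stand-in for the trained neural network, and the ray loop mimics NeRF's volume rendering, where colour and density samples along a camera ray are alpha-composited into one pixel.

```python
import numpy as np

def toy_radiance_field(point, view_dir):
    """Stand-in for the trained network: maps a 3D point and a viewing
    direction to an RGB colour and a density. A real NeRF learns this
    mapping from the input photographs."""
    density = np.exp(-np.linalg.norm(point) ** 2)         # denser near the origin
    colour = 0.5 + 0.5 * np.tanh(point + 0.3 * view_dir)  # view-dependent tint
    return colour, density

def render_ray(origin, direction, n_samples=64, near=0.0, far=4.0):
    """Alpha-composite colour/density samples along one camera ray,
    in the spirit of NeRF's volume rendering."""
    ts = np.linspace(near, far, n_samples)
    delta = (far - near) / n_samples
    colour_out = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        point = origin + t * direction
        colour, density = toy_radiance_field(point, direction)
        alpha = 1.0 - np.exp(-density * delta)   # opacity of this ray segment
        colour_out += transmittance * alpha * colour
        transmittance *= 1.0 - alpha             # light surviving past the segment
    return colour_out

# One ray from a virtual camera: a viewpoint that need not match any photo.
pixel = render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)  # an RGB value with each component in [0, 1]
```

Because the "scene" lives entirely in the function, any camera position produces a valid pixel, which is exactly why NeRF can synthesize viewpoints that were never photographed.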




What uses can it have in audiovisual media? To help imagine possible uses of this technology in audiovisual media, let’s bear in mind its key advantages: ability to capture an environment with photorealistic quality through a set of images, incredible compression of data and generation of images from points of view that were not in the original set. The latter is achieved thanks to the fact that the colour information is stored in mathematical functions that are defined based on the colour of each pixel in the input images and the interpolation between all those pixels to obtain continuous functions. That is, you can ask for any new point of view, and these will always have a color returned for each pixel on the screen. This means that, for example, we can generate videos in which locations are recreated based on a set of photos or a video from which we can extract the frames to generate that particular set. Sometimes, a news reporter covers a piece of news and takes some photos or videos


with their mobile from a location, but that content may not be suitable for display, either because it is too short, because it was recorded in portrait orientation, because the frame rate is not adequate, or because the footage is not stabilized. Using this content as input, we can create a new video of that location with the resolution and aspect ratio we need, regardless of the original. The new video can last as long as needed, featuring camera movements with perfect stabilization. In addition, we can generate as

many videos as we want of this space by making different camera movements until we find the one that best suits our needs. All this is possible because, once the space is captured, the NeRF model can generate frames from any point of view. One of the most interesting free projects for checking the results that this technology can offer in video generation is Nerfstudio, which is very easy to configure and install. In addition, this project is



being used by the scientific community as a test bed for each new advance in NeRF technology.

The use of NeRF in Virtual Productions

The popularity of virtual sets increased a lot during the pandemic, as they became the solution that allowed many major production companies, such as Netflix or HBO, to keep things running. Instead of using the green or blue backgrounds better known as chroma, where sequences are recorded and the environments are then added in post-production, these sets use backgrounds based on gigantic LED screens that project an environment, which is dynamic and changes and adapts to the camera's perspective, creating a sense of real three-dimensionality. This technology is behind large productions such as "The Mandalorian" from Disney+, where it made shooting much easier, since the main character wears a metallic suit that reflects the entire environment. If it had been shot with a chroma background, not only would

the environment have to be replaced in each sequence, but all the green or blue reflections on the character's costume would also have to be removed. However, when using LED screens that project a virtual environment, the suit reflects the light of that environment, eliminating many hours of post-production. Moreover, as these screens cast light on the characters, we can quickly simulate different lighting or weather conditions, without having to wait for the post-production phase to add all this. It is important to note that, as a consequence, the pre-production and post-production phases are mixed up, and in these virtual sets you must have the environments prepared before the takes are shot. For the rendering of these backgrounds, the latest advances in video game engines are currently being used. Therefore, 3D environments based on polygons and textures must be created, making it necessary to devote long development times to achieve photorealistic results that fit well into the scenes. This is where NeRF technology can help immensely. If there

is the need to recreate familiar places in a city or in unusual locations like the Amazon rainforest or the Sahara desert, you wouldn’t need to send someone to take photos and videos, pass this information to a graphic production team, and spend weeks modeling and texturing a 3D environment to get a photorealistic look. It is enough to just send a person to the place and take the necessary photographs to build the NeRF model, which can be generated in a matter of hours. Then, already in one of these virtual sets, we could move the camera through said space to track scenes, and all this with the best photo realism, thus greatly reducing the time required to build the necessary backgrounds for each scene. I am convinced that large production companies will begin to create their own catalogue of environments captured with this technology for reuse in different productions, thus greatly reducing the cost of this technology. In addition, NeRF visualizations can be mixed as if they were layers by means of real-time rendering based on polygons, so it is not about this technology replacing others, but the key will lie instead in getting the best out of each of them to improve production times.




Challenges facing this technology

NeRF is a recent technology, but since its emergence in 2020 it has attracted the attention of many different sectors due to the great potential it offers. It is based on neural networks that require high computing power for real-time inference. The original paper relies on CUDA technology, which only works on NVIDIA cards, and to achieve the best possible results, very well stabilized, high-resolution input images are required. Finally, a significant amount of VRAM video memory is required for training, an amount found only in the latest generation of graphics cards.

All this is essential if we want to make the most of this technology in real time, which is key in sectors such as virtual productions in the audiovisual industry or interactive Digital Twins in the construction and engineering industries.

Although important advances are taking place around this technology, such as the paper presented in 2023 called Zip-NeRF, which allows us to capture larger and larger spaces with increased sharpness, achieving these results in real time remains a major computational challenge with the hardware currently available.

3D Gaussian Splatting

We are living in a fast-paced world where we all want breakthroughs and want them immediately, without being willing to wait for real-time NeRF technology to become available with today's hardware. This reminds me of Ian Malcolm's line in "Jurassic Park": "Life finds a way." In May 2023, a new paper called "3D Gaussian Splatting for Real-time Radiance Field Rendering" appeared. It does not introduce a new NeRF algorithm based on artificial intelligence, but uses the advances seen in the field of NeRF to build photorealistic scenes based on Gaussian splats, or particles used as strokes. This introduces a new scene rasterization system. In 3D, the traditional basic unit used to represent objects is the triangle, and each 3D element is made up of small triangles to which pieces of textures are applied in order to represent the colours of an element. With the 3D Gaussian Splatting technique, on the other hand, scenes are created from small particles, like strokes in a painting, which have different sizes, positions and transparencies. The colour information of each particle is also stored in mathematical functions called Spherical Harmonics, which, as in NeRF, can return a different colour for the particle depending on the angle from which it is viewed. Remember that this is important to be able to simulate physical properties of matter, such as the reflection of metals or the refraction of glass. The great advantage of this technique is that, although it achieves similar visual results, the associated computational needs are much lower. It is as if we had experienced a four-year leap in the ability to visualize NeRF in real time, thus bringing that future to our present.
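The view-dependent colour of a splat can be illustrated with a short Python sketch. This is a toy example, not the paper's implementation: it evaluates only the first two spherical-harmonic degrees (the constants are the standard real-SH values), and the coefficients for the example splat are entirely made up.

```python
import numpy as np

# Real-valued spherical-harmonic constants for degrees 0 and 1.
SH_C0 = 0.28209479177387814   # Y_0^0
SH_C1 = 0.4886025119029199    # shared factor for Y_1^{-1}, Y_1^0, Y_1^1

def splat_colour(sh_coeffs, view_dir):
    """Evaluate one splat's colour for a given viewing direction.
    sh_coeffs: (4, 3) array - 4 SH basis functions x RGB channels.
    view_dir: unit 3-vector from the camera towards the splat."""
    x, y, z = view_dir
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    # The 0.5 offset keeps a zero-coefficient splat at mid-grey.
    return np.clip(basis @ sh_coeffs + 0.5, 0.0, 1.0)

# Made-up coefficients: a grey base with a red sheen that depends on
# the x component of the viewing direction.
coeffs = np.zeros((4, 3))
coeffs[0, :] = 0.2    # base (view-independent) term
coeffs[3, 0] = 0.8    # red channel varies with the view

front = splat_colour(coeffs, np.array([0.0, 0.0, 1.0]))
side = splat_colour(coeffs, np.array([-1.0, 0.0, 0.0]))
print(front, side)  # same splat, two different colours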

Conclusions

NeRF technology has caused a major breakthrough in the computer-aided representation of scenes by achieving amazing automatic results that far exceed those of previous techniques, such as photogrammetry, which entailed significant difficulties in representing materials such as metals or glass. At the same time, productivity has increased with respect to the manual generation of real environments, and the latest advances in a rasterization technique called 3D Gaussian Splatting allow us to take a leap of several years in the possibilities of representing photorealistic scenes and creating videos. The real challenge, as usual, lies in our ability to combine this technology with existing techniques and make the most of its possibilities. There is an important technological leap ahead in the digitization of elements and spaces thanks to these technologies.



TECHNOLOGY| AI IN BROADCAST

The Power of AI in Broadcast By Meghna Krishna, CRO, Magnifi

There has been a pervasive discourse surrounding the use of Artificial Intelligence (AI). Certain individuals have voiced apprehensions regarding the potential hazards it presents to industries, even envisioning catastrophic scenarios, while others focus on harnessing AI to optimize processes and tasks for our collective advantage. Although AI is not a novel concept in the media industry, its role has been redefined to facilitate human workflow. Previously used primarily by journalists and broadcasters as a tool to automate routine tasks, AI is now transitioning into a more sophisticated set of tools. There are ongoing efforts to apply natural language processing to diverse areas within the industry, with outcomes varying in terms of success. The shift is not just about optimizing processes; it has been instrumental in reshaping the landscape of creativity and efficiency in the dynamic world of broadcasting.

Integrating AI and Broadcasting

The convergence of AI and broadcasting creates a powerful synergy. Blending data-driven insights with creative storytelling across the wide-reaching platforms of broadcasting provides the perfect canvas for AI to paint personalized and engaging experiences. This fusion is redefining how broadcasters understand their audience and is unveiling exciting possibilities for the future.

Personalization: The AI Advantage

Generic ad campaigns are a thing of the past, thanks to AI's ability to understand and predict individual preferences based on user behavior. By analyzing data points such as browsing history, social media interactions, and content engagement, AI enables broadcasters to tailor advertisements that resonate on a personal level. This disposition towards hyper-targeted content maximizes viewer engagement.
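The kind of preference modelling described here can be sketched in a few lines of Python. This is a deliberately simplified illustration, not any broadcaster's actual system: the event log, the signal categories, and the weights are all invented for the example.

```python
from collections import Counter

# Hypothetical engagement log for one viewer: (content category, signal type).
EVENTS = [
    ("basketball", "watched"), ("basketball", "shared"),
    ("cooking", "browsed"), ("basketball", "browsed"),
    ("news", "watched"),
]

# Invented weights: stronger engagement signals count more towards a preference.
WEIGHTS = {"browsed": 1.0, "watched": 2.0, "shared": 3.0}

def preference_profile(events):
    """Aggregate weighted engagement signals into per-category scores."""
    scores = Counter()
    for category, signal in events:
        scores[category] += WEIGHTS[signal]
    return scores

def rank_ads(ad_categories, profile):
    """Order candidate ad categories by the viewer's inferred preference."""
    return sorted(ad_categories, key=lambda c: profile.get(c, 0.0), reverse=True)

profile = preference_profile(EVENTS)
print(rank_ads(["cooking", "basketball", "news"], profile))
# -> ['basketball', 'news', 'cooking']  (basketball scores 1+2+3 = 6)
```

Production systems replace the hand-tuned weights with learned models over far richer data, but the principle is the same: engagement signals become per-viewer scores that drive ad and content selection.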

Predictive Analytics for Targeting

The convergence of AI and broadcasting equips broadcasters with the capacity to understand the present and anticipate the future. Predictive analytics, fuelled by AI algorithms, decipher patterns and trends from historical data to forecast viewer behavior. This predictive approach redefines how broadcasters make real-time decisions, optimizing content choice, ensuring the effectiveness of advertising campaigns, and maximizing engagement.

Unified Collaboration

AI-based video editors are revolutionizing the editing process by eliminating the low-level communication essential to legacy solutions. Meta-tags, for example, enable auto-generation of highlights based on specific criteria, saving considerable working hours. These platforms centralize all video editing activities, allowing for seamless collaboration and auto-personalization, where brands can imprint each video with their own identity effortlessly.

The Future of Communication

As the future of communication undergoes a significant transformation, AI plays a pivotal role in pushing it forward. With the active audience becoming central to the entertainment experience, AI-driven technology creates a bridge with XR technology, offering new and immersive virtual experiences. While AI serves as a source of inspiration in the creative process, its true power lies in assisting with mundane tasks and improving efficiency.

Create and Influence

There are currently over 200 million content creators worldwide, and it takes creators an average of six and a half months to earn their first dollar. Only 10% of influencers manage to earn $100K or more annually. As of 2023, the global influencer market is valued at $21.1 billion, and around 84% of the UK's online residents follow influencers or content creators, according to the Statista Consumer Insights Global survey 2023.




The demand for content generated and consumed on social media necessitates automated support to facilitate strategic creativity. It is a numbers game, and success hinges on effectively captivating the audience, which corresponds directly to the pertinence and ingenuity of the content. Broadcasters need to leverage social media to strengthen their connection with viewers and drive traffic to channels and platforms, but they are competing against an ocean of user-generated content. AI tools can help to quickly populate social platforms with engaging clips and heart-stopping highlights.
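The meta-tag-driven highlight generation mentioned above can be sketched as a short Python example. This is a toy model, not any vendor's product: the segment timings and tags are invented, standing in for what automatic tagging would attach to an indexed match recording.

```python
# Hypothetical tagged segments from an indexed match video:
# (start_sec, end_sec, tags attached by automatic tagging).
SEGMENTS = [
    (12, 19, {"three-pointer", "crowd-reaction"}),
    (40, 44, {"free-throw"}),
    (63, 71, {"dunk", "crowd-reaction"}),
    (90, 95, {"timeout"}),
]

def build_highlight_reel(segments, wanted_tags, max_duration=20):
    """Select meta-tagged segments matching the requested criteria,
    in chronological order, until the reel hits the duration budget."""
    reel, total = [], 0
    for start, end, tags in segments:
        if tags & wanted_tags:                  # any requested tag present
            length = end - start
            if total + length <= max_duration:
                reel.append((start, end))
                total += length
    return reel

clips = build_highlight_reel(SEGMENTS, {"three-pointer", "dunk"})
print(clips)  # -> [(12, 19), (63, 71)]
```

With the selection reduced to a tag query, producing a platform-specific cut becomes a matter of changing the criteria and the duration budget rather than scrubbing through hours of footage.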

Transforming Media Workflows

Content workflows in the media industry are on the brink of a substantial transformation. AI-based plugins will revolutionize the linking of visuals with written scripts, potentially making non-linear editing for news-based flat packaging obsolete. Cloud-based video/image processing and direct-to-playout or direct-to-publish services will enhance efficiency, requiring brands to take on roles they have never adopted before.


Embracing the New Era

The broadcast technology industry stands on the cusp of a new era fuelled by AI and digitalization. Viewers can expect ultra-high-definition content, immersive experiences, and personalized viewing recommendations, while broadcasters benefit from AI's efficiency in content creation, advertising, and security. Embracing AI is crucial for industry players to remain competitive and ensure broadcast excellence. As AI continues to develop, its potential to shape the future of audio-video production is limitless, promising a dynamic and innovative broadcasting landscape.

The Future of AI-powered Audio-video Sync It is undeniable that today real-time communication (RTC) has become indispensable, influencing aspects ranging from online conferences to live video streaming, social media, and entertainment. Artificial intelligence (AI) integration in video enhancement stands out as a key avenue for the continued evolution of RTC technology.

Issues like poor lighting, inferior cameras, and bandwidth limitations have traditionally posed challenges in video conversations. AI-driven audio and video enhancement technologies offer solutions to these problems, potentially transforming the dynamics of interpersonal communication and media consumption. For instance, cameras may feature participant framing and resolution upscaling for improved visuals. Intelligent noise cancellation and acoustic fence features in audio equipment work together to minimize interruptions, while built-in smart assistants facilitate the smooth management of hands-free video and voice-assisted recording.

Especially for you: Content Customisation

Leading music and video streaming platforms such as Netflix and Spotify thrive by delivering diverse content to a global audience with varied interests. Employing AI and machine learning algorithms, these platforms analyze user behavior and demographics. Using that data, they tailor recommendations, providing bespoke experiences by suggesting new songs, movies, and TV shows. In the UK, for instance, platforms like BBC iPlayer leverage similar technologies to understand viewer preferences, ensuring a customized content experience for their audience based on individual interests and demographics.

Participatory AI

Generative AI is reshaping creative processes across diverse domains, including visual arts, music, script writing, and video production. Rather than being considered the downfall of art, it is embraced as a new medium that possesses distinct capabilities. The development of generative AI models has prompted important considerations regarding the definition of creativity and how creative professionals can adopt AI to enhance their work processes. These AI tools have the potential to assist creative fields, and understanding their impact requires interdisciplinary scientific investigation into culture, economics, law, algorithms, and the interrelationship between technology and creativity. Explorations are underway into tangible interfaces and physical interactions to enhance generative AI literacy and explainability in design tools. The objective is to empower all creative professionals in their current and future collaboration with AI through the creation of participatory AI systems.

An AI-driven Future

The integration of AI into broadcasting has sparked both concern and excitement, redefining roles and workflows within the industry. But AI isn't just about optimizing tasks; it's a catalyst for reshaping creativity and efficiency in broadcasting. By blending data-driven insights with storytelling, personalized experiences become the norm, transforming how broadcasters engage their audiences. Predictive analytics forecast viewer behavior, while AI-powered tools streamline editing and content creation, amplifying collaboration and personalization effortlessly. As communication evolves, AI becomes instrumental, helping to deliver exciting new immersive experiences and changing the way we experience entertainment. In the race to create more content than ever, AI-driven support becomes essential, helping broadcasters navigate a multi-platform strategy that leverages both long and short-form content. As we embrace this new era, AI's potential remains limitless, promising a dynamic and innovative future for broadcasting and creative industries alike.



TEST ZONE| BLACKMAGIC DESIGN

Blackmagic Design Studio Equipment Synergy. 2 of 2. Because second parts can be good. Yes: today's title is an abstract concept, one that describes those situations in which the whole is more than the sum of its parts. Let's see how many benefits we can derive by combining the right pieces of this puzzle, and how interesting it can be from a profitability point of view. By Luis Pavía




After the first part, in which the functions and qualities of the mixer alone provided a great deal of content, in this second part we will address the functions of the camera and the remote panel, skipping the basic features already covered in part one. We will also delve into all those aspects that are optimized, simplified and


made more profitable by combining the right elements, much in the style of the ancient alchemists. We will end with some ideas for possible scenarios in which these items of equipment make up a winning team.

In reviewing the camera, for instance, we find some aspects that deserve attention. The sensor is the first of them because, in addition to conditioning the lens-mount change, ND filters are now built in for increased capacity. The sensor's resolution increases up to 6K, and so does its size, which is now Super35, so that resolution gains are not achieved at the expense of sacrificing sensitivity. It has a dual native ISO at 400 and 3200, which allows increasing up to a maximum ISO of 25600. With this combination we have a much more versatile camera that will give us better results as the lighting becomes more compromised, always offering 13.7 f-stops (data according to BMD specifications).

We also highlight the change in visual narrative that is achieved with a sensor of greater size and resolution, making it a candidate for productions with a neater, more cinematic style. And since it also shares color science with the rest of the Studio cameras, expanding existing facilities with this new model will not be an issue at all. What's more, now having access to a much wider range of optics, since all lenses with a Canon EF-type mount can be used, allows us even greater flexibility for placement than we had until now, even remotely controlling iris, focus and zoom, if lens compatibility allows.

And, speaking of possible locations, which based on its features can be the most peculiar ones, it is here where we find some room for improvement, and not only in this model, but in common with the entire Studio range: the fixed screen. To be clear, there are no objections as to the quality of the screen or its multiple functions, and certainly the fact that it is fixed allows a very compact and manageable size and weight for the entire set. But exactly for this reason, in our opinion, it entails certain limitations because, with no mobility of any kind, operators can find themselves at shooting angles where their view of the screen is limited. And that is the small contradiction: although in view of its studio concept it is something unlikely to happen, it must be taken into account when making certain creative decisions.


Because this possible limitation is offset by multiple features that are precisely aimed at facilitating operation. The entire menu is managed with clear and easy-to-select options on the touch screen, through a system based on pages, with icons and texts that are large enough to ensure visibility. The main options on the menu are recording, monitor, audio, preferences, settings and LUT. Because, indeed, it is possible to work directly with LUTs on camera. The system supports connections via SDI and Ethernet. In the case of a conventional studio configuration, we have already commented that a single SDI return cable integrates the remote control, the tally and the intercom, in addition to the program return so that operators can see what is being broadcast. Let’s think about the different types of studios: when our conventional cameras are connected to the central control through a fiber link to their CCU, or a triaxial cable in somewhat older installations still in use, we probably have a multicore that can carry all those return, tally and intercom signals. But additional dedicated


equipment will always be needed for these functions. And if our cameras are not connected via fiber or triax, we may need to implement these functions by means of additional cabling or radio frequency systems. In this case, both the laying of additional cables in a sophisticated installation and the coexistence of multiple wireless microphones on stage with the operators’ intercoms can certainly compromise the available radio frequency spectrum, which is always limited, even when working with digital channels. So having all that functionality in only three cables plugged into the camera (SDI out, SDI in and power) is a considerable advantage. And it will be one of the factors that enable use in places and situations in which the technical infrastructure associated with the installation, both due to complexity and economy, could make certain projects unfeasible.

But it is possible to go even further, and in an even simpler way. Literally. It is possible to locate the camera on the other side of the planet by means of a single Ethernet cable. As simple as that. The camera has the ability to generate its own stream without the need for additional devices. And, since it also supports power via PoE (Power over Ethernet), a single cable provides all the functionality. And if our switch does not provide PoE, we also have 12 V power via 4-pin XLR. Although this latter option has some limitations that we must take into account. While the camera is connected to the mixer via SDI, the delay is typically one frame. This will be the ideal configuration to use in auditoriums, stages, social or sporting events and all those live situations in



which viewers are watching or hearing the action as it happens. On the other hand, the Ethernet connection will always have a longer delay. Depending on how critical the situation is, even with all devices connected to the same switch, it would be necessary to assess whether the actual delay in our network is acceptable or not. Because not all needs, not all situations, not all audiences move within the same parameters, we will have to evaluate and decide in each case.

On the other hand, if it is a situation in which we are broadcasting content to viewers who are not watching it “on-site”, a delay of a few seconds will be perfectly feasible.
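As a rough aid to that evaluation, here is a small illustrative sketch (ours, not a Blackmagic tool) that converts frame delays to milliseconds and checks them against tolerance thresholds. The frame rate and the limits are assumptions made up for the example, not BMD figures.

```python
# Illustrative sketch: comparing transport delays against what each
# audience can tolerate. The frame rate and tolerance limits below are
# assumptions for this example, not BMD specifications.

FPS = 50  # assumed production frame rate

def delay_ms(frames, fps=FPS):
    """Convert a delay expressed in frames to milliseconds."""
    return frames * 1000.0 / fps

def acceptable(delay, audience):
    """On-site viewers notice more than a few frames of delay;
    remote viewers tolerate a few seconds (assumed thresholds)."""
    limits = {"on-site": delay_ms(3), "remote": 5000.0}
    return delay <= limits[audience]

sdi = delay_ms(1)        # SDI: roughly one frame, i.e. 20 ms at 50 fps
ethernet = delay_ms(5)   # hypothetical measured network delay, 100 ms

print(acceptable(sdi, "on-site"))       # True
print(acceptable(ethernet, "on-site"))  # False
print(acceptable(ethernet, "remote"))   # True
```

In practice the Ethernet figure has to be measured in each installation, which is exactly the case-by-case evaluation described above.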

Now dealing with combined aspects of the equipment: we are talking about an Ethernet connection in which we do not need more than the camera and the mixer. Not even a switch, because, let’s remember, the mixer integrates its own 4-port switch. A switch will be necessary, though, if we intend to connect a certain number of cameras simultaneously or make a long-distance connection between camera and mixer using the internet as channel. Indeed, once the secure protocol connection is established this way, any camera (or other equipment we will cover later) becomes one more input of our TV Studio HD8 ISO mixer. It also maintains all the return, tally, remote control and intercom functions. Although it falls outside the scope of this laboratory,

BLACKMAGIC STUDIO CAMERA 6K PRO


ATEM CAMERA CONTROL PANEL

simply to indicate the range of possibilities of the camera: it is possible to get our camera signal to a distant destination via the internet and deliver it in SDI format through a specific converter, the Blackmagic Studio Converter.

And precisely the intercom offers yet another curious feature: in addition to the standard 5-pin XLR connector, the classic mobile headphones with integrated microphone can be used in the headphone minijack for this function. This will allow us to connect with the production control through the mixer console itself which, as we said, also features all the call and intercom control functions with the cameras and the studio or engineering areas.

All other connections, controls and basic functions of the camera are those found in the immediately lower model, the Studio Camera 4K Pro, such as the additional integrated microphones, the XLR balanced audio inputs and the mic input minijack for maximum versatility, so we will not be covering them here.

It should be noted that the two USB-C ports can be used both for the focus and remote zoom controls of the Blackmagic itself, and to record content in 6K 12-bit RAW format. The really interesting thing in this case is that these contents share a time code with the files that are recorded internally by the mixer. This is the feature that allows us, thanks to the .drp file that the mixer also records together with our production, to make a remastering of any production with maximum

quality, with extended colorimetry and more elaborate color grading, and even adding the necessary editing adjustments in a tremendously efficient way. Finishing now with the review of the camera, and before going on to the third item of equipment to consider, we briefly get back to that remark from a few paragraphs back, when we commented on the connection options of other equipment to the mixer. Indeed, just as it is possible to send the camera signal via Ethernet, the same

treatment can be applied by sending the output of other compatible equipment, such as other mixers, to our main mixer via the internet. And what is the point of this? Something very simple, yet very powerful and versatile: greatly expanding our possibilities for creation. And once the point has been made, we put this idea on hold again until we go over the practical applications of the set. The third piece of equipment we have had is the ATEM Camera Control Panel, which is perhaps the best known because it has been available on the market for a little longer. The starting concept is very simple: bring together 4 camera RCP controls in a single console, thus offering all their functionality while sharing power and connectivity. In fact, the rear panel is very simple: power supply via 220 V and 12 V, a mini switch with two Ethernet ports, and a USB-C port. The suggested configuration invites us to use one of the Ethernet ports for the mixer and the other for the computer that controls the entire system. Although it is

not mandatory to do so, it will be the way to leave the greater number of free ports in the mixer. And, in order for them to communicate, the only thing our panel needs to know is the mixer’s IP address. In addition to simplifying and optimizing connectivity, one of the features we liked the most is the way of managing up to 8 cameras. By simply turning one of the knobs in each section, the camera on which each RCP works is selected, easily and quickly. But, as there are two memory banks, A and B, simply by pressing the corresponding button, the four RCPs are switched at once. What if I want to always keep one or two cameras accessible between banks? It is very simple. For example: by activating bank A we select cameras 1, 2, 3, and 4; and if bank B is activated, we select cameras 1, 2, 5, and 6. Thus, when switching between banks, cameras 1 and 2 will always remain accessible. Or we can make any other combination in any order at any of the banks. While the camera assignment can be changed in conventional RCPs, this is not usually as straightforward


ATEM CAMERA CONTROL PANEL

or simple. This is one more feature in the line of optimizing the efficiency and performance of the equipment. But, if the Camera Control Panel is only connected to the mixer, how are the different cameras linked and distinguished? The answer can be given by skipping ahead to one of the features justifying our “Synergy” title: the configuration of the entire set is achieved simply by making sure that each Studio Camera has assigned in its menu the same identifier as the mixer input. That is, to the camera that is on the SDI 1 input we assign the number 1, and so on with the others. This is the only key. Only with this, and without having to do anything else, we have configured, in addition to the obvious functions of optics control and colorimetry, a few additional functions. As, for example, that the three tally settings (off, pre and on the air) are already operational. Or that simply by clicking on the iris joystick we set that camera at the Aux 1 output of the mixer. Simple! That is why we have been proactive and have placed the monitor for the Aux 1 output of the mixer with the person who controls this panel. Therefore, operators do not even have to go to the mixer to select their own cameras regardless of the program, and can make all the adjustments required without interfering with the production.

But let’s finish with the description of the RCP, which will not take much longer, before we let our enthusiasm run free and fully immerse ourselves in the synergies and opportunities. The four panels are identical, and no big surprises await there, since they have the usual functions and methods so typical of these devices, thus making life easier for professionals who often have to use equipment from different brands. The only difference would be the screens at the top with their respective keypads, where the first of them is the one that also gathers the panel’s configuration parameters. Going over them quickly from top to bottom, we have 5 scene memories or presets to store frequent settings; we continue with the ND and CC filter selectors (neutral density and color correction), which will naturally only work if the camera has these options available. Also individual controls for gain, color bars, white balance, and shutter speed. In the following section, we find the direct colorimetry settings for highlights and shadows, with which we can also adjust the colorimetry in half tones by holding down the corresponding button while operating the relevant controls. And in the last block we find the display with the indication of the camera assigned to each RCP, which changes to red while on the air, the joystick to control iris and pedestal with its corresponding settings for limits, sensitivity and preview, as well as a useful button that allows each of the RCPs to be disabled individually, and another for calling each individual camera on the intercom.

Simply put, this panel offers, in a compact and easily portable console, the full functionality of 4 simultaneous RCPs, and we do not miss any feature, plus the fact that they are sufficiently agile so as to handle up to 8 cameras with ease. Those who have been interested in this kind of equipment have surely already delved into the relevant features and read or seen many of the presentations that have already been made. Therefore, we are now fully aware of all the benefits that we find when combining these devices, beyond their use as standalone elements. We must not lose sight of the fact that there are many other devices from different manufacturers that perform similar functions. But the aspect that makes these somewhat different is that those items of equipment that have a similar cost are far behind in functions and features, while those that offer similar functionality are well above in price. And, what seems even more important to us, with significantly greater technical complexity.

Let’s bear in mind that, to set up a studio with all the features listed, including remote camera controls, tallies and intercom, we just need this HD8 ISO mixer, this Camera Control Panel and a computer for the production table. The only decisions left to make, depending on the type of production, concern the number of cameras such as the Studio Camera, the optics, and the appropriate microphones; microphony through which, as we saw, we could encapsulate up to 16 stereo channels on a MADI line through a single SDI cable. And all the infrastructure to run the system at full capacity only requires adding two SDI cables per camera, an HDMI monitor for the multiviewer, optionally one or two SDI monitors for auxiliary outputs, and an intercom headset, in addition to power supplies. In other words, a very moderate investment in equipment and a very simple investment in infrastructure make it possible to set up a studio ready to go.
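As an aside, the two configuration ideas described above (each camera's menu identifier matching its mixer input, and the panel's A/B banks re-mapping the four RCPs at once) can be sketched in a few lines of Python. The class and names are ours, purely illustrative, and not any Blackmagic API.

```python
# Illustrative model (not a Blackmagic API) of the configuration logic
# described above: cameras are linked to the mixer simply by giving each
# one the same identifier as its SDI input, and the panel's A/B banks
# re-assign the four RCPs to different cameras at the press of a button.

class CameraControlPanel:
    def __init__(self, bank_a, bank_b):
        # Each bank assigns one camera ID to each of the four RCPs.
        self.banks = {"A": list(bank_a), "B": list(bank_b)}
        self.active = "A"

    def switch_bank(self, name):
        """Pressing the bank button re-assigns all four RCPs at once."""
        self.active = name

    @property
    def rcps(self):
        return self.banks[self.active]

# Cameras 1 and 2 stay reachable in both banks, as in the example above.
panel = CameraControlPanel(bank_a=[1, 2, 3, 4], bank_b=[1, 2, 5, 6])
print(panel.rcps)        # [1, 2, 3, 4]
panel.switch_bank("B")
print(panel.rcps)        # [1, 2, 5, 6]

# The camera-to-mixer link: SDI input n controls the camera whose menu
# identifier is n, so tally and remote control need no further setup.
sdi_inputs = {n: f"camera {n}" for n in range(1, 9)}
print(sdi_inputs[1])     # camera 1
```

The point the sketch makes is the same one the text makes: a single shared identifier is the only configuration key the whole set needs.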


A studio from which we could stream live. And, if we want to expand options and record the productions for further remastering, we would only have to add USB-C discs to the cameras. From here we can continue to grow as much as we want by adding all kinds of equipment but, starting from these minimum elements, let’s see what we can think of. The first thing we could come up with is that a single person could run the whole set, as long as following the action with the cameras is not a requirement. Or assigning one operator per camera if necessary. If we think of fixed installations, large facilities usually have technical equipment, human resources and budgets that are commensurate with their large scale but, below certain levels, this type of configuration is no longer feasible due to all the necessary resources and associated costs involved. This is the first environment in which now, thanks to a reduced investment in equipment, simplicity of installation, and ease of operation, we find many more customers and potential users who can have these functionalities in their facilities.


And, if we need to carry out more complex implementations, by simply adding a few more computers to the network, several functions can be handled simultaneously by several people. That is, great scalability is achieved with hardly any increase in costs. Because, if we remember that the computers are mere controllers and do not do any kind of audio or video processing, very simple machines are enough for the task. In this way, the possibility is now open to auditoriums, assembly halls in universities or schools, small music or event halls, theaters, business conference rooms, hotels or celebration halls, places of worship, etc. that could not consider it before.

One of the activities that is currently booming, and which can benefit from this type of equipment, is e-sports at any scale. Games and tournaments can be broadcast from anywhere with very affordable budgets, something that was unthinkable until recently. And besides, they don’t even need to be all in the same place. Small local television stations, and entities and organizations that intend to have a regular broadcast of their activities, can also benefit from its advantages. And in the same way, there is room for all content creators who need live productions by their own means, such as theater groups, music groups, dance groups, etc.

Logically, this set is especially interesting for audiovisual media teaching centers, due to the added incentive of having advanced equipment available. It has all the features of more complex configurations, without the cost of infrastructure, and can also be renewed more frequently because its amortization period is significantly shorter.

Taking simplification to the limit, conceptually it is possible for a single person to move the equipment, install it and operate it to make a live broadcast from anywhere. That is, the range of potential customers is greatly expanded. If we think that now we can also design work environments with remote cameras that connect through the internet, we see that we are going further and further without increasing the investment.

On the other hand, it is not necessary to limit oneself to thinking just of an installation that is going to be fixed in a specific place. Thanks to its portability and ease of assembly, a highly specialized technical team is not required to carry out the installation and set it ready to go. This further opens the field of potential customers to small production companies, which can now hold events for those customers who need them only on a one-off basis.

Obviously, the insurmountable difference will be in the skill, professionalism and experience of those responsible for carrying out each “mini-production” and the final production. Because let us never forget, please, that proper, well-operated equipment can exceed the results of the best equipment if it is left in unprofessional hands. In this regard, professionalism will be something we will always insist on: knowing how to choose the most appropriate equipment for each circumstance and offer our customers the best results in the best conditions.

Now let’s return to that idea we had left pending development: connecting through the internet devices other than cameras, such as other ATEM mixers. Let’s appreciate the fact that it is now possible, in theory, to replicate (at a scale, of course) a production such as the Olympics at the level of, for example, a championship between several sports centers, where the small productions that take place in different venues converge in a central production control that makes the live broadcast possible. It is both technically possible and financially feasible.

For this reason, our goal and key point that we want to convey, beyond the many benefits of the equipment, is the large number of advantages that we can achieve by interconnecting them to each other. That is the synergy giving us the title. Which, in turn, opens up a wealth of new creative options and business opportunities, simply because budget is no longer a limitation.

We close this text with the desire that these ideas will serve as a mere seed for us to go even further in our projects, and also succeed!
