TM Broadcast International #114, February 2023


EDITORIAL

Content matters more today than ever before. With the M&E industry continuously and exponentially evolving beyond its traditional borders, the ways of delivering that precious infotainment have surpassed all the limits we were once used to.

The techniques for each production stage are evolving in a continuous and completely disruptive way: virtual production techniques that merge the audiovisual, film and video game worlds; distribution methods that parallel the telecommunications industry; infrastructures built on the benefits of computer networks; and on-demand consumption platforms where content is available 24/7. All of this, in fact, already sounds like water under the bridge.

Now the horizon towards which the broadcasting world is looking is that of immersive and virtual reality; more specifically, all those possibilities that would provide "living" content in the already familiar metaverse. And yes, we said it right: living. Because it is no longer a matter of consuming content, but of feeling it, experiencing it and enjoying it as if the viewer were living it in their own skin. Can you imagine the possibilities?

At TM Broadcast International we wanted to bring these concepts down to earth through two experts in the field. On the one hand, we talked with the EBU, specifically with Grace Zakka, Senior Project Manager, to learn how European public broadcasters are positioning themselves on these new modes of immersive consumption. On the other hand, we contacted Carlos J. Ochoa Fernández, Metaverse & XR Senior Advisor, who shares his vision of the capabilities of the metaverse.

We also wanted to focus our attention on one of the technological evolutions our industry is currently experiencing: the far more tangible and commonplace introduction of 5G in broadcast. Advances are constant, and the integration of telecommunications into our media, both to produce and to distribute content, is much more real and everyday than the development of immersive consumption in the metaverse. Since it is time to move on from experimentation and get down to work, we wanted to find out whether 5G networks are ready to deliver everything they promised. In this issue you will find the most cutting-edge 5G Broadcast developments involving the Technical Universities of Madrid and Valencia, two of the world leaders in this technology. You will also learn about the current status of 5G networks with respect to broadcasting from the 5G MAG association.

In addition to these highlights, we also bring you the BBC's experience of broadcasting the European rugby tournament, the Six Nations Championship; a full report on Setanta Sports, the OTT sports distribution platform operating in the Baltic States, Eastern Europe and Asia; and our lab test of Fujifilm's X-H2S camera.

Editor in chief: Javier de Martín, editor@tmbroadcast.com
Key account manager: Susana Sampedro, ssa@tmbroadcast.com
Editorial staff: press@tmbroadcast.com
Creative direction: Mercedes González, mercedes.gonzalez@tmbroadcast.com
Administration: Laura de Diego, administration@tmbroadcast.com
Published in Spain. ISSN: 2659-5966
TM Broadcast International #114, February 2023
TM Broadcast International is a magazine published by Daró Media Group SL, Centro Empresarial Tartessos, Calle Pollensa 2, oficina 14, 28290 Las Rozas (Madrid), Spain. Phone: +34 91 640 46 43.

Metaverse EBU

Interview with Grace Zakka, Senior Project Manager at the European Broadcasting Union, with a passion for platforms: specifically, public broadcasters' own platforms, third-party platforms (aka big tech), and how those worlds collide. Lebanese, based in Geneva; a metaverse explorer, novice gamer and ex-ad woman.

Metaverse and broadcasting

Interview with Carlos J. Ochoa Fernández, founder of One Digital Consulting, to understand all that we could achieve if the immersive-reality technology industry completes the road that still lies ahead of it.

SUMMARY

News 6
Metaverse and broadcast, by Carlos J. Ochoa Fernández 20
5G Special report: 5G MAG; 5G and broadcast technologies; Audiovisual multimedia experiences over 5G networks 44
Playout: Driving business outcomes with a media partnership approach 68
Sports: Setanta, the OTT sports platform for the CIS market 72
Sports: The BBC after the European rugby championship, the Six Nations Championship 80
Test Zone: Fujifilm X-H2S 84

PlayBox Neo and SNews launch Integrated Newsroom Solutions

PlayBox Neo has announced a further expansion of the software connectivity of its television channel branding and studio playout servers, with support now provided for the Arion newsroom system from SNews Broadcast Solutions.

SNews Arion is designed to let newsroom teams search and filter incoming and archived news. Newsroom teams can create their own rundowns, including an exact count of time surpluses or overruns. View layouts and colors can be customized to match a channel's preferred design. Arion also allows configuration of subsections, expected length and video clip duration. Externally sourced news can be verified automatically through channels such as RSS, Twitter and Facebook.

ProductionAirBox Neo-20 is equipped with features designed specifically for live production. These include the ability to trim or reposition every clip in a playlist while the scheduled session is on-air. Commands such as next, jump or shuttle allow the playout order to be modified seamlessly without interrupting the current playout session. Up to four independent players can be configured on a single ProductionAirBox Neo-20 server, each controlled via its own playlist. Each of the four SDI/NDI interfaces can be assigned as a program or preview output. Single-channel or multi-channel user interfaces are available to streamline operation. Content manipulation and delivery can be performed with near-zero latency.

“This is the latest in an ongoing series of partnership agreements with major broadcast product developers,” says Pavlin Rahnev, PlayBox Neo founder and CEO. “SNews has achieved great success with Arion, which is in operation across some of Latam’s highest-profile broadcast networks. MOS interconnectivity allows Arion-equipped newsrooms to perform all the core tasks of content access and news rundown playout directly from the PlayBox Neo Production user interface. Arion’s news graphics template generation and playout are directly integrated with our TitleBox Neo-20 graphics processor. This allows newsroom staff to concentrate on their core activity without distraction, saving time and maximizing production efficiency.”


Unified Streaming launches API-based OTT channel builder: Unified Virtual Channel

Unified Streaming has recently launched Unified Virtual Channel, an OTT-only software solution that delivers a traditional TV-like experience.

It works by repurposing and mixing pre-encoded media assets, previously prepared for VOD streaming, into a live linear stream. Because its API integrates into new and existing streaming workflows, the product saves companies significant time and effort.

To begin, users need a library of ready-to-stream VOD content; a live stream (to mix VOD with live); playlists; and an understanding of how an API connects with software.

With that, users can create all kinds of channels, such as FAST, live events, sports, kids, pop-up, etc. To start, they just need to send a playlist to the API. The solution will scan, sync, and stitch the content. To transition content, they simply submit a new playlist.
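To make the workflow concrete, here is a minimal sketch of what submitting a playlist to such an API could look like. The endpoint URL, payload fields and channel names are illustrative assumptions, not documented details of the Unified Virtual Channel API:

```python
# Hypothetical sketch: create a virtual channel by POSTing a playlist
# to an API endpoint. All names and fields are assumptions.
import json
import urllib.request

API_URL = "https://example.com/virtual-channel/v1/playlists"  # assumed endpoint

playlist = {
    "channel": "pop-up-sports",
    "entries": [
        {"asset": "vod/match-highlights-01.ism", "type": "vod"},
        {"asset": "live/studio-feed.isml", "type": "live"},   # mix live with VOD
        {"asset": "vod/match-highlights-02.ism", "type": "vod"},
    ],
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(playlist).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # the service then scans, syncs and stitches the content
```

Transitioning the channel to new content would then simply mean sending an updated playlist to the same endpoint.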

The solution enables content curation based on viewing histories, interests, demographics, and more. Content owners can target ads, teasers, trailers, and shows that resonate with specific audiences.

“With this new API-based approach we’ve tried to make it as simple and straightforward as possible to launch new virtual channels. We’re really excited to see the innovative ways our customers will use it,” said Mark Ogle, Senior Engineer and Unified Virtual Channel lead developer at Unified Streaming. 


Italian soccer league relies on Dalet solutions

Dalet has announced that the Italian soccer league's production center, based in Lissone, is using the company's solutions for its sports content management and production processes.

The top-flight Italian football league has consolidated the production, management and distribution of its content into a centralized media infrastructure, increasing the production value of more than 400 matches as well as delivery in progressive UHD and 1080p formats over a complete IP infrastructure. The enterprise deployment is subscription-based to maximize flexibility and control costs.

The company's Galaxy five solution serves as the control layer, managing incoming streams from the stadiums and organizing that content across fast production storage, nearline storage and the production center archives.

The underlying media asset management platform and workflow engine tracks metadata and orchestrates key workflows, including match content captured via 100 Dalet Brio IP ingest channels, as well as distribution of digital content packages to major broadcasters, media partners, telcos and teams via WEBPortals, and direct contribution to the stadium during live events via Dalet Brio IP output.

“We took on an enormous challenge in building a new production infrastructure in such a short time – not just the workflows, but also a completely new facility in Lissone able to produce highlights, log match metadata and archive the assets. We needed reliable partners that would deliver and Dalet, together with CVE, did just that for the production part of this project. The solution has now been operating successfully for the whole of the season,” comments Piercarlo Invernizzi, CTO, EI Towers. 


One of the largest Swiss telcos chooses Skyline solutions to automate workflows

Skyline Communications has announced that Sunrise LLC has selected the DataMiner platform for the automated provisioning of its latest generation of CIN and Remote PHY nodes.

Sunrise LLC is one of the main private telecommunications companies in Switzerland. These nodes form the outer edge of the company's Distributed Access Architecture and are directly attached to its Converged Interconnect Network (CIN).

Many of these nodes will be deployed gradually, so it is crucial for Sunrise to set up a platform that can eliminate potential human errors. With DataMiner, they will be able to fully integrate and automate provisioning workflows using templates.

“We trusted Skyline to deliver us the required portfolio of workflows with DataMiner’s Process Automation engine – and that’s working as expected. It was the platform’s open architecture, enabling its integration in our existing OSS platform, and the ability for our team to maintain and customize the different workflows, which made the difference for us. We are also grateful for Skyline’s expertise and valuable contributions during the entire project, from the workflow identification phase, across the definition of the different templates, all the way up to the testing phase,” said Daniel Hürlimann, Head of Access Network DevOps & Design at Sunrise LLC.

“DataMiner has been able to support the team’s commitment to service excellence for many years already, and we are both honored and grateful to now also get this extended role that will allow us to facilitate this important transformation for Sunrise LLC,” said Dominique De Paepe, Market Director Service Provider at Skyline. “It’s also a real pleasure to team up with a company that is so passionate about fulfilling their customer commitment. Their mission to communicate better and work smarter, perfectly exemplified through this project, was what triggered them to rethink all workflows and accept nothing less than a fully integrated process.” 


Blue Lake Public Radio (BLPR) relies on Elenos Group solutions for fast transmitter repair

Broadcast Electronics is part of Elenos Group. The company has announced that its customer Blue Lake Public Radio (BLPR) saw its radio station WBLU suffer severe damage to its transmitter due to powerful storms.

“One of the 1kW transmitters that we use for our station in downtown Grand Rapids went down,” said Klay Woodworth, Director of Broadcasting for BLPR. “Our choices were to bundle up the old unit and send it in for repair or purchase a new unit. We chose to purchase a new unit and then get the older one overhauled so that we would have a back-up.”

Broadcast Electronics Sales Manager Ben Marth had his team build, test and deliver a new STX-1kW transmitter within 48 hours of learning of the problem.

The STX series of transmitters provides energy efficiency, integrated RDS and an HTML web interface, and can be upgraded to HD Radio when BLPR decides the time is right.

“From the first call we made to BE, customer service was excellent. Our staff was able to secure the equipment we needed and BE had it prepared and shipped out to us very quickly,” Woodworth explained. “We were only operating at reduced power for a few days. Once we got the new transmitter on the air, BE was also efficient in getting our old unit overhauled and back to us promptly to provide a much needed back up.”

“We are extremely pleased we were able to promptly return WBLU to full power, and thank Blue Lake Public Radio for trusting Broadcast Electronics to help them deliver their message to the Grand Rapids area,” said Rich Redmond, President and COO of Broadcast Electronics – Elenos Group. “We are thankful to have a very dedicated team of broadcast professionals at the Elenos Group who understand the challenges of operating radio and TV stations in a 24/7/365 environment, and stand ready to jump in and support our customers in need.”


Watford women’s soccer club integrates Pixellot solutions to improve training sessions

Pixellot is one of the companies specializing in AI-automated sports video and analytics solutions. It has recently announced a partnership for the 2022/23 campaign with English football club Watford F.C. Women to deliver video streaming, data, and analysis. The objective is to enhance training methods.

The club will use the Pixellot Air integrated mobile solution for video capture, review, and analysis. After training sessions or competitive games, the solution delivers automatically produced video for coding, tagging, and annotating on a dedicated coaching platform. The technology is already used by premier football teams such as Real Madrid, Bayern Munich, Southampton FC and FC Barcelona.


Silver Spoon develops CBS Sports' Christmas Day 2022 simulcast on Nickelodeon with AI and AR technology

Silver Spoon is a studio that creates real-time content and experiences on its own virtual production platform, built with Unreal Engine technology.

The studio has announced that it recently created an immersive viewing experience for the "kid-friendly" simulcast of CBS Sports' Christmas Day 2022 game on the Nickelodeon network.

The Christmas Day 2022 game was the network's third simulcast designed to bring generations of football fans together. The idea with this "kid-friendly" experience was to push the boundaries even further.

The studio augmented the action on the field with mixed-reality effects and holiday-themed characters, including Santa Claus, a crew of mischievous snowballs, a giant Yeti who constantly wreaked havoc during the game, and, of course, the famous Nickelodeon slime.

To achieve this, Silver Spoon combined artificial intelligence, live AR graphics and mixed reality in a live broadcast. The studio's virtual production platform for the game utilized Unreal Engine and Pixotope for real-time rendering and compositing.

“This broadcast clearly demonstrated producers are not constricted to a floating graphic on the screen,” said Dan Pack, Managing Director at Silver Spoon. “You can do anything anywhere to add several layers of entertainment to an existing broadcast. There are so many ways to use AR to complement and enhance what’s happening in the game, and with the degree of graphics control we can give operators, the viewer has no idea what’s coming next.”

“The show directors and the creative team from Nickelodeon were all involved with us from the beginning,” said Laura Herzing, Executive Producer at Silver Spoon. “All the creative was developed in a way that the directors knew what was being included and were able to give input on how it would fit into a broadcast. This end-to-end collaborative process helped us achieve our goals of creating narrative storytelling, adding unexpected delights for the viewer, and pushing technological boundaries like never before.” 


Flying Features relies on LiveU to broadcast the Santos Tour Down Under

LiveU has announced that, for the third consecutive year, Australian aerial filming company Flying Features has entrusted the company with the live broadcast from the sky of the Santos Tour Down Under cycling race.

LiveU’s high-definition live video streaming and remote production solution was used to transmit live footage to the race’s host production company, Gravity Media.

Adam Huddlestone, co-owner of Flying Features, said, “LiveU’s video quality is remarkable. Thanks to its rock-solid reliability, we were able to take live shots from the remotest areas. LiveU doesn’t need line of sight so we could go as low as we wanted, with creative flexibility, without thinking where we needed to go. We used LiveU for the whole race, four hours each day, following all the cyclists from the sky. We were really happy with the performance throughout the week, going from 0 to 500 feet, with only one second delay.”

“Before LiveU, some of our shoots involved waiting to send the video by hard drive when we arrived back at the heliport but now the live and more efficient workflow means we can hit morning shows, which pick up the live footage immediately. Looking ahead, we believe that drones will play a more central role and we’re already using drones to cover multiple live events in conjunction with LiveU,” said Peter Davis, co-owner of Flying Features.

Chris Dredge, Country and Sales Manager, LiveU Pacific, said, “Flying Features are innovative and longstanding LiveU partners, using their creativity and aerial skills to create incredible live aerial pictures for their clients using LiveU. So many live sports events over the last few years have been able to offer their viewers a more exciting and dynamic viewing experience due to their innovative use of our technology. Both the cycling and yacht races offer important use cases of the reliability and flexibility of LiveU technology, and what can be achieved.” 


DataCore Software acquires Object Matrix

Object Matrix is an object storage and media archive specialist. DataCore has purchased the company to incorporate its appliances and cloud solutions into its portfolio.

This acquisition reinforces the Perifery line of edge devices and solutions, while adding unparalleled talent and expertise to the Perifery team. It will also accelerate the DataCore.NEXT vision of enhancing the customer's journey from core to edge and cloud.

“Gartner predicts that more than 50% of enterprise-managed data will be created and processed at the edge by 2025. We’re excited to expand our Perifery portfolio with innovative solutions that will enable us to lead in edge markets,” said Abhijit Dey, general manager of DataCore’s Perifery business. “The Object Matrix product line perfectly complements our world-class solution portfolio, increasing reliability and agility for customers in the fast-growing media and entertainment edge market. Working with the Object Matrix team, we look forward to bringing industry-leading innovation, and providing continued technology and customer support.”

“This announcement signifies an exciting new stage for Object Matrix, allowing us to extend our reach and product ambitions within DataCore while continuing to develop state-of-the-art on-prem, cloud, and hybrid media storage solutions,” said Jonathan Morgan, CEO of Object Matrix. “By leveraging DataCore’s experienced leadership team and worldwide distribution — consisting of over 400 channel partners and more than 10,000 global customers — combined with world-class engineering, sales, and marketing, we are in an excellent position.”


HBS opens a subsidiary in the USA

Host Broadcast Services (HBS) has announced the creation of a subsidiary to develop broadcast opportunities and cover projects hosted in the U.S. and North America. HBS LLC will open its premises this month, based out of Miami, Florida.

The entity will be focused on managing the current projects HBS is working on in the U.S., as part of its recently extended contract with Concacaf, as well as generating new business in the territory with future missions.

The branch will be led by José ‘Platao’ Rocha, who joins HBS in the role of Managing Director of the U.S. entity. Formerly with U.S.-based broadcaster DirecTV Latin America, where he was VP of Development & Production, Platao has 25+ years of experience in the broadcast industry and will primarily lead business development in North America. The initial team based in Miami will be completed by Enrique Rabasco, Head of Production, and Matthieu Thebault, Head of Business Administration.

“Bringing someone onboard with the wealth of knowledge that Platao has, specifically in the American market, gives HBS a huge head-start as we begin to study the growth possibilities in the territory,” stated Dan Miodownik, HBS CEO. “Combining HBS’ track-record and global experience with Platao’s understanding of the U.S. broadcast industry, and his connections, will be the catalyst for some exciting opportunities for us to keep evolving how we capture and distribute major sporting occasions.”

“The role of Managing Director at HBS in the U.S. allows me to apply all I have learned on the receiving end of broadcast services to the benefit of others in the industry,” said Platao. “HBS brings a unique approach, recognised worldwide, to major events. My job is now to identify the needs of the industry in North America and find solutions leveraging the wealth of knowledge developed by HBS over the last 20 years.”

The first missions to be delivered by HBS LLC will be the 2022–23 Concacaf Nations League Finals and the 2023 Concacaf Gold Cup, held in June and July.


How the idea of the metaverse could change the content we create

The metaverse, the ultimate dream of what immersive worlds will be, has yet to be built; indeed, it has yet to be imagined. The possibilities are endless. The promise of interacting with realities that we will never experience, either because they have been left behind or because they are too far in the future, is exciting for the content consumer. Imagine enjoying live events in interactive virtual worlds, sports competitions beyond the reality that can be felt in a stadium, or educational training that provides the most inaccessible and expensive tools (such as a professional broadcast camera) at the lowest associated cost.

This is an interview with Carlos J. Ochoa Fernández, founder of One Digital Consulting, in which the reader will come to understand all that we could achieve if the immersive-reality technology industry completes the road that still lies ahead of it.

What is the metaverse?

“What is the Metaverse? Yes, no one ever asks me, I know. But if I wanted to explain it to anyone who might ask me, I wouldn’t know what to say.”

How do you define something that does not exist, when everyone talks about it, repeating the same mantra over and over again?

Perhaps we should let ourselves be carried away to ancient times and try to establish some sort of symmetry between philosophy and science, look into the depths of the relationship between space and time, and see where that would lead us.

In Plato's philosophy, the universe is understood as a duality comprising the sensible or material world and the intelligible world of forms or ideas.

For Plato, forms or ideas exist on a different (higher) plane than material reality, and are immutable, eternal, and ideal. On the other hand, the sensible world is the one we perceive through our senses and is subject to change and deterioration. That is, forms or ideas are the true reality and are known to the intellect, while the sensible world is an imperfect imitation of forms.

In short, for Plato the universe is a duality comprising the sensible world and the intelligible world, where the true reality is the intelligible world.

On the other hand, the metaverse refers to a virtual world, a reconstruction of a three-dimensional space made by a computer and a human being, which can be explored and experienced through technologies such as virtual reality, augmented reality and mixed reality. The metaverse is a human recreation, with all its limitations (including technological ones), which in turn allows users to interact with objects, avatars and other digital entities in a virtual environment.

In short, the universe is a concept that refers to everything that exists in its different states, while the metaverse is a human recreation, which allows interaction with objects and virtual entities in a virtual environment, beyond time, physics and imagination.

Another interesting aspect to consider, and one immensely appealing when searching for parallels between universe and metaverse, is the space-time relationship: something that has chained the reality of the human being throughout history, trapping us in our physical, mental and emotional deficiencies and limitations. However, the opportunity offered by the Metaverse to teleport to past or future moments, outside the laws of physics, is something really exciting that transcends the borders of reality, without limitations of space or time-based restrictions.

To sum up, the definition of the Metaverse could be simplified (with no pretense of becoming a reference) as a term used to describe a (set of) virtual universe(s), inhabited by millions of people (avatars) in an immersive, shared environment. It is often represented as a virtual reality space where users can interact with each other in different ways, with digital objects, and so on.

In the metaverse, users are able to take part in different types of activities under various roles or profiles. Games, digital commerce, creative spaces, concerts, educational simulations, virtual events and virtual communities are some of the most representative examples. The notion of the metaverse has become immensely popular through film and science fiction literature, but it also has its own representation in art, painting and music. Not to mention more entrepreneurial and advanced spaces in which large corporations are investing, such as the implementation of this technology in hybrid scenarios that are more integrated with the real world.

The Metaverse hype is striving for a future more oriented towards immersive, richer experiences. The convergence of emerging digital trends, such as the progress seen in virtual worlds and spatial media, artificial intelligence, blockchain adoption, real-time remote productivity, and the democratization of e-commerce, has led to the creation of the 'Minimum Viable Metaverse'. And although many questions remain unanswered, it is already clear that the potential is enormous; hence the large investments being made in companies across converging sectors.

What is the link between the metaverse and broadcast?

Broadcast refers to the various means of transmission used to distribute audio, video, graphics or other types of content to an audience through different media channels, including television, radio and the Internet.

Over the past few years, the broadcasting and media industry has undergone a significant evolution, primarily driven by technological advances and changes in consumer behavior: digitization and online transmission; the use of mobile phones, facilitating access to content at any time and place; social networks for sharing and distributing content; the growing availability of data analytics, which is changing the way media is marketed, produced and distributed; the use of artificial intelligence to automate tasks, improve content creation and streamline audience targeting; and the latest great mega-trend, the creation of interactive content, allowing experiences to be created in interactive media, on immersive platforms or in metaverses.

When we talk about the metaverse, we usually refer to a virtual world or a collective computer-generated space where users can interact with each other. It is a term used to describe the concept of a shared virtual space, normally created by the convergence of a virtually enhanced physical reality and a physically persistent virtual reality, including the combined aggregation of all virtual worlds, augmented reality and the Internet.

These developments continue to shape the broadcast and media industry, and further evolution is likely to occur in the coming years and perhaps the convergence with the Metaverse will be the next step forward.

In any case, we should not forget that each medium requires a specific language that must be understood, adapted and transmitted through the medium to an audience. The transition to a new medium or language requires a process of understanding, learning and acceptance, which we must monitor intelligently.

What steps must M&E companies and public service broadcasters take in order to adopt this new way of consuming content?

Currently, content consumption habits are evolving very quickly, as are tastes and trends in the media. New languages are emerging that require forms of expression that are sometimes very simple and sometimes very complex, but always with a clear, direct and brief message. To adopt the metaverse way of consuming content, media and entertainment companies and public broadcasting services must follow a series of steps that we believe are important to note:

• Research and analysis: A lot of careful and thorough work is needed to investigate and analyze the technology of the 'Metaverse' and its potential impact on the industry.

• Developing a clear, visionary strategy: It is essential to develop a clear strategy to analyze how the commitment towards the ‘Metaverse’ will be addressed and how this will be integrated with their existing channels and operations.

• Acquisition of skills and resources: It is necessary to carry out a gap analysis program in order to acquire the skills and resources necessary to create and distribute content in the 'Metaverse'.

• Content creation: A content creation strategy must be developed and put in place, in such a way that the content is suitable for the experience in the 'Metaverse' and therefore attracts users.

• Active participation: Get involved in groups and communities of experts, actively taking part in the construction and development of the ‘Metaverse’, coordinating actions with other key players in the industry for the development of standards and norms.

• Creation of a community: The creation of a community, seeking links between related parties through networks and proactive engagement activities, will be one of the key factors for a project's success. Only in this way will we be able to access the channels of the groups that are the focus of our action.

• Monitoring and adaptation: It is very important to closely monitor the evolution of the experience in the ‘Metaverse’ and be willing to adapt to changes in technology and user demands.

These are just some of the most important steps that we believe should be taken for a successful adoption of the 'Metaverse', although they must be adapted and focused in a more customized way for each user and company.

What are the main technological challenges associated with the consumption of entertainment content in the metaverse through the XR technology?

Having spent the last 20 years in the field of immersive technologies and having taken part in a good number of projects and experience developments, I have been able to witness firsthand the dizzying evolution of devices and development engines. There is no doubt that this greatly facilitates the process of acceptance and acquisition for end users. However, the big challenge remains the final 'content': complex, interactive, reusable, multi-platform and multi-device content, capable of being adapted and updated quickly and providing added value to users. That is, a tangible and real benefit, beyond the 'wow' effect.

In any case, here is a brief summary of what we see as the main challenges for the world of entertainment in the future:

• Human-computer interaction: creating experiences in the 'Metaverse' that feel natural and are easy for users to use is an ongoing challenge, and it requires a deep understanding of human-computer interaction, the technologies being used, and how to interact with the medium.

• Scalability and performance: among the current barriers to virtual worlds are scalability and performance, which make them unfeasible in most cases. This is one of the most important barriers to overcome, as these technologies must be able to handle a large number of simultaneous users and ensure smooth, uninterrupted performance. Here, everything related to connectivity and accessibility becomes especially relevant.

• Security and privacy: another critical aspect to address in the immediate future is everything related to ensuring the safety and privacy of user identity. The 'Metaverse' must guarantee 100% privacy and security of user data, including protection against hacking and the privacy of personal information in all its aspects (physical and virtual).

• High-quality content: the production of content in the 'Metaverse' requires a large amount of resources and skills, as well as content from various sources and formats under a common ground: 'high-quality', authentic content must be developed and integrated, and effective solutions must exist for its creation and distribution. This is a key aspect, where the adaptation of standards will play a key role in the future. Understanding the medium, the content and its dissemination is essential when it comes to shaping our business model, as are the target audience and the device on which we are going to release this content.

• Platform integration: this aspect is a utopia at present, but it is deeply rooted in the Metaverse's core definition. If we believe in the Metaverse as an evolution of the Internet, each world must be configured as a portal, accessed from a common platform. The 'Metaverse' must be supported by a wide range of platforms and devices, which requires an open, collaborative approach from the technology side.

If a user is already at ease watching content on normal devices, why would they want to access virtual worlds to do this? How is access provided now and how should technology evolve to provide easy, simple and convenient access to content?

New times, new media, new languages for collaboration. We all want to create our content and release it. We want identity and reputation, to monetize our work and to socialize in networks where we find affinity, complicity, markets and opportunities, and where empathizing and interacting are key factors in seeking our difference and value.

This is where we find several reasons why a user may want to access virtual worlds to consume entertainment content in a different way.

Virtual worlds offer us an opportunity to feel that we are there, a feeling of belonging and uniqueness. Creating immersive and enriching spaces and experiences on different devices, allowing users to interact with the context in a meaningful and personal way, is one of the keys.

We are also discovering new ways to interact with content in a novel and creative manner, which can increase its appeal and sense of fun. As if we were inside a video game.

One of the distinct aspects compared with other media is a social connectivity that allows users to interact with others, sharing experiences in attractive, idyllic environments, thus generating empathy.

As for how virtual worlds are accessed and how technology should evolve in the coming years: we can currently access them through different, more or less complex, virtual or augmented reality devices, such as VR or AR glasses. However, the various manufacturers and brands in the sector are focusing on the development of more advanced technologies to make access easier, simpler and more comfortable for users. This could include improving the ergonomics of devices, reducing costs and removing technical barriers hampering access. In addition, technology and communications will need to improve in order to ensure a seamless and smooth experience and offer easier connectivity and full integration with other platforms and devices.

What practical applications will the metaverse have in the consumption of multimedia content? What are the existing limits? What technical and commercial innovation is needed to overcome them?

The multimedia world is the quintessential field of the 'Metaverse', and the opportunities for growth and development are almost endless. A new medium, open to imagination, creativity and innovation, and to discovering new formats that are more immersive, experiential and customized, and that cannot be viewed or experienced in any other medium, will be the key to the future of these platforms. Living your own unique, inimitable experience, with metaphorical and ephemeral universes at your fingertips and on demand.

One aspect that we cannot ignore is that the world's population is aging rapidly, and this group demands high-quality content and experiences. Likewise, this group and others are barred from accessing certain activities in the real world for multiple reasons: financial, physical, and so on. The opportunity to develop practical and real experiences for this group is one of the largest globally.

Some of the sectors where great experiences and projects are beginning to be developed are:

• Cinema and live entertainment: users can access cinema and live entertainment experiences in virtual worlds, enabling a more immersive and participatory experience.

• Games, sports, health and life: virtual worlds offer new ways to play and watch sports and to perform physical and mental activities, allowing users to interact with content in a more meaningful and personal way.

• Online lectures and events: virtual worlds enable online lectures and events to be held through more immersive, interactive and enriching experiences for participants.


And a long list... yet to be discovered...

However, there are currently some limits to the use of the metaverse regarding consumption of multimedia content.

• Some technical barriers remain, as current technology is not yet accessible to most users; lack of connectivity, the need for specialized devices, and a shortage of smooth experiences may limit or delay adoption.

• High costs: the equipment and services required to access services and content in a virtual world can be expensive for a large majority of potential users, which limits accessibility for most of them.

• The scarcity of quality, specialized content limits its appeal to many users. And the creation of content in immersive environments is especially expensive and complex.

Therefore, to overcome these barriers, both technological and commercial innovation are necessary, and this entails heavy investment. Looking at the critical factors, according to a good number of consultants and as proven by our own years of development experience, the aspects to improve would be:

• Accessibility and ergonomics of devices: technological evolution must allow simpler, easier and more comfortable access to, and handling of, devices for users.

• Cost reduction: this aspect requires rethinking the business. Although a high-end television or a smartphone is already very expensive for most users, ways should be found to reduce the costs associated with access to the 'Metaverse' in order to increase accessibility. And this involves quality of content and contribution of value. These will be the keys for the winners of the future.

• Development of quality content: investment in the development of quality content in the metaverse is needed to attract and retain users. Content must be experiential, with new narratives, enabling freedom of movement for users, selection of scenes, feeling the world, experiencing it through the five senses.

• And finally, improved user experience. The technology will need to improve so as to guarantee a seamless, smooth experience for users, ensuring connectivity in an agile manner, and seeking cross-device and cross-platform integration. And even though these experiences may not be identical, it must increase the perception of immersion adapted to each medium.

What will be the possible practical applications in the future?

If we were to view a feature report by the BBC or NBC from 40 years ago about the future of Virtual Reality, we would be surprised to see that the fundamental messages are all the same... In the future, Virtual Reality will allow us... blah blah blah... And although devices, development engines (in short, the technology) have evolved to a great extent, the theoretical areas of application remain the same. So, what has failed? The real-virtual connection is still far from being established, and that is the most critical aspect for the acceptance of technologies that must solve problems of today's real life, not offer vague solutions to problems that do not exist or have already been solved by other means.

This simply means that we must seek to solve real problems in industry, medicine, education, leisure and entertainment from a different perspective: one that is improved, more satisfactory and which brings us benefits in one way or another, but real benefits, either as individuals or collectively.

Although, in theory, the opportunities are almost endless, the process of adaptation and transformation is not simple and requires an important, methodologically well-developed adaptation stage carried out in a sustainable way.

Among the most relevant opportunities, the following should be highlighted:

• Education and training: the ‘Metaverse’ can be used to create more immersive and effective educational and training experiences. Collaborative project development, networking, expert communities, community development and global space networks, etc.

• E-commerce and new business opportunities: virtual worlds can be used to conduct business meetings, hybrid product displays and sales in a more efficient and customized way. Living or creating an immersive, unique, customized experience for a user can be expensive, but even more expensive is getting and retaining a customer.

• Tourism and travel: virtual worlds can be used to create virtual tourism experiences, allowing users to explore places and cultures around the world without having to leave home. This aspect is vital for groups such as elderly people who are physically or materially unable to travel. Or simply for those who may not be in a position to tour the world in a week and teleport from the North Pole to the Bahamas at the click of a button.

• Entertainment and social events: the metaverse can be used to create more enriching social and entertainment experiences, such as parties, concerts, and sporting events. This is an area where the generation of creative multiverses opens countless windows for development opportunities. The creative space is truly spectacular, and it is a space yet to be explored.

• Health and wellness: the metaverse can be used to design and develop more customized, effective health and wellness programs focused on cultures and people, allowing users to interact with health professionals in a virtual environment. Immersing yourself in a calm bath in the middle of the ocean without danger is an experience that would be impossible in the real world and violates all laws of physics.

There is no doubt that all these potential areas of application, as has already been reiterated on several occasions, will depend on the evolution of technology, including increasing accessibility, reducing costs, improving user experience and developing quality content. 


Grace Zakka, EBU: “The metaverse will be a place to relive certain events in the past or imagine new ones in the future”

Grace Zakka is a senior project manager at the European Broadcasting Union with a passion for platforms: specifically, public broadcasters' own platforms, third-party platforms (aka big tech), and how those worlds collide. Lebanese, based in Geneva, she is a metaverse explorer, novice gamer, and ex-ad woman.

We had the opportunity to hear her opinion on how the media have to adapt to new technologies and what they have to learn from them.

What is the metaverse?

That's a good question – there isn't one agreed-upon definition yet, but I personally like the one from The Future Today Institute that defines the Metaverse as the persistent, immersive, virtual world beyond. Particularly the word "persistent", meaning the presence that is consistently updated.

What is the link between the metaverse and broadcast?

The link is what links broadcast with any new realm – audiences. If audiences are there exploring, creating, learning – then broadcasters should explore this space as a new way to engage with them.

It is really important for media, especially public service media (PSM), to reach and engage with their online audiences; one of the core values of PSM is to reach and serve all audiences.

Finally, it is a new way for us to tell and share stories.


Which steps should M&E companies and public service broadcasters take to embrace this new way of consuming content?

It's not always straightforward for legacy media organizations to explore new spaces. Small steps help – starting with a use case or a project; bringing curious people in the organization together to experiment and play around with the possibilities... but most important would be to learn from the experience and find out why and how they can thrive in this new realm.

Once organizations start to understand this space and how they can work within it, and what the possibilities and limitations are, they can plan accordingly and start setting strategies.

What are the main technological challenges associated with consuming entertainment content in the metaverse through XR technology?

On one level, we are already consuming content using XR technology – AR filters on your smartphone are one very practical example. But the challenge remains in how fragmented the tech behind the metaverse is and will be. There isn't a single company that is building the metaverse; there isn't one standardized piece of tech... so this will be a challenge for broadcasters wanting to build experiences, and for audiences, who will be lost on which tech or world to go for.


If a user already consumes content conveniently on regular devices, why would a user want to access virtual worlds to consume it? How is it accessed now and how should technology evolve to provide simple, accessible and convenient access to the content?

But who says you can't access virtual worlds conveniently on regular devices? Any gaming experience on your phone or your gaming console can count as a "virtual world". It is already so much more accessible.

I agree that wearable tech still needs some work before it can be truly mainstream like your smartphone is. And until wearables get comfortable, affordable, and accessible, I’m afraid people will still rely on their phones or gaming consoles.

Where are we now in terms of the practical application of these technologies in broadcasting? What are the limits and what is needed to overcome them?

I would say public broadcasters are in the experimentation phase – we see some cool projects here and there trying out different ways to "be in the metaverse". They need to keep trying to see what works, relying on data to see where their audiences are, then figuring out the formula for creating content audiences can engage with.

What will be the possible practical applications in the future?

I'm really loving the emerging trend of metaverse for good – having a safe space where people can be themselves, more accessibility features to better help people in need, being able to relive certain events in the past or imagine new ones in the future...

Are there any pilot projects in EU public broadcasters dealing with this trend?

I wouldn’t say pilot! We’ve seen some really interesting projects from EBU Members.

What projects does EBU have in hand in this regard and what are its objectives?

Our Members are working on some really interesting projects. One that was recently completed, and has won some awards, came from BR in Germany. They created a world using VRChat to remember the tragic events of the Munich '72 Games.

https://www.br.de/extra/munich-72/socialvr-munich-72-100.html

Regarding EBU’s work, what is its objective?

When it comes to the Metaverse, or any other "new" topic, our main purpose is to help our Members understand and navigate the space. One of our greatest strengths as the EBU is facilitating knowledge exchange between our Members: they learn from each other and we learn from all of them.

At the moment, we have some Member use cases the community can learn from, we also have different communities who look at the metaverse – from strategy to the tech needed to build it. 


Metaverse and broadcast

Technology is constantly evolving, and the metaverse is a concept that is already very present in our lives. We spoke with Guido Meardi, CEO at V-Nova, to get his opinion on the present and future of the metaverse.

What is the metaverse?

It is still a concept and not yet fully developed, so beware of people who are too definitive about what it is.

The Metaverse – along with Web 3.0 – is one of the names often given to the next iteration of the Internet, which promises to further break down geographical barriers by providing new virtual degrees of freedom for work, play, travel, and social interaction. In practice, it's an enhanced and virtual-reality-based internet user interface – mostly 3D, mostly interactive and mostly social – that will replicate the inherent depth and intuitiveness of the real world, as opposed to the flat interfaces (think Instagram, Amazon.com, Netflix, Cnn.com, Salesforce, Microsoft Office, etc.) that we use today.

Often mentioned in close connection with the term "eXtended Reality" (XR, the combination of Virtual Reality and Augmented Reality) and with decentralized-infrastructure technologies such as blockchains and Non-Fungible Tokens (NFTs), the Metaverse promises to be the next step-change in the evolution of networked computing after the introduction of the World Wide Web back in the 1990s, when we went from text-based interfaces to "browsing" hypertexts with multimedia components, and the introduction of Web 2.0 back in 2004, when websites and apps started featuring real-time and persistent user-generated content.

The metaverse is often imagined as a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual reality. Some examples of expected areas of application include gaming, social media, entertainment events, work collaboration, training, and commerce.


What are the main challenges associated with consuming entertainment content in the metaverse through XR technology?

Aside from the big claims, so far most metaverse experiences have only been satisfactory to visionary GenZ and GenY early adopters, eliciting skepticism in many GenX and Baby-Boomer observers (especially financial analysts).

The overall user experience and the visual quality for XR must be exceptional, given that you are watching a screen a few centimeters away from your eyes. Users expect highly detailed and realistic 3D models, at least at the level of blockbuster video games, but there are many more pixels to render, for each of the two eyes, at a higher frame rate: this requires a lot of computing power and advanced graphics technology. And there is the rub. To provide Fortnite-like graphics, XR actually requires more than twice the processing power that would be necessary for a more typical TV display. Since we don't want to wear a Playstation5 (or two) on our face, either we accept much more basic graphics quality, or we must tap into processing power that goes beyond that of the XR headset.
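To get a feel for the scale of that gap, here is a rough back-of-envelope sketch; the resolutions, frame rates and supersampling factor below are our own illustrative assumptions, not figures quoted by V-Nova:

```python
# Back-of-envelope comparison of rendering throughput: 4K TV vs. XR headset.
# All numbers below are illustrative assumptions.

tv_w, tv_h, tv_fps = 3840, 2160, 60      # 4K TV panel at 60 Hz
tv_pixels_per_s = tv_w * tv_h * tv_fps

eye_w, eye_h, xr_fps = 2000, 2000, 90    # ~2K x 2K per eye at 90 Hz
supersample = 1.4                        # typical extra render scale per axis
                                         # to compensate for lens distortion
xr_pixels_per_s = 2 * (eye_w * supersample) * (eye_h * supersample) * xr_fps

print(f"TV : {tv_pixels_per_s / 1e9:.2f} Gpix/s")
print(f"XR : {xr_pixels_per_s / 1e9:.2f} Gpix/s")
print(f"XR / TV ratio: {xr_pixels_per_s / tv_pixels_per_s:.1f}x")
# Roughly 0.50 vs 1.41 Gpix/s, i.e. about 2.8x: consistent with the
# "more than twice the processing power" figure in the text above.
```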

The only alternative to the much-criticized reminiscences of Second Life thus lies in "split computing", i.e., decoupling the consumer client device (which must be light, power efficient and cost effective) from the rendering engine (which must be computationally powerful and possibly leased by the minute of use). The latter makes sense in the cloud, but this comes with the added challenge of having to maintain ultra-low-latency interactivity within the constraints of the delivery networks, often including a wireless component. Mainframe 3.0, with a twist.

Cloud split computing is also essential to interoperability. Currently, there is no standard for how developers can create virtual worlds ("metaverse sites") that can be experienced across multiple platforms, from smart glasses to XR headsets to mobile phones. The simplest way to achieve this interoperability – as well as to provide a lag-free experience for multiple users interacting in the same virtual world – is to perform the rendering computation in the cloud, so that each end-user device receives a point of view of the same virtual scene and the device itself only needs to be compatible with ultra-low-latency video streaming, enabling very different devices to access the site regardless of their computing power.
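As a thought experiment, the division of labour Meardi describes can be sketched as a simple per-frame loop; every name below is hypothetical and merely stands in for real tracking, rendering and streaming components:

```python
# Hypothetical sketch of a split-computing frame loop: the client only
# tracks pose and decodes video; the cloud does all the heavy rendering.
import time

class CloudRenderer:
    """Stands in for a cloud GPU that turns a viewer pose into a
    compressed video frame of that viewer's personal point of view."""
    def render_view(self, scene_id: str, pose: dict) -> bytes:
        # Photorealistic rendering + video encoding would happen here.
        return f"frame({scene_id}, yaw={pose['yaw']:.1f})".encode()

class ThinClient:
    """Stands in for a headset or phone: light, cheap, power efficient."""
    def __init__(self):
        self.yaw = 0.0
    def current_pose(self) -> dict:
        self.yaw += 1.5                       # pretend head movement
        return {"yaw": self.yaw}
    def display(self, frame: bytes) -> None:
        print("displaying", frame.decode())

renderer, client = CloudRenderer(), ThinClient()
for _ in range(3):                            # one iteration per video frame
    pose = client.current_pose()              # upstream: a tiny pose packet
    frame = renderer.render_view("plaza", pose)  # cloud renders this user's view
    client.display(frame)                     # downstream: low-latency video
    time.sleep(1 / 90)                        # 90 fps budget; real systems pipeline
```

Because every device only decodes a video stream of its own point of view, smart glasses, headsets and phones can all join the same scene regardless of local computing power.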

How is broadcasting of multimedia content linked to the metaverse? What can it bring now, and what can it bring for the user?

So far broadcasters – and media companies in general – have been mostly watching the Metaverse bandwagon from afar, with a mix of curiosity and detachment. Quite different from the fashion world, which rapidly embraced gamification and NFTs as a way to stay relevant to new generations and to develop new (and often surprising) sources of revenue, such as selling limited-release garments for gamers to wear when playing their favorite games.

As mentioned above, the Metaverse is a social interactive space, between humans and a wide variety of content. Traditional video (for instance shown on large virtual panels), volumetric immersive video (wherein viewers can change their point of view in real time with 6 Degrees of Freedom, as if they were there) and, in general, broadcast multimedia content will still form a key component of the Metaverse, whether it relates to entertainment, sports or news. Similarly to how we consume this content today, we will come together in the Metaverse to enjoy multimedia content together; indeed, we are seeing the first instances of these services – see for instance Meta Xstadium and Horizon Worlds. Of course, this increases the challenge of having to provide a realistic common environment where people can interact in real time. Initial attempts to provide similar experiences entailed putting multiple users in front of a virtual flat video (whether rectilinear or 360-degree), while thanks to cloud rendering with low-latency XR streaming we are now seeing examples of both photorealistic 6-Degrees-of-Freedom movies and photorealistic 6-Degrees-of-Freedom metaverse events with the participation of multiple people, each controlling a game-like avatar. Between Meta Quest headsets, Bytedance Pico headsets, Playstation VR2 headsets and the like, more than 10 million high-spending families can already be reached with these types of services, which makes for an interesting early-adopter audience.

Broadcasters should play a role, but they still bear some scars from past unsuccessful attempts with stereoscopic 3D and from 2018's short-lived stints with VR 360 video, so they fear that this may go down the same path. As trite as it sounds, I believe this time is different. If they don't act, large social networks and tech companies are already working on filling the gap in their place.

If users already consume content conveniently on regular devices, why would they want to access virtual worlds to consume it?

For the same reason users wanted UHD HDR on 70-inch OLED panels even though they already had black-and-white analog TV working conveniently on 24-inch CRT screens decades before.

Users have always embraced new technologies that allow them to consume content in a more realistic way. We went from still to moving pictures, from black and white to color, from 4:3 analog to 16:9 digital, from SD SDR to Ultra HD HDR, and now we are headed towards increasingly immersive video and sound. Immersing ourselves further in the content we enjoy is a natural next step, and as soon as viewers get used to higher quality, it's difficult for them to accept anything less.

However, for a new level of quality to be successful, the technology needs to provide a seamless way for the user to access the content. This means light devices, competitive price points, interactivity and – of course – engaging content.

Is the technology ready to deliver realistic worlds? Why is a realistic sense of total immersion necessary? How can technology help to achieve these capabilities?

This is one of the most important topics of debate, since creating realistic metaverse experiences is a significant technical challenge and some of the example Metaverse sites made available so far have been judged underwhelming. In fact, some of the experiences recently rolled out have been more detrimental than helpful in making a case for the Metaverse, since they gave the false impression that the technology is still a long way off, when it is not.

From my privileged point of view, I'm happy to answer yes, the technology is ready to deliver realistic worlds. But (there is a "but") only with the proper combination of best-of-breed available technologies, which neither consumers nor most industry observers have yet been able to experience in action. As I mentioned before, the only way to achieve photorealistic immersive quality with lightweight and relatively inexpensive client devices is to decouple and untether client devices from rendering engines, separating computation from display.


This largely entails moving rendering engines to the cloud and transmitting to each end-user device a personalized video of its Metaverse view. In this way, latest-generation GPUs such as NVIDIA's RTX 4090 – which boasts almost 10x more TFLOPS than the PlayStation 5's GPU – can be used to produce photorealistic immersive experiences, maxing out the display capabilities of client devices.

Remote rendering requirements in terms of bandwidth and latency are significant, but they are being overcome through the deployment of the latest compression technologies, such as the new MPEG-5 LCEVC compression enhancement standard, which makes it possible to stay within realistic wireless bandwidth constraints, reduces compute requirements and guarantees more consistent ultra-low latency. The overall infrastructure cost for such experiences is in the ballpark of less than 1 USD per hour, which is acceptable for a range of high-value experiences.

In short, current infrastructure and devices are already capable of serving realistic Metaverse experiences to a subset of well-connected consumers, mostly for use cases capable of justifying the per-hour cost of remote compute. Some of these experiences will start being showcased and rolled out over the next two years. The availability of content will drive the usual virtuous cycle, drawing more users to equip themselves to access that content.

Technology that makes the metaverse work today comes from the world of video games. What can the technology associated with broadcasting contribute to improving the performance of this digital context?

We could say that multiplayer video games like Fortnite are already a form of Metaverse, but we may also say that the Metaverse is precisely about using those user interface archetypes for applications other than gaming.

Traditional media companies bring to the table their ability to mobilize large concurrent audiences by producing engaging content – typically requiring a lower degree of interactivity.

In a word, broadcasters will contribute scale, progressively bringing to the Metaverse all the necessary creativity, technology, and infrastructure to craft metaverse events with millions of concurrent users.

The Metaverse will provide a great place for people to interact and socialize around broadcast events.

How can cloud technology support the growth and expansion of the metaverse? What about mobile networks such as 5G and 6G?

5G has been a painful financial investment for telco operators: they are investing tens of billions of dollars without concrete opportunities to grow their revenues, often with a negative Return on Invested Capital (ROIC). At least metaverse companies – like social networks and streaming companies – will continue to benefit from those investments.

Cloud technology is fundamental to the growth and expansion of the metaverse since, as I mentioned, a sensible way to provide realistic experiences on lightweight display devices entails time-sharing the use of cloud compute resources. We must eliminate the need for each user to invest in an expensive gaming machine (and/or have it next to them) to access the metaverse. For the Metaverse to scale as a new type of Internet user interface, it must be available to everyone when needed, with the compute capacity one needs at any given time. The cloud is perfect for this.

Decoupling rendering engines from receiving devices has the abovementioned advantages, but it puts an enormous amount of strain on the delivery network to ensure a high-quality and ultra-low-latency experience. A convergence of new technologies will help in this regard, from more efficient networks such as 5G, 6G and Wi-Fi 7 to novel multi-layer compression approaches such as MPEG-5 LCEVC, which are ideal for ultra-low-latency transmission thanks to their capacity to discard data packets during sudden drops in wireless bandwidth without compromising the overall quality of the signal.

[Image: V-Nova LCEVC imagery]

How does V-Nova approach the evolution associated with broadcasting entertainment in the metaverse? What technological solutions does it propose to meet all these challenges?

V-Nova's compression technologies are key enablers for high-quality volumetric worlds, as demonstrated by our 2022 Steam VR release of the first-ever photorealistic volumetric movie, Construct (Construct VR – The Volumetric Movie on Steam, steampowered.com), which received a Lumiere Award from Hollywood's Advanced Imaging Society.

At the same time, V-Nova's LCEVC is a key enabler of high-quality, ultra-low-latency XR streaming, multiplying the number of households and offices that can be served with cloud rendering.

MPEG-5 LCEVC (Low Complexity Enhancement Video Coding) is ISO/MPEG's new hybrid multi-layer coding enhancement standard. It is codec-agnostic in that it combines a lower-resolution base layer, encoded with any traditional codec (e.g., H.264, HEVC, VP9, AV1 or VVC), with one or more layers of residual data that reconstruct the full resolution. The LCEVC coding tools are particularly suitable for compressing detail efficiently, from both a processing and a compression standpoint, while leveraging a traditional codec at a lower resolution effectively reuses the hardware acceleration available for that codec and makes it more efficient. LCEVC has been demonstrated to be a key enabler of ultra-low-latency XR streaming, producing benefits such as the following (a toy sketch of the layering principle follows the list):

(1) Bandwidth that fits within typical Wi-Fi constraints, even for the highest XR qualities

(2) Reduction of latency jitter, possibly the biggest impediment to quality of experience with cloud rendering

(3) Low processing complexity, which translates into compatibility with existing XR devices

(4) Ability to send 14-bit depth maps along with the video, which is critical to AR applications
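To make the layering principle concrete, here is a toy numerical sketch of the base-plus-residual idea – plain numpy arithmetic for illustration, not V-Nova's SDK or the actual LCEVC toolset:

```python
# Toy illustration of a multi-layer enhancement scheme in the spirit of LCEVC:
# a low-resolution base layer (standing in for "any traditional codec") plus a
# residual layer that reconstructs full resolution. Plain numpy, no real codec.

import numpy as np

rng = np.random.default_rng(0)
full = rng.integers(0, 256, (720, 1280)).astype(np.int16)  # one luma plane

# Base layer: 2x downsample. A real system would encode this with e.g.
# H.264/HEVC/AV1, reusing that codec's hardware acceleration.
base = full[::2, ::2]

# Decoder-side prediction: upsample the base back to full resolution.
predicted = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)

# Enhancement layer: residuals that correct the upsampled prediction.
# For natural images these are small and sparse, hence cheap to compress.
residual = full - predicted

# Receiver-side reconstruction: prediction + residual restores the original.
reconstructed = predicted + residual
assert np.array_equal(reconstructed, full)
print(base.shape, residual.shape)  # (360, 640) (720, 1280)
```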

V-Nova has developed the first, and so far the only, LCEVC SDK available commercially. We stand behind the adoption, implementation, and deployment of LCEVC as it is one of the smartest video compression technologies available, able to overcome some of the key challenges now being faced by the Metaverse. 


5G MAG

What is 5G MAG?

5G MAG is an association whose objective is to set up a framework for collaboration between the media industry and the telecommunications industry. On the latter side, we try to include both carriers and manufacturers from the IP and 5G worlds; that is, companies that are looking to the future of these technologies.

Right now we are working on 5G – not strictly on 5G as a standard, but in collaboration with the standardization body, that is, 3GPP. This body covers technologies such as LTE (4G), 5G and, soon to come, 6G.

Basically, we deal with global standardization bodies. The idea is that, through technologies that can be implemented on mobile devices at a global scale, we will be able to develop technical solutions for the production of audiovisual content – contribution networks and production tools – and, on the content distribution side (streaming, for instance), use mobile networks to provide quality of service.

5G Broadcast will only evolve if the industry communicates

This is necessary because, for example, if you take a look at how distribution is currently carried out, broadcast is done with standards such as ATSC or DAB. We, however, do it with 3GPP, which produces the standards featured on mobile phones, in cars, in connected homes, and so on.

It is about providing it with a global perspective, so that we find technological solutions supported by standards that have a global scope.

Why 3GPP vs. ATSC or DVB standards?

This question can be answered by taking into account two aspects. From the point of view of broadcasting, getting involved in the standards associated with mobile telephony would give it a more global footprint than ATSC can provide. That is, phones are sold all over the world and there is 3GPP technology all over the world, so competing at that scale opens up greater opportunities.

On the other hand, technology is becoming global and, in addition, there is a lot of content consumption through new devices. This means that people no longer buy a radio to get radio content; they do so with their smartphones through a certain app, connecting a headset or a smart speaker to their device.

That is, we would get the advantage of introducing broadcasting into global technological ecosystems that are already firmly established on the Internet.

Is there any degree of reluctance to implementing broadcasting over 3GPP standards?

The main obstacle certain countries are facing is that they have committed to developing technology that was at the forefront a few years ago but is now becoming obsolete.

Technology is moving in a direction that right now calls into question other, more specific standards. Is a young person going to buy a 60-euro device just to listen to the radio? Most likely not.

People who have committed to developing certain networks are directly fighting the direction that the evolution of technology is taking. We focus on the area of technological development for all those who may want to dive into a world of possibilities.

What timeframe do you foresee to meet the goal of bringing the broadcasting industry under the 3GPP standards?

We split the tasks into three steps.

The first one is the definition of use cases and opportunities: the technology being developed allows you to achieve a whole series of functionalities. That would be the mantra. We also consider whether there is any need that the standard does not cater to; on that side, we try to improve it so it can grow in the future.

The next step would be to achieve standardization. The documents published by 3GPP, for example, run to more than a thousand pages; for users to know how to carry out a specific function, they must search through a lot of information. Our job is to make profiles: shorter standards, more oriented to specific applications and services.

For example, should an end user want to provide streaming services on mobile applications with quality of service control or latency control, they would need to follow certain steps that address application design or network connection issues. Our duty is to provide brief, specific instructions for the various applications.

The third and final step is development and implementation. We have a software development community where, once use cases and standards are understood, they are put into practice. We have developed our own GitHub repository with software tools that can be installed on mobile devices or networks.

In the long term, do you expect the broadcast industry to adhere to this set of standards? How soon could this situation occur?

It all depends on the users. If the trend in multimedia applications is for users to buy specific equipment, it will take longer. However, if users go for connected devices, then however reluctant the industry may be, it will have to play along. The traditional broadcast industry often complains about global technology providers, but I don't think those companies are more shrewd or intelligent; they simply cover a need that wasn't being met. When you see companies like Amazon biting into the traditional broadcast market, perhaps it's because this market hasn't moved fast enough to offer solutions in that realm.

As for deadlines, 5G is actually a continuum, meaning it is subject to ongoing evolution. We don't treat the 5G standard as something finite with a given launch date. The 5G standard we will see in two years will surely have little to do with the one being implemented now.

What does the standard allow today and what does it have to achieve in order for this to become a regular possibility?

We are now seeing that there are three possible uses in production.

The first corresponds to the introduction of 5G SIM cards in backpacks. Actually, this solution has very little added value, because what you're doing is just swapping one card for another. However, the 5G standard provides much more value than this use alone.

Regarding the evolution of added value, we find two cases. One would be the use of private networks: as the 5G network right now doesn't offer the features needed for seamless video content contribution, what users do is deploy a private network to provide clean spectrum in order to create a dedicated channel. The other possibility is to do the same using the carriers' 5G networks.

For both cases, what is needed is a series of configurations of the technical possibilities, in order to settle the solution in advance and guarantee the contribution. That is, a concrete way to negotiate with the network a series of parameters to achieve good quality in each specific project.

The truth is that, as regards the latter, all these functionalities are yet to be developed. Therefore, the industry should roll up its sleeves and get acquainted with the standards, understand the technology and its opportunities and, above all, implement solutions that allow it to test this series of advanced functionalities.

One of the biggest drawbacks most often heard from content producers is that the network is not available to cover events that are not controlled.


Does that capability exist at present within the standards we have, even if not yet implemented? And if so, what is the problem?

Such capability does exist, and the problem, in my opinion, lies in the lack of communication between the industry and the mobile carriers. If you use a private network, you do not need to negotiate anything with a carrier; but should you need one, you would have to make them understand what your requirements are and what tools you would like to see implemented on that network, and be as specific as possible about your needs. A consensus must be reached regarding functionalities and technologies; once agreed, you can establish a business model with the network collaborator. In Industry 4.0, things are being done in a more synchronized way in terms of functionalities, but the audiovisual industry lacks unity in regard to several aspects of technology.

It seems that everyone is waiting for the total availability of a network that would offer infinite possibilities, but it still hasn't arrived, and the feeling is that 5G is not delivering on its promises. Would this be a fair reflection?

Of course; that conclusion is what I constantly see in the industry. People expect a lot, but without specifics in terms of requirements. Many variables have to be taken into account. What exactly do you need? What is your bandwidth? What is your scenario? How many base stations do you need? What capacity do you need: constant or dynamic? These questions often go unanswered, but they are necessary. From the carriers' point of view, too much is being asked and things are not being made easier. The industry must understand that 5G is under continuous development, and also that Release 18 provides many more facilities than Release 15, launched in 2020.

You say that carriers need to know a series of requirements, but how do they access that knowledge?

The key is to establish a framework for exchange and collaboration. The industry has to understand what can be done, and carriers have to understand what the technical requirements are. You also have to be patient; sometimes things will go well and sometimes they will not, but even when things go wrong you may be able to know why that is so and what to do to improve them.

I have attended some lectures where the case was presented of connecting cellular bonding with 5G. Then they tell you that the latency was a second or two, and they blame 5G for it. The problem is that you are using cellular bonding, which involves certain protocols and considerable latency; maybe if you knew that the 5G network can be configured properly to offset this, you would not have to resort to cellular bonding and would thus avoid those huge latency times. The functionalities have not been properly understood, and in the end the result is not as expected. That's why it's so important to know the technology firsthand

by sharing experiences and knowledge.

To what extent has 5G-MAG achieved this collaboration goal?

From the creation of 5G-MAG in 2020 until now, there has been an immense evolution in terms of understanding the technology and developing software and products. We have done quite important work to improve this but, above all, the most useful thing has been the implementation of the standards and the development of software.

For example, we have a development program called 5G-MAG Reference Tools, and I would like to highlight two main products. One is the integration of 5G Broadcast on streaming platforms – a streaming platform carrying linear content that is broadcast directly to the smartphone or through a certain app. The other is Media Streaming, which is a set of standards; here I would highlight the case of the BBC, which distributes radio through mobile networks while providing quality of service to radio users over the Internet.

We also have our own CDN node, based on the 3GPP standard. It can be implemented both on the network and with the service provider. It guarantees a whole series of configuration functions, and it provides status information on behavior, reception, quality of service and buffer control on the phone.

These practical cases you speak of are a reality and deliver the quality you mention, right?

All this already exists, and we are developing it and putting it into practice, but it was – so to speak – hidden.

Would you highlight any other case studies that may underline these unknown 5G capabilities we’re talking about?

From a production and contribution point of view, 5G-MAG ran a study on three use cases.

Regarding the production of content – examples being camera-to-studio or camera-to-cloud – what we did was find out which network functionalities are useful for that type of service.

The second use case was developed within a private network for production – not only the contribution of video, but producing something by interacting with signal sources and receivers. And last, what we think is the most complex: the implementation of 5G for microphones on a musical stage; that is, using 5G instead of the current analog microphones on certain UHF-band frequencies.

In this regard, we have analyzed the different capabilities and technical requirements needed to perform all this, resulting in the launch of a work item; that is, a specific work theme so that, for example, when you arrive at a 5G network with cameras or professional devices, all this equipment can be registered on the network automatically without having to acquire SIM cards from a carrier. The standard allows you to make an automatic record of all the equipment, which disconnects once you finish covering the event. It allows you to set up a totally private network. Another issue is timing and synchronization; that is, you would not have to use a master clock provided from outside, because the 5G network itself would provide it. This is a functionality the network allows, but one that has not yet been put into operation.

And then there's the whole service part in terms of uplink, network slicing, how to offload some features from the equipment devices to the network, or Edge Computing issues; that is, if you have to carry out some complex operations, how to interact with computational tools within the 5G network.

It is necessary to detect which processes, due to latency or performance issues, can be carried out on the network and which ones cannot, and what the limits are. Many times I've heard things like "do the encoding on the network", but that doesn't make any sense. Encoding has to be done on the equipment because, with a wireless connection, what you need is for that connection to be optimized.

So, is the goal to extract a set of features and functionalities from the whole standard that will serve everyone in a common way, or not?

It is, and it is not. On the one hand it is not, because each application can have its own requirements. However, what would be very useful is understanding whether any of these functionalities that are very specific to an audiovisual scenario have features in common with other scenarios.

And are you also working on achieving something like that?

Exactly. In fact, at 5G-MAG we are looking at other partnerships such as, for example, 5G-ACIA, which deals with the automation of industries. In these instances, we have detected use cases and technology that are basically the same.


5G and broadcast technologies

Broadcasting technologies seem to be evolving at the same pace as the other technologies around us. In Spain we are still using first-generation DVB-T (Digital Video Broadcasting – Terrestrial) Digital Terrestrial Television (DTT) technology, pending a migration to DVB-T2 (Digital Video Broadcasting – Terrestrial, 2nd Generation), which has been on the market for more than 10 years. The switch to DVB-T2 will increase the spectral efficiency of DTT networks by more than 50%; that is, it will improve the ability to transmit information by more than 50%, which will result in an increase in the quality of the content being broadcast – for example, more high-definition and ultra-high-definition content.
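As a rough, illustrative calculation of that efficiency gain – the bitrates below are typical published operating points for an 8 MHz UHF channel, not the exact figures of the Spanish network:

```python
# Back-of-the-envelope check of the ">50% more capacity" claim for DVB-T2
# versus first-generation DVB-T in one 8 MHz UHF channel. The bitrates are
# illustrative operating points (DVB-T with 64-QAM, code rate 2/3, guard
# interval 1/4; a mid-range DVB-T2 configuration), not exact network figures.

CHANNEL_MHZ = 8
dvb_t_mbps = 19.9
dvb_t2_mbps = 33.2

for name, mbps in (("DVB-T", dvb_t_mbps), ("DVB-T2", dvb_t2_mbps)):
    print(f"{name}: {mbps} Mbps -> {mbps / CHANNEL_MHZ:.2f} bit/s/Hz")

gain = (dvb_t2_mbps / dvb_t_mbps - 1) * 100
print(f"capacity gain: {gain:.0f}%")  # ~67%, i.e. "more than 50%"
```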

Since the introduction of DTT at the end of the last century, we have gone through the mobile telephony standards 2G (digital telephony and SMS text messaging), 3G (the beginning of the mobile internet and the arrival of the first iPhone), 4G (mobile broadband and the popularization of apps and social networks) and now 5G, which is much more than a new generation of mobile communications with higher connection speeds. 5G's ambition is to provide the wireless connectivity for the digital transformation of many industrial sectors. 5G also offers new possibilities for the broadcast industry.

On the one hand, 5G technology can facilitate the convergence of mobile, fixed and DTT networks, as it can now reach smartphones and tablets, which had not been possible with any DTT technology. In fact, the international digital television standardization body, DVB (Digital Video Broadcasting), declared that there would be no new DTT standard, and that a possible new broadcasting standard would have to come from the 3GPP standardization forum, the body in charge of the standardization of mobile technologies.

On the other hand, the audiovisual production sector currently uses different wireless technologies and will therefore be among the first to take advantage of 5G. Thanks to 5G's inherent compatibility with IP (Internet Protocol) and the use of cloud computing resources, it could also act as a catalyst to improve existing workflows or enable new ones in various scenarios.

The Institute of Telecommunications and Multimedia Applications (iTEAM) of Universitat Politècnica de València (UPV) in Spain is one of the landmark research centers in the application of 5G technology to broadcast technologies, both on the distribution side and on the contribution (content production) side. From the mobile communications group of iTEAM-UPV, we have actively participated in the standardization processes of broadcasting technologies such as DVB-T2 (and its mobile profile, T2-Lite), DVB-NGH (Next Generation Handheld) and ATSC 3.0, as well as in the optimization of 3G (MBMS, Multimedia Broadcast Multicast Service) and 4G (eMBMS, enhanced MBMS) mobile broadcasting technologies.

Much of the work on 5G and television has been done in research projects of the European Union's Horizon 2020 programme through 5G-PPP, the partnership for public-private collaboration in 5G (5G Infrastructure Public Private Partnership, www.5g-ppp.eu) – specifically, in the 5G-Xcast, 5G-TOURS, 5G-RECORDS and FUDGE-5G projects, three of which (5G-Xcast, 5G-RECORDS and FUDGE-5G) have been coordinated by iTEAM-UPV. Domestically, it is also worth highlighting the Castilla-La Mancha 5G project within the national 5G pilot plan of Red.es, as well as the collaboration with Radiotelevisión Española (RTVE) within the framework of the Global 5G 2019 event, in which the first live television broadcast made exclusively with 5G technology in Spain took place. It was also the first broadcast worldwide carried out with a stand-alone 5G network.

Next, I will briefly explain the main innovations of the two main projects – 5G-Xcast for 5G content broadcasting and 5G-RECORDS for 5G content production – as well as highlight the main results of the other projects.

The 5G-Xcast project, Broadcast and Multicast Communication Enablers for the Fifth-Generation of Wireless Systems (€8 M budget, 18 partners from 10 countries, run period June 2017 to July 2019, www.5g-xcast.eu), was the first international research project on 5G Broadcast and became the flagship project on 5G content broadcasting worldwide. Within the project, much effort was devoted to the definition and analysis of the technology popularly known as 5G Broadcast, and also to the design of a native 5G solution for broadcast.

This technology – known as 5G Broadcast – allows broadcast services to be transmitted to 3GPP devices and, in some areas, has come to be considered a possible substitute for Digital Terrestrial Television. It meets essential requirements of DTT, among which we can highlight:

• Receive-only mode, in which having a SIM card is not required to receive the service. This allows free-to-air broadcasting, like in traditional DTT.

• Transmission mode in which all radio resources (spectrum) are dedicated to broadcast transmission.

• Ability to create single-frequency networks, in which several transmitters broadcast the same content on the same frequency in a synchronized manner.

Although the technology is known as 5G Broadcast, it is based on the 4G mobile technology referred to as LTE (Long Term Evolution). In fact, the technical name of the technology is none other than LTE-based 5G Terrestrial Broadcast. The 3GPP marketing move conceals the fact that 5G Broadcast does not feature the technological advances of 5G proper (which we will explain below). The 5G-Xcast project was the first to demonstrate that 5G Broadcast technology would have considerably lower spectral efficiency than second-generation DTT technologies (such as DVB-T2 and ATSC 3.0). This work* received the award for the best article published in 2020 in the specialized journal IEEE Transactions on Broadcasting.

* Manuel Fuentes, De Mi, Hongzhi Chen, Eduardo Garro, Jose Luis Carcel, David Vargas, Belkacem Mouhouche, and David Gomez Barquero, "Physical Layer Performance Evaluation of LTE-Advanced Pro Broadcast and ATSC 3.0 Systems", IEEE Transactions on Broadcasting, vol. 65, no. 3, pp. 477-488, 2020.

Despite its limitations, 5G Broadcast technology represents a milestone in the world of broadcasting, being the first 3GPP broadcast technology with a high potential to reach all smartphones and tablets. Although numerous pilots have already been conducted around the world, there is still no commercial rollout of 5G Broadcast. But remember that 5G Broadcast was designed for a purpose other than that of the DTT standards: while, for example, DVB-T2 is highly optimized for the distribution of linear television services for fixed reception on rooftops, 5G Broadcast has the ability to reach portable and mobile 3GPP devices, and it has the rich legacy of 4G LTE technology. In addition, 3GPP has recently included in Release-17 the possibility of using 5G Broadcast in the bandwidths traditionally used by DTT (6, 7 and 8 MHz). Without making predictions about what may happen at the World Radiocommunication Conference (WRC) in terms of the spectrum currently dedicated to broadcasting technologies, 5G Broadcast is sure to play an important role at that conference.

In the Castilla-La Mancha 5G project (budget €1.2 M, 6 collaborating companies for the broadcasting use case, duration July 2020 to January 2023, https://www.red.es/es/initiativas/proyectos/proyectopiloto-5g-castilla-lamancha-2a-convocatoria), iTEAM-UPV developed a 5G Broadcast solution using a radio platform defined by open-source software and general-purpose processors. In collaboration with Telecom CLM and broadcast equipment manufacturer BTESA, a pilot was conducted on a network equipped with high-power infrastructure in the city of Toledo, with enough radio frequency power to cover the entire city. This was the first pilot using an SDR (Software Defined Radio) platform for both the transmitter and the receiver on infrastructure such as that traditionally used for DTT broadcasting. SDR solutions not only enable rapid testing of new functionalities for the standards, but also allow greater flexibility for software updates and decreasing costs. Major manufacturers of DTT equipment already have software-based solutions. This work has had a great impact on the 5G-MAG


(5G Media Action Group) industry association, where collaboration has been undertaken with other partners on the development and validation of the SDR platform, and the group has provided an open-source version of the code at https://www.5g-mag.com/reference-tools.

On the other hand, the 5G-Xcast project was a pioneer in defining point-to-multipoint (multicast/broadcast) components for native 5G networks – based on the new NR (New Radio) and 5GC (5G Core) technologies – which provide greater efficiency and flexibility compared to LTE, both at the radio level and at the system architecture level.

The 5G-Xcast project proposed a paradigm shift with respect to point-to-multipoint transmissions in 4G LTE (remember that 5G Broadcast technology is based on LTE), where point-to-multipoint transmissions were initially conceived as a complementary service with a predefined static configuration. The solution proposed in 5G-Xcast integrates point-to-multipoint broadcasts into the overall system architecture as an internal network tool to optimize transmission when multiple devices consume the same content in the same cell. This allows dynamic and flexible use of point-to-multipoint transmissions by the network in a manner transparent to the user, in order to maximize the efficiency of network resources. Multicast/broadcast components were designed with the aim of maximizing compatibility and interoperability with point-to-point (unicast) transmissions, to allow dynamic switching between the different modes of operation, as well as their potential use in parallel. This approach leads to a simplified, unified service design across access networks. In addition, it not only enables the deployment of stand-alone 5G networks for the distribution of radio and television services, but also encourages all network equipment and all devices to support point-to-multipoint broadcasts. The fact that 5G Broadcast requires specific equipment supporting the technology at the network level and – mainly – at the terminal level is probably the main barrier that has slowed down investment in this technology, leading to the eternal chicken-and-egg dilemma that the first mobile broadcasting technologies (DVB-H, MediaFLO, etc.) had to face.
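As a conceptual sketch of that network-side decision – my own illustration, not 5G-Xcast code; the threshold is an assumed, tunable operator policy:

```python
# Conceptual sketch of dynamic unicast/multicast selection: switch to
# point-to-multipoint when enough devices in one cell consume the same
# content, transparently to the user. Not 5G-Xcast code.

from collections import Counter

MULTICAST_THRESHOLD = 3  # assumed operator policy

def choose_delivery(sessions):
    """sessions: list of (cell_id, content_id) pairs, one per active viewer."""
    audience = Counter(sessions)
    return {
        key: "multicast" if viewers >= MULTICAST_THRESHOLD else "unicast"
        for key, viewers in audience.items()
    }

demo = [("cell-1", "news"), ("cell-1", "news"), ("cell-1", "news"),
        ("cell-1", "film"), ("cell-2", "news")]
print(choose_delivery(demo))
# {('cell-1', 'news'): 'multicast', ('cell-1', 'film'): 'unicast',
#  ('cell-2', 'news'): 'unicast'}
```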

In 5G-Xcast, point-to-multipoint components were designed for the radio interface, the Radio Access Network (RAN) and the network core (5GC). The work of 5G-Xcast has had a major impact on 3GPP in the third version of the 5G standard (Release-17), where 5G-MBS (Multicast Broadcast Services) has been specified with multicast/broadcast components for NR and 5GC. However, the requirements for DTT services are not being taken into account. The problem is that the first two versions of 5G do not support point-to-multipoint broadcasting, so the opportunity was missed to include this functionality from the outset so that all devices would support it. Support for 5G-MBS technology will be optional, and the degree of deployment in future devices remains to be seen when Release-17 hits the market (devices with the second version of the 5G standard, Release-16, are only now starting to arrive).

The 5G-RECORDS project, 5G key technology enableRs for Emerging media COntent pRoDuction Services (budget €7.5 M, 18 partners from 11 countries, duration September 2020 to October 2022, www.5grecords.eu), addressed the design, development, integration, validation and demonstration of 5G components for the professional production of multimedia content. The project was structured around three actual content production use cases – live audio production, multi-camera wireless studio, and immersive content production – each of them with a number of 5G-enabling technologies and innovative multimedia components.

In addition, the project focused on the use of private 5G networks, similar to a Wi-Fi network but with the difference that the spectrum is not shared with any other network, thus ensuring greater performance without relying on the public network of a telecommunications operator. Another advantage lies in privacy, since all the information circulating on these networks is accessible only to their owners. These on-demand 5G networks – private "super Wi-Fi" networks with revolutionary features for the industry – make a lot of sense for remote professional content production environments, offering higher reliability, lower latency and the ability to sync different devices. The project developed a complete 5G multi-camera production system that was used in a broadcast for the initial stage of the 2022 Tour de France in Copenhagen (Denmark), which was a total success and had great media impact.

The immersive content production use case focused on the use of 5G millimeter-wave bands (26 GHz), which enable bandwidths of hundreds of MHz and gigabit-per-second speeds comparable to fiber optics downstream (download). Upstream, current terminals have limitations in reaching those speeds. Despite these limitations, the project organized a live concert that was captured with multiple perspectives and shots, in which the filmmaker could move the point of view of the image freely. And with an Edge Computing deployment to ensure low latency, it was possible to play content both live and in replays.

Last, the audio production use case focused on Ultra-Reliable Low-Latency Communications (URLLC), as professional live audio production has very demanding requirements in terms of response time, availability and synchronization. This use case was not demonstrated in a pilot, but the progress made in the testing and validation of 5G URLLC technology has been a great leap forward for the state of the art and has interested some major manufacturers of 5G chips.

I would not like to finish this article without mentioning the first 5G touring orchestra performance, carried out under the 5G-TOURS project (www.5gtours.eu), led by RAI and in collaboration with LiveU, which merged music and distributed content production with 5G on the streets of Turin in December 2020.

With all these advances, it can be said that, unlike content distribution – where there are still reasonable doubts about the use of 5G for broadcasting services, and a future 6G Broadcast technology may be needed – 5G will be widely used for content production in the not-too-distant future.


Audiovisual multimedia experiences over 5G networks


The increased capabilities offered by the evolution of communications networks provide a set of possibilities beyond the mere direct exploitation of bandwidth, latency, density and scalability improvements. This new horizon, which can be characterized at a very high level by the integration of intelligence into the network, features the ability to manage network resources dynamically, based on various service criteria, as well as to maximize the use of computing possibilities at the Edge and in the Cloud.

Among the potential applications, those relating to audiovisual multimedia are some of the most demanding, with very high requirements in terms of network capacity and computational resources.

For content services, the decentralization allowed by the flexibility of the new networks, together with the power to deploy services at the Edge, provides a real ability to change classic workflows, bringing the processes linked to content distribution to the point of acquisition. Through virtualization mechanisms, these processes can be deployed dynamically; relying on the programmability of the networks, they can be adapted to the specific demand that arises, modified at specific times and even complemented, to ensure the service, with real-time quality control systems that support anticipatory decision-making about how services are provided and the resources assigned to them.

In this line of thought, the Visual Telecommunications Application Group (GATV) of the Polytechnic University of Madrid (UPM) has worked on the deployment of proofs of concept to assess the actual capacity to provide high-level audiovisual content services by making the most of the aforementioned set of capabilities: Network Function Virtualization (NFV), Software-Defined Networks (SDN) and computing at the edge of a multiple-access network, the so-called Multi-access Edge Computing (MEC). On this basis, the architecture and services within the communications network are structured. With network function virtualization, various network functions are implemented in software (VNFs, Virtualised Network Functions), typically over a Network Function Virtualization Infrastructure (NFVI), which decouples network functions from hardware, resulting in increased infrastructure flexibility and reduced operating and equipment expenses. Additionally, Physical Network Functions (PNFs) are hardware boxes that provide a specific functionality.

On the other hand, SDNs handle the routing and forwarding functions in network device software. The use of SDNs offers three important keys: it separates the control plane from the data plane, provides centralized management and, finally, turns the entire network into a programmable entity. With SDNs and NFV, the complexity of device design is reduced, an efficient network configuration is achieved, and the working context can react to status changes much faster than with conventional approaches, which provides great flexibility and cost-effectiveness in the implementation of services – in this case, audiovisual content distribution services.

Last, implementation over the MEC enables the migration of processing and storage resources closer to demanding users, thus reducing latency and aggregated traffic, as required by audiovisual multimedia services. NFV, SDN and MEC are mutually complementary technologies that lead the evolution of network architecture, offering new services – in this case, operated for the provision of content services.

One of these pilots was carried out within the framework of the 5G-Media (Programmable edge-to-cloud virtualization fabric for the 5G Media industry) project, which aimed to create a flexible environment based on a service platform with a Software Development Kit (SDK) that would allow network-agnostic users to deploy solutions, in the form of virtualized network functions, for the implementation of audiovisual multimedia services at the Edge. The main goal of this pilot was to verify the feasibility of carrying out virtualised remote production over 5G networks in real time, an experience that took place at the Matadero (Madrid) premises, in collaboration with Radio Televisión Española (RTVE) and Telefónica, during the Radio 3 broadcast of "Life is a dream", showing how technological advances in the 5G domain and edge computing can provide enough potential to offer high-quality audiovisual content services through dynamic and efficient allocation of resources.


It offers an alternative for audiovisual multimedia services and applications whose requirements go beyond the capacity, latency and bandwidth offered by the network.

Nowadays, professional production of events for broadcast is largely associated with a large investment of resources: money, heavy equipment to be hauled, OB vans, and long preparation and verification times for installations. In addition, dedicated connections are established between the event venue and the station site to ensure the required high throughput and transmission quality, with the associated cost; the bandwidth requirements for conventional television production are around several gigabits per second. All in all, this gives an idea of the magnitude of covering an event and the investment associated with its production.

In contrast to this approach, 5G technologies propose a new paradigm for the management of services distributed and deployed at the Edge, thus guaranteeing the quality of the service – generally evaluated as quality of user experience – even for the most stringent and demanding network content service requirements.

The remote production pilot targeted the live production of an event from a location other than the event itself. To do this, it takes advantage of the ability of the 5G network to send the camera, audio and control signals to a production room, making it unnecessary to bring that equipment on site during production or to send over the people in charge of it. The architecture used is detailed below.

Acquisition was carried out by means of three Sony cameras – two PMW-500s and one PDW-700 – which deliver a signal of 1280x720 pixels with a depth of 10 bits per channel, 50 frames per second and progressive scanning, as per the SMPTE 296M standard. Those signals are fed, uncompressed, into the Edge using the SMPTE ST 2110 standard (Professional Media Over Managed IP Networks), working with video, audio and ancillary data as separate elementary streams. This enables separate processing and the generation of the desired workflow for each of them, even allowing their management at different points. SMPTE ST 2110 also allows only the active image area to be sent, with savings close to 40%. These frames were embedded with an Embrionix device allowing two different raw streams to be embedded.
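As a rough sanity check of the "close to 40%" figure, one can compare the active-picture bitrate of 720p50 against the 1.485 Gbps HD-SDI line rate, which also carries blanking. The sketch below assumes 10-bit 4:2:2 sampling, the usual SMPTE ST 2110-20 payload; the figures are illustrative:

```python
# Rough check of the "savings close to 40%" figure: SMPTE ST 2110-20 carries
# only the active picture, while the 1.485 Gbps HD-SDI line rate also carries
# blanking. Assumes 720p50 with 10-bit 4:2:2 sampling (20 bits per pixel on
# average); illustrative, not the pilot's measured numbers.

WIDTH, HEIGHT, FPS = 1280, 720, 50
BITS_PER_PIXEL = 20   # 10-bit luma + 10-bit chroma shared across 2 pixels
HDSDI_GBPS = 1.485    # SMPTE 292M serial interface rate

active_gbps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL / 1e9
savings = (1 - active_gbps / HDSDI_GBPS) * 100
print(f"active video: {active_gbps:.3f} Gbps, savings vs HD-SDI: {savings:.0f}%")
# active video: 0.922 Gbps, savings vs HD-SDI: 38%
```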

The hardware chosen for this task is an Embrionix SMPTE ST 2110 device, which is controlled by software and allows the inclusion of two different raw streams, with management from a proprietary program that facilitates network configuration and the setting of routing parameters, as well as the Session Description Protocol (SDP) file containing the configuration of the signals sent to the MEC for handling the IP video signals. To manage these IP signals, which are very demanding in terms of bandwidth, the required local area network (LAN) is created by means of a switch configured with a Small Form-factor Pluggable (SFP) transceiver and connected to the service provider's network through a 10 Gbps connection.

A schematic view of the architecture can be seen in Figure 1.

[Figure 1: architecture of the 5G remote production pilot]

The VNFs deployed at the Edge are based on open source. They are flexible, scalable and capable of evolving more easily than traditional networks, and they can be used for both live and delayed production. They make it possible to automate tasks and create customized workflows using intelligent systems, adding capacity where and when needed. In addition, updating them is simpler than with their physical equivalents.

The VNFs developed under the pilot are as follows:

• vUnpacker: allows the use of the UDP protocol with the SMPTE ST 2110 IP standard. It enables the decoding of RTP video over IP, creating a regular TCP workflow in Matroska format at the output. As input, the function uses an adaptation of the SDP file.

• Media Process Engine (vMPE): allows video signals to be modified and mixed. The final program signal (PGM) is produced near the site to harness the computing power of the network edge. For the pilot, its function was to allow switching between the three input signal sources, as well as creating a composition between two different sources. The MPE is split into two modules:

- Server: the VNF that deploys the editor's kernel. It provides two types of outputs: previews and the program signal (PGM). Preview signals are sent to the client application in low resolution (with very high compression); for this compression, the server uses M-JPEG. The PGM signal is composed of the source selection or source composition chosen by the filmmaker and is provided as a raw signal with 4:2:0 sampling (packaged in Matroska format).

- Client: the graphical interface for production of the program signal. In the case at hand, the filmmaker is located in the central facilities and not at the event site. For server-client communication, the functions use a simple TCP-based command-line protocol, with three main types of operations: client-to-server commands, command-response errors, and server preview signals.

• vCompressor: responsible for encoding audio/video signals to reduce their bandwidth using the H.264 standard. It is based on open-source coding techniques and libraries included in ffmpeg. Compression introduces latency into the signal transmission and can be critical in some cases, so implementing virtual functions at the network edge is an advantage for reducing latency.

As a final summary of the pilot's workflow, the camera baseband video signal is carried via HD-SDI and converted to IP using the SMPTE ST 2110 standard. Thereafter, the VNFs described above interact with the signal: the vUnpacker obtains raw video from the IP signal, the vMPE acts as a video switcher controlled by the filmmaker from the transmission facilities (where they can view the preview signals from each source) and, finally, the vCompressor compresses the PGM (program) signal using H.264 encoding. The output signal is the one finally used in the broadcast.
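As an illustrative stand-in for the vCompressor step – the article notes the VNF builds on ffmpeg, so the sketch below simply wraps the ffmpeg CLI from Python; file names, bitrate target and low-latency options are assumptions, not the pilot's actual configuration:

```python
# Hypothetical stand-in for the pilot's vCompressor VNF, wrapping the ffmpeg
# CLI to encode the PGM feed to H.264 at the ~10 Mbps reported by the pilot.
# File names, bitrate and latency options are illustrative assumptions.

import subprocess

def compress_pgm(src: str, dst: str, bitrate: str = "10M") -> None:
    """Encode the program signal to H.264 with latency-oriented settings."""
    cmd = [
        "ffmpeg", "-y",
        "-i", src,                # e.g. the Matroska output of the vMPE
        "-c:v", "libx264",
        "-preset", "ultrafast",   # trade compression efficiency for speed
        "-tune", "zerolatency",   # avoid lookahead/B-frame buffering delay
        "-b:v", bitrate,
        "-an",                    # audio handled as a separate essence stream
        dst,
    ]
    subprocess.run(cmd, check=True)

# Example (requires ffmpeg on the PATH):
# compress_pgm("pgm_feed.mkv", "pgm_out.ts")
```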

The pilot's overall results were highly satisfactory. Regarding bandwidth, the three monitoring signals used 4.93 Mbps, and the PGM signal 10 Mbps. Latency, which is tremendously critical for any audiovisual multimedia service, was measured with a GPS time application, and the measured average was 500 ms. Finally, regarding the use of virtualized resources, the use of processors (CPUs), memory and disk storage managed with the NFVI was measured. The largest allocation was made to the compressor – 8 CPUs and 4 GB of memory – and the workload on them in operation ranged between 50 and 75% of capacity.

Table 1 summarizes the main impact indicators for the two models and the improvement provided by 5G-based remote production.

Indicator            Traditional   5G Remote   Improvement
PGM bandwidth        18 Mbps       10 Mbps     45%
End-to-end latency   1 s - 2 s     500 ms      50%-75%
Deployment time      4 hours       10 min      96%
Power consumption    10 kW         2 kW        80%
Cost                 ~€50,000      ~€37,000    26.75%

Table 1

Driving business outcomes with a media partnership approach

The pace of change in media has never been faster, and the very nature of the change has never been more complex.

International markets are accelerating as media companies find new ways of bringing live video experiences to consumers at an unprecedented scale. But with market opportunity often comes complexity and risk.

Global macroeconomic conditions in 2023 are challenging and increasingly unpredictable amid rapid shifts in consumer spending, supply chains, labour markets, and access to investment. Today, media brands are required to execute a globalised and coherent distribution strategy while excelling in multiple new disciplines across brand positioning, commercial pricing strategies, and subscriber acquisition and retention. Businesses must be strategically agile to tackle all these factors and succeed. Faced with endless complexities, inflationary pressures, and rising costs, media companies already have enough to contend with. Adding the burden of building and running a high-availability playout environment makes that one headache too many. In this fast-evolving, competitive marketplace, a 'build it yourself' approach can't keep up with the speed of transformation in media – and it's a strategy that incurs considerable cost and risk.

Broadcasters, streaming services, and content providers are increasingly focused on the core elements of their business, looking to maximise operational efficiencies and achieve sustainable commercial growth. As a result, world-leading organisations are trusting proven expert partners in playout services to take care of technology and operations and allow them to focus on content and creativity. Playout partnerships built around economies of scale, gold-standard technology, and industry-leading talent are helping media brands navigate a path to the future with reliability and cost-predictability.

Operational challenges in a multi-platform world

The media landscape is converging faster than ever before. Media businesses are operating in a hybrid environment across an ever-expanding set of frontiers, delivering high-quality content experiences to diverse global audiences across both traditional and newer digital viewing platforms – simultaneously and round the clock. All the while, content owners deploy revenue strategies encompassing a mix of advertising, subscription, and pay-per-view models. Organisations with long-established and successful business models in domestic markets are experimenting with hybrid models as they take their business across borders and continents – for example, BBC Worldwide, DAZN or WWE. In the race to reach fast-growing digital audiences with tailored, high-value programming, content owners are launching free ad-supported streaming TV (FAST) channels – and they want to do it quickly. Industry analyst Omdia forecasts global FAST revenues will hit $12 billion by 2027, doubling in market size over the next four years. In a rapidly exploding FAST market, the clue is really in the name – media owners need to act now to get ahead of the competition.

Going it alone and undertaking complex, expensive and capital-intensive infrastructure projects to harness new digital opportunities doesn't make sense for broadcasters that need to secure business agility, efficiency and cost assurances. And in 2023, that means everyone. Through a media partnership approach, organisations can harness economies of scale and unlock critical efficiencies by sharing assets such as data centres, networks, and Master Control Rooms (MCRs), with shared teams deployed across multiple flavours of services and business models. It also addresses the increasing challenge of maintaining talent with the necessary, future-proof skill sets.

Reduce business risk with a flexible approach

Whether you label it agility, flexibility, or responsiveness, the ability to try new things is at the core of our industry – it's media DNA. Pop-up channels, new subscription models, one-off live events, new interfaces, and a myriad of other potential changes to how content is delivered: technology and skills are often the only limiting factor. Leading European broadcasters like TV5MONDE have favoured a media partnership approach to launch global OTT services seamlessly and efficiently, bringing high-quality content to viewers across continents. Media services partnerships overcome the inherent risk of placing 'big bets' on the future of our market. Gaining access to best-of-breed playout infrastructure and skilled expertise that can deliver outcomes based on deep technical capability reduces risk while enabling the innovation that powers the next generation of breakout services. These risks are best managed by media services partners for whom this is core business and who can diversify them across a broad customer base.

Show me the outcome

While it is still possible to rationalise building in-house on the grounds of competitive differentiation, or where the transition to alternative business models is too difficult to contemplate, progressive organisations are embracing a media services partner approach that is outcome-focused. Ultimately, business leaders don't want to be trapped in continual refresh cycles for in-house technologies to meet constantly evolving requirements. Instead, they want to trust proven partners that can deliver a whole host of non-core activities as a service, freeing up creative minds to focus on content, subscriber acquisition, and achieving business goals.

A media partnership model isn’t just about simplifying playout complexities in a fragmented market; it’s about playing where it counts and driving business value and sustainable growth. In a media industry undergoing considerable upheaval, world-leading broadcasters should no longer gamble on an uncertain future. 


Setanta Sports


The OTT sports platform for the CIS market

Setanta Sports is a company that specializes in content distribution. Its bet is on the commercialization of sports competitions through digital markets, specifically on OTT platforms. Its base of operations is in Georgia, and its area of action covers its own country, the CIS market area and the Baltic countries. Its fast-growing platform, designed by Endeavor Streaming, is in full technological expansion. The company's goal is to offer consumers the best possible quality and a personalized experience. We spoke with Tamar Badashvili, CEO of Setanta Sports, and she gave us an update on their evolution plans.

How was Setanta Sports born and how has it developed over the years?

The story of Setanta Sports started more than 20 years ago. It was an Irish company that started buying sports licenses for some countries in the European market. After that, it expanded all around the world. The company is still run by the same founders, but is headquartered in Georgia. With all these sports licenses, the managers started to sell them on to other, bigger companies, like beIN Sports, for instance. With these profits, around two years ago we acquired the business in Georgia, Setanta Sports as it is today.

One year ago we started actively promoting our OTT platform together with our TV channels. After some time, we looked at the figures in Georgia and found out that they were quite impressive. That’s how we decided to replicate the same business model in other countries.

With this in mind, we acquired the whole CIS market, excluding Georgia, and also the Baltic region. Then we decided to enter the Ukrainian market just before the war started: we signed the agreement at the end of January and, unfortunately, the war started at the end of February.

After that, we decided to diversify our targets and, recently, we entered the Philippines market. We are continuously expanding our business, and I expect that next year we will have one additional market, or even several, in our portfolio. Now we are moving on to other countries, such as Azerbaijan, which is quite promising from today’s perspective.

How has your OTT platform been developed?

Our OTT platform has been developed by Endeavor Streaming, a company specialized in digital sports markets. With its platform as a base, we purchased and continue to purchase licenses and broadcasting rights for various sports.

Tamar Badashvili, Deputy CEO at Setanta Sports; Levan Gvaramia, Product Development Department Director

Apart from these distribution services, what other services does Setanta Sports offer?

The business is developed in three main directions.

The first one is TV distribution. We cover 15 markets and have TV channels in each of them. It is the main part of our business. In most countries we have at least two channels, sometimes even three.

The second part is the OTT platform, because we understand that most people, especially those who watch soccer, are switching to digital channels on a daily basis. Users watch sports and use a second device to check statistics or share emotions on social networks. On our OTT platform or on social networks, you can interact and share your emotions with other people.

The third part of our business is social networks, above all our YouTube channel. On this channel we have many subscribers, and also viewers who frequently watch our content because we publish highlights and interviews for each market. Now we are working on regional content that will also be part of our communication strategy on YouTube, as well as Facebook and Instagram.

The world of digital content distribution is evolving around the user experience. One of the ways to improve it is to add interaction to the platforms. Do you plan to introduce these capabilities?

We have not integrated social tools into our platform yet. We are developing them for our news portal, which is in the creation phase. There we will give our users the opportunity to interact with the content.

On our OTT platform, as it is made and supported by Endeavor Streaming, the modifications we would like to make would entail great efforts, especially for security reasons.

Regarding our platform, at the moment our business is focused on trying to offer the highest quality in streaming. Therefore, introducing extra functionalities or social tools would run the risk of not offering the highest possible quality.

Wouldn’t it be a good idea to integrate all these capabilities into a single platform?

In fact, we are considering it for the future. And I’ll tell you more: right now we are negotiating with one of the main companies offering a co-watching experience. Thanks to this capability, viewers of certain sports content would be able to enjoy the competition together and share moments and emotions without giving up the quality that watching this content on the platform gives you.

The second tool we want to introduce is a source of statistics. The goal is to prevent our users from having to resort to a second platform or application.

We arrived at these two development directions thanks to our own research. Interestingly, our findings tell us that users of such platforms do not want a group chat or a forum, as other leading streaming platforms have; after what happened with COVID-19, what they want is the possibility to enjoy the content together.

What is your business model?

The most important part is, of course, content distribution. On a descending scale, in second place are the revenues we receive from OTT; we expect that in the future it will become our main source of revenue. Below that is sponsorship, which we achieve through agreements with companies in other industries, such as betting, telephony, finance, e-commerce platforms, etc. Another source of revenue is the sublicenses we sell to other television channels. Finally, there are the revenues from advertising on our platforms.

Additionally, we also sell social media partnerships together with other brands, but this is the smallest part of the business.

Speaking of advertising, are you thinking of introducing some kind of OTT advertising model?

We believe that this is not the best experience for the user who already pays a price to access the content.

In our case, we do not have commercial breaks or advertising inserts in live broadcasting, but we do in video-on-demand content. The VOD content we offer consists of highlights.

However, we did introduce these models where the platform is free-to-air. In the Uzbekistan and Philippines markets we offer the platform for free; users just have to register. In those cases, we add advertising spots.

What experience personalization strategies does the Setanta Sports platform offer?

We are in the process of implementing them. Right now we are focusing on micro-segmentation. When users register on our website, we know their cell phone number and email address, and we use micro-segmentation to send them information about the content they are interested in. If a user watches the Premier League, for example, they will be interested in other leagues as well, so we promote LaLiga content in their profile. The effort we are making is to run these promotions in an individualized way, not by broad segments.

In this regard, we also want to offer personalized advertising experiences following the same model of personal interests.
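
To make the mechanism concrete, here is a minimal sketch of interest-based micro-segmentation as described above. The league names and affinity rules are our own illustrative assumptions, not Setanta Sports' actual system:

```python
# Hypothetical sketch of individualized promotion: map what a user
# actually watches to related competitions to promote in their profile.
# Affinity rules here are illustrative assumptions, not Setanta's.
AFFINITY = {
    "Premier League": ["LaLiga", "Ligue 1"],
    "LaLiga": ["Premier League", "Serie A"],
    "UFC": ["Formula One"],
}

def promotions_for(watch_history: list[str]) -> list[str]:
    """Return competitions to promote, skipping ones the user already watches."""
    seen = set(watch_history)
    promos: list[str] = []
    for league in watch_history:
        for related in AFFINITY.get(league, []):
            if related not in seen and related not in promos:
                promos.append(related)
    return promos

print(promotions_for(["Premier League"]))  # ['LaLiga', 'Ligue 1']
```

Per-user rules like these, driven by registration data such as phone number and email, are what separate individualized promotion from coarse segment-level campaigns.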

What technological tools are you using for these processes?

In fact, these are solutions that we are researching in-house and that, once we reach the optimum point, we will commission from other companies. We are doing this because the current market offering did not cover all our needs. For example, some companies’ solutions were useful for marketing automation but did not include content personalization. Thanks to this research, we completed a technical document with all our needs, sent it to companies and, as a result, one of them will create the most suitable tool for us.

How is the content protected and how is the security of user data guaranteed?

We have world-standard security measures in place. We protect the digital rights of every piece of content that we offer to our audience (DRM), as well as end-user data.

We are fully compliant with GDPR.

We care about users and their privacy as we care for our own. It’s a challenging topic; however, we follow all applicable privacy standards, and by being GDPR compliant we are raising the bar for other service providers in those non-GDPR countries.

What has been the biggest technological challenge faced by the platform?

The technological advancements that well-developed countries have seen in the last few years are not yet a given in the majority of the countries we operate in. This introduces different layers of technological challenges that you need to tackle and resolve so that you deliver, from grass to glass, quality in accordance with our benchmark. It requires a lot of fine-tuning of your hardware and software capabilities to make sure quality adapts equally well for any user at any point in any of our countries, and that all services are available 24/7/365. We worked hard to achieve minimum delay for live streams, the highest applicable quality for each individual user, and the freedom to choose from where they want to onboard, or enjoy the platform otherwise.

What are Setanta Sports’ future plans?

The first is technical development: how to offer more features to users.

The second is moving users to the OTT platform to increase our subscriber base. We are trying different ways, from B2B to B2C. For example, our TV channels now reach up to 25 million households, and we understand this is a great opportunity to use them to promote our OTT platform. That is the main idea: to bring all these people onto our platform.

We also want to get these numbers up through diversification. We don’t want to focus on just one industry; we want to cover much more, for example supermarkets.

That’s really interesting. How are you going to do it?

Well, we are not only considering it for retail stores; we are also thinking of introducing it in insurance companies, construction companies, real estate companies, etc. There are many business opportunities here, because many companies can offer their customers subscriptions to our platform.

We do it in the following way: when we approach another company and see that they have, for example, a customer loyalty card, we offer them subscriptions at a big discount so that their users can benefit from it, or we give them the subscriptions and tell them the price we want to receive from them, always with a discount.

What makes Setanta Sports different from other competitors?

When we look at our competitors, we look at both the big players and the OTTs that cover our region. The biggest advantage we have, in the countries where we operate, is that we are the only ones with the full range of content in the market. There is no other OTT platform where you can watch LaLiga, the Premier League, Ligue 1, UFC and Formula One together.

Even if we compare ourselves with the big players, it is difficult for them to have all kinds of content available. Moreover, in those cases the distribution rights are not exclusive, so the competition between them is even greater.

Thanks to covering so many regions, we have the best prices and the best offers. This is how we are gaining a competitive advantage over the others.

SPORTS

The BBC after the European rugby championship

Six Nations Championship

The Six Nations Championship annually pits Europe’s most powerful rugby teams against each other: Scotland, France, Wales, England, Ireland and Italy. Originally contested only by the British nations, the championship excites many rugby fans in the British Isles, as well as in France, which definitively joined the competition from the 1947 edition, and in Italy, a country that has been fighting for the title since 2000. The competition’s tradition and its historic sporting rivalries mean that this European tournament attracts a great number of followers on the old continent.

In fact, in Great Britain the distribution rights are shared between two of the country’s main broadcasters: the BBC and ITV. In this interview with Matthew Griffiths, Series Producer at BBC Television, you will find the details of the production that the BBC will use to bring the thrilling action of this sport to British homes.

Since the last edition of the championship held in 2020, which was particularly remarkable for its broadcasting innovations, what has changed for this 2023 edition?

For this year’s Championship, broadcasters and the Six Nations have worked together as usual to bring viewers closer to the action. This year, for the first time, there will be dressing room cameras at all the stadiums.

It was public knowledge that the 2020 edition was broadcast in HD. Have the systems already been upgraded to broadcast in UHD on any of the broadcast channels?

BBC Sport will continue to cover the Six Nations in HD.

In the 2020 edition you also did not introduce remote production models. What was the reason back then?

With the scale of each Six Nations match, we felt the benefits of producing the matches on site outweighed the benefits of producing them remotely. So we will continue to produce them locally.

BBC Sport will be deploying an on-site, OB-vehicle-based model for this edition of the Six Nations.

What are the needs of rugby broadcasting and how can technology help meet them?

To successfully broadcast a Six Nations rugby match, you need to be able to convey all of the action on the pitch as well as the atmosphere and the sense of occasion that comes with matches on this scale. Our matches are covered by over 30 cameras, including 4 RFs, 4 remote mini cameras in the corner flag posts, 2 HiMo cameras, a drone and a 360 wire cam. We utilise 8 replay lines.

What technical infrastructure and manpower will the BBC deploy to cover the Six Nations tournament?

At each match, BBC Sport will deploy two OB units, one for match coverage and one for the BBC unilateral presentation. Connectivity will be achieved through a mixture of fibre and satellite facilities to provide diverse, redundant transmission circuits. All of this will be expertly engineered and operated by over 70 crew members.

What are the particularities of your capture systems? What technologies have you deployed and what formats are you working in?

BBC Sport, through our facilities partner EMG, will be utilising a mixture of Sony camera models to capture the coverage, including HDC-2500 and HDC-4300 camera channels capturing angles at a mixture of 2x, 3x and up to 6x speeds, along with a Spidercam system.

To enable BBC Sport to get ‘closer’ to the action, they will be deploying RF mini cameras in the try-line flags, using camera systems developed by Broadcast RF.

These systems all work in 1080i.

All replay requirements will be facilitated by a mixture of EVS servers.
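
For readers unfamiliar with the capture-speed multiples quoted above, here is a short sketch of the arithmetic, assuming a 50Hz (1080i/25) UK production; the mapping is our illustration, not a BBC specification:

```python
# An Nx camera channel captures N times the base motion rate of the
# production; replayed at the base rate, it yields Nx slow motion.
base_rate_hz = 50  # motion rate of a 1080i/25 (50-field) production
for mult in (2, 3, 6):
    capture_fps = base_rate_hz * mult
    print(f"{mult}x channel: ~{capture_fps} fps capture, {mult}x slow-motion replay")
```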

What kind of graphics infrastructure do you have?

The world feed match graphics are designed and provided by AELive and driven by Vizrt machines.

What will be the distribution of the championship? On what channels will it be broadcast and on what infrastructure will it be based?

All of our Six Nations matches are broadcast on BBC1 and iPlayer.

Distribution and production rights are shared among different broadcasters. The BBC will be in charge of production and, consequently, of all matches played on home pitches in Scotland and Wales.

In addition, the BBC has acquired exclusive rights to the women’s Six Nations and the men’s under-20 tournament. 

TEST ZONE

Progressing and improving video: 4:2:2 10-bit up to 6.2K/30P

FUJIFILM X-H2S

Lab test by Carlos Medina, Audiovisual Technology Expert and Advisor

The appearance of new technologies in the field of imaging, whether still or moving, always generates uncertainty among manufacturers, doubts among users and technical change in the devices used. The breakthrough of smartphones has undoubtedly been one of the factors that has completely changed the traditional camera market.

The analysis study Digital Camera Market: Growth, Trends, COVID-19 Impact and Forecasts (2022-2027) presents a horizon where high-end DSLR and mirrorless cameras will not be replaced by smartphones.

FUJIFILM, a specialist manufacturer in the audiovisual industry, continues to be among the best-regarded manufacturers in professional photography and video. We had the privilege of getting to know first-hand the FUJIFILM X-H2S, a professional photography and video camera with high technical performance, together with the FUJINON XF 18-120mm F4 LM PZ WR lens. The result has been high-quality images, with very realistic color and great detail.

This X-H2S camera is an EVIL (Electronic Viewfinder with Interchangeable Lens) mirrorless camera and therefore uses a different system from DSLR cameras. It is a member of the FUJIFILM X series, with APS-C format and interchangeable optics. This camera was introduced in late May 2022.

The magnesium body of the X-H2S camera is sturdy, compact and perfectly sealed, so it can work in nearly all possible situations. Its build, well-designed handle and weight (just over 600 grams including battery and card) make it easy to use, enabling quick, intuitive handling when configuring and shooting/recording.

Let’s go over some of the highlights of this design.

First of all, it features a single mode dial (AE, S, A and M programs; custom settings C1 to C7; Filter and Video modes); an upper LCD monitor where the different settings of the photo shot or video recording are presented; several quick-access buttons to activate and/or set parameter values (ISO, WB, AF ON, AEL…); front and rear control dials; separate photo shooting and video recording buttons; two capture holes for the built-in front microphone; an on/off switch; a hot shoe for accessories; and the menu button and navigation arrows that are now so common in this type of photo/video camera.

Secondly, in this very compact body we must praise FUJIFILM’s wise decision on connectivity for the X-H2S: HDMI Type A, a 3.5mm microphone jack, a 3.5mm headphone jack and USB Type C.

Thirdly, it features a rear vari-angle folding LCD monitor with 1.62 million dots that also works as a touchscreen, and an electronic viewfinder (EVF) with 0.8x magnification and 5.76 million dots at 120 fps that offers great detail and precision when framing. In addition, this model has some novel options, such as the possibility to choose between two vertical grips (the VG-XH, which provides greater battery capacity, and the FT-XH file transmitter grip, which also offers better communication) or an external fan (FAN-001) that allows recording at high temperatures without the system being affected: it is placed in the space that remains when the rear LCD monitor is completely unfolded.

The X-H2S camera is not only good design and construction; the best is inside. It has an impressive stacked, back-illuminated 26.16 Mpx X-Trans CMOS 5 HS sensor (the 5th generation of the X series) and a high-performance processor, the X-Processor 5. These two components put this model in the spotlight for all those professionals who need to shoot bursts of still images and/or high-quality, high-speed video.

In still photography’s burst mode, you can apply AF/AE tracking in continuous shooting, without blackouts, at up to 40 frames per second, and you can shoot more than 1,000 frames continuously when setting the high-speed burst mode to 30 frames per second (JPEG) or 20 frames per second (RAW) with the electronic shutter.

Seven stops of image stabilization are also offered thanks to the five-axis stabilization system in the camera body, which, together with the new AF and subject tracking/detection system, gave a very stable visual result in the various tests we performed.

A major step forward is the camera’s ability to track a moving subject and focus precisely in area AF or in low-contrast conditions thanks to deep learning technology, which automatically detects and tracks not only human faces/eyes, but also animals, birds, cars, motorcycles, bicycles, planes and trains to maintain autofocus.

Aware of the importance of the still photo storage format, FUJIFILM offers three possibilities: 14-bit RAW; HEIF (High Efficiency Image File Format, 10-bit 4:2:2, allowing files up to 30% smaller than standard JPEG files); and JPEG (8-bit 4:2:2).

This camera has plenty of features, but where we see great improvements is in its video recording performance and in the cooling through heat dissipation integrated in the body. The camera internally supports 10-bit 4:2:2 video recording at 6.2K/30P, 4K/120P and FHD. The sensor’s readout speed during video recording has been improved to 1/180 of a second, thus minimising the rolling shutter effect.
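
To see why efficient codecs matter at these settings, here is a rough calculation of the uncompressed data rate of a 6.2K/30P 4:2:2 10-bit signal, assuming a 6240x4160 frame (our arithmetic, not a FUJIFILM figure):

```python
# Uncompressed 4:2:2 10-bit data rate at 6.2K/30p, assuming 6240x4160.
width, height, fps = 6240, 4160, 30
bits_per_pixel = 2 * 10  # 4:2:2: Y plus half-rate Cb/Cr = 2 samples/px, 10 bits each
gbps = width * height * bits_per_pixel * fps / 1e9
print(f"~{gbps:.1f} Gbit/s uncompressed")  # ~15.6 Gbit/s
```

Hence the reliance on ProRes-class compression and fast recording media.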

Compatibility with three Apple ProRes codecs makes this camera model highly regarded for the most demanding professional jobs: ProRes 422 HQ, ProRes 422 and ProRes 422 LT. In addition, the HDMI Type A output allows external recording of 12-bit ProRes RAW at sizes up to 6.2K and frame rates up to 29.97 fps.

We also rate this camera very positively for professional video environments because of its new F-Log2 profile, with an extended dynamic range of more than 14 stops, which allows much smoother tonal expression.

This ability to process and store video data is only possible because the X-H2S supports recording media that can sustain these high information flows: it has one slot for CFexpress Type B memory cards and another for SD UHS-II cards. To give more detail on the performance of these cards, we used a 325GB ProGrade CFexpress card, which allows storing 15 minutes of 6.2K 25p footage; 49 minutes of 4K 25p; and 3 hours 14 minutes of FHD 25p.
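
Those capacities can be sanity-checked by deriving the implied average bitrate of each mode from the stated durations (our arithmetic, based on the figures above):

```python
# Implied average bitrate for each recording mode on the 325 GB card.
card_gb = 325
for mode, minutes in [("6.2K 25p", 15), ("4K 25p", 49), ("FHD 25p", 194)]:
    mbps = card_gb * 8000 / (minutes * 60)  # 1 GB ~= 8000 Mbit
    print(f"{mode}: ~{mbps:,.0f} Mbit/s")
# -> ~2,889 / ~884 / ~223 Mbit/s, consistent with high-bitrate ProRes recording
```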

Finally, the camera settings and menus are clearly organised: photo shooting, video recording, image quality adjustment, AF/MF, shooting modes, flash, movie adjustment, time code, playback, user settings, and sound and screen settings, among others.

The X-T3, X-T4, X-Pro3, X-S10, X-T30 II and X-H1 are some of FUJIFILM’s best-known models, and with this X-H2S camera we once again have to take this manufacturer into account.

FUJIFILM has really done its homework, considering its previous models and knowing the solutions offered by other manufacturers. The FUJIFILM X-H2S camera has been designed to make video recording work easier and more enjoyable, without fear of making mistakes. FUJIFILM’s goal is clear: to remain in the photography and video sector under a single motto: progress and improve. 
