EDITORIAL
The market frequently shifts in response to international events that affect an entire industry. Trade shows are, no doubt, an important dynamic element that all companies related to the broadcast world need, both for showcasing novelties and for getting to know them first-hand or negotiating purchases. After a year with hardly any trade shows, positive news is beginning to reach our newsroom, not only because vaccination is gathering pace in most countries, but also because exhibition activity is beginning to recover. In this sense, it is worth highlighting the news related to IBC which, according to an official statement, maintains its plan to hold the next edition in September this year; this is to be finally confirmed in a few weeks. The same goes for the Bitam Show, a fair organized by this publishing group and one of the most relevant in southern Europe, which has also recently confirmed its 2021 edition for November. The contrast to this news comes from ISE (a fair somewhat removed from the pure broadcast sector) which, according to an official statement, is cancelling its events in Amsterdam and Munich this year. Time will confirm or deny these developments, but it seems clear that the current situation is, fortunately, beginning to move away from the uncertainties generated in March of last year. Naturally, we continue working tirelessly to bring our readers the best content every month. Proof of this is in the articles that follow. In them you will find truthful, top-level and, of course, previously unpublished information.
Editor in chief: Javier de Martín (editor@tmbroadcast.com)
Creative Direction: Mercedes González (design@tmbroadcast.com)
Key account manager: Susana Sampedro (ssa@tmbroadcast.com)
Administration: Laura de Diego (administration@tmbroadcast.com)
Editorial staff: Marcos Cuevas (press@tmbroadcast.com)
TM Broadcast International #93 May 2021
TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43 Published in Spain ISSN: 2659-5966
SUMMARY
6 News
18 5G:
5G-MAG: Addressing the future needs of media distribution with 5G
5GMediaHUB: A space to accelerate 5G implementations
5G PPP: Edge Computing for 5G Networks
38 Sooner
Sooner is a relatively young OTT that is growing at a high pace. It's available in the Benelux, Germany, Austria and Switzerland, and it's looking forward to expanding its service to other European countries, amplifying its content beyond cinema and series.
46 FilmDoo
FilmDoo is not a usual film OTT platform. Since its beginning, its aim has been to spotlight independent films from all around the world. Movies that are not on the classical distribution channels find their way through FilmDoo.
54 VoIP: Exploring ST2110 and ST2022
60 IP Infrastructures: Phases, challenges, and technical keys in RTVE's project in Sant Cugat
RTVE's Sant Cugat studios face an unprecedented challenge in the Spanish television scene: completing the transition from SDI technology to IP technology.
82 Software reduces technical vs. creative tension for video production
86 Ross Emery: A DOP raised by cameras
NEWS - PRODUCTS
EVS releases XtraMotion, its cloud service for super slow-motion replays
EVS has announced the release of XtraMotion. The new cloud service uses Artificial Intelligence to create super slow-motion replays. This solution has been tested successfully by Fox Sports over the last months. Normally, native super slow-motion sources cannot be deployed in every environment: most point-of-view cameras are not capable of producing super slow-motion video, and the cost of this infrastructure usually makes it unaffordable. Now, with XtraMotion, productions benefit from super slow-motion on virtually every camera angle, as the high frame rate is created on-demand rather than natively. Relying on machine learning algorithms developed by EVS, the process works for any production format – from 1080i to UHD/HDR – and any original framerate, transforming a
60fps video into 180fps video. It also works on post and archived content. XtraMotion was first evaluated by FOX Sports at Super Bowl LIV last year, to convert standard frame rate clips from the specialty cameras into high frame rate footage. It has since been used on other productions including MLB’s World Series in Arlington, Texas in November 2020, and on a more regular basis for live coverage of the
NFL, NCAA College Basketball and NASCAR. “EVS’ XtraMotion has continued to show a lot of potential since the first tests were conducted during last year’s Super Bowl, so it seems only natural to use it on more of our productions moving forward,” said Mike Davies, SVP, Field and Technical
Management and Operations, Fox Sports. “Storytelling is always top of mind at FOX Sports and applying the XtraMotion effect onto our increasing number of specialty cameras really helps us give that extra visual wow factor to our productions.”
Christophe Messa, Product Manager at EVS, said: “FOX Sports believed in XtraMotion from the very beginning and we have used each of our interactions with their experienced team during live events to improve our solution. We are now very proud to see it being deployed for its productions on a regular basis.”
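EVS has not published XtraMotion's internals, so the following is only a conceptual sketch of on-demand frame-rate up-conversion: it uses classical OpenCV optical flow rather than EVS's machine-learning models, and the function name is our own. Generating two synthetic frames between every pair of 60 fps source frames yields the 180 fps ratio mentioned above.

```python
# Illustrative only: motion-compensated frame interpolation with OpenCV, standing in
# for the (unpublished) ML approach XtraMotion uses. 60 fps -> 180 fps means two
# synthetic frames between each pair of source frames.
import cv2
import numpy as np

def interpolate_pair(frame_a, frame_b, num_new=2):
    """Return num_new synthetic frames between frame_a and frame_b (BGR images)."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # Dense optical flow from frame_a to frame_b (default Farneback parameters).
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    frames = []
    for i in range(1, num_new + 1):
        t = i / (num_new + 1)                               # fractional time of new frame
        map_x = (grid_x + flow[..., 0] * t).astype(np.float32)
        map_y = (grid_y + flow[..., 1] * t).astype(np.float32)
        # Resample frame_a part-way along the estimated motion field (a crude warp).
        frames.append(cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR))
    return frames
```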
Versio IOX, the latest storage solution from Imagine Communications Imagine Communications has released Versio IOX Express NAS, its new storage solution. This device offers all the tools needed for simple and effective storage management, whether onsite or working remotely. Powered by the EditShare EFS solution, built on Hewlett Packard Enterprise (HPE) enterprise-grade hardware, Versio IOX Express is designed to sustain prolonged operations. True shared
storage access eliminates the need for resource-intensive asset transfers. According to Imagine's statement, when coupled with the parity stack and RAID protection, the new solution guarantees clients a recovery time of less than 20 ms, providing uninterrupted service to on-air playback servers or editing applications. Versio IOX Express is offered in predefined bundles, allowing users to choose the capacity and parity stack topology that suit their needs. “Our goal in developing Versio IOX
Express was to hit all the technical requirements we know broadcasters demand, but do it at an unprecedented price point,” said Steve Reynolds, president at Imagine Communications. “With the latest addition to our IOX portfolio, we can now offer our customers affordable, high-availability storage, delivered as defined, packaged solutions with all the software installed. All they need to do on site is connect the network and start delivering content.”
Telestream releases latest PRISM Monitoring Software with new audio tools
This software upgrade enables support for international loudness standards for worldwide regulatory compliance. It has adjustable loudness monitoring threshold alarms/alerts and simultaneous metering and Lissajous display.
The new software adds significant audio support to an already broad range of IP and SDI audio tools. Designed to be used in video engineering, operations, live acquisition and event production and post production, PRISM now supports 4K- and 8K-compliant 32-channel audio monitoring. For IP systems standardizing on ST-2110 or SDI workflows looking to expand into UHD, the new capabilities can be easily added to any PRISM model with no hardware modifications. PRISM is a unique software-defined monitoring instrument which can be easily updated and advanced via software.
New PRISM feature highlights include comprehensive audio support for IP environments including 2110-30 and 2110-31 with support for compressed Dolby audio. Also, it features local headphone and speaker output via IP-to-SDI conversion with 5.1 / 7.1 downmix; as well as Dolby D/D+ and Dolby E decode with Dolby status support and an integrated RTW surround display.
“We've been a leading provider for the largest ST-2110 deployments and have worked closely with industry experts to meet expectations for multichannel audio monitoring and regulatory compliance. We are proud that customers view our relationship as one where their success and the success of PRISM's audio monitoring tools are so closely aligned. Thanks to these partnerships, our capabilities are wide-ranging and both 4K and 8K compliant,” said Charlie Dunn, SVP of the Tek Video Business Unit at Telestream.
Red Bee adds boxing content to its platform with Fightzone
Red Bee will add Fightzone to its OTT platform, providing a worldwide streaming service for boxing content. Fans of the sport will enjoy more than 50 live events per year, with the service premiering on May 21st. Red Bee's OTT platform allows the new channel to provide multiple simultaneous live streams at broadcast quality, through pay-per-view and subscription-based options. “Red Bee is the perfect technology partner for us, as we are launching the Fightzone boxing revolution,” says Jim McMunn, General Manager, Fightzone. “We are building a unique destination for British and international boxing fans, with access to incredible fights, expert interviews and behind-the-scenes content. Red Bee's OTT platform allows us to reach a worldwide
audience and deliver broadcast grade streaming quality on any device.” All fights, related highlights, interviews, and other content will be made available on demand on Fightzone.uk. The content is protected through advanced geoblocking and DRM functionality, helping Fightzone segment their audience and monetize their content rights to the fullest. See www.fightzone.uk for a complete schedule and upcoming fights.
“Fightzone illustrates well how innovative and expansive content owners can make the most out of their assets and bring engaging viewing experiences to a global audience through our OTT platform,” says Steve Russell, Head of Product, Red Bee Media. “The obstacles are fewer than ever when it comes to creating and expanding your own media business, and we're looking forward to supporting Fightzone's journey to global success.”
NEWS - SUCCESS STORIES
Vizrt supports AWS for Media & Entertainment initiative
AWS for Media & Entertainment is an initiative featuring new and existing services and solutions from AWS and AWS Partners, built specifically for content creators, rights holders, producers, broadcasters, and distributors. AWS adds the newly announced Amazon Nimble Studio, a service that enables customers to set up creative studios in hours instead of weeks, to a portfolio of more purpose-built media and entertainment industry services than any other cloud, including AWS
Elemental MediaPackage, AWS Elemental MediaConnect, AWS Elemental MediaLive, AWS Elemental MediaConvert, and Amazon Interactive Video Service (IVS). Daniel Url, Head of Product Management, Vizrt Group, said, “Vizrt has been building purely software-based solutions since the beginning of the company's history, in line with our SDVS (software defined visual storytelling) philosophy. AWS and the cloud have given our customers a
real boost in efficiency in recent years. Our Flexible Access software plans and NDI video-over-IP capabilities, combined with AWS Media Services really help our customers exploit the benefits of cloud and remote production workflows. AWS is both an integral part of our cloud strategy, and an inspiring R&D sparring partner.” The Vizrt Production Control solution suite, available through Flexible Access, uses IP connectivity to achieve instant access to, and seamless interchange with sources from, anywhere across the network in real time. Operation of the solution can occur from any compatible desktop or mobile device remotely, even in virtual environments.
Supershooter 5, the new 4K-capable mobile unit from NEP
NEP Group has announced the release of Supershooter 5, its new mobile unit. The unit supports 4K and is 1080p-ready. It is the first fully IP unit that uses the new broadcast control and monitoring system developed by the company. The solution made its debut at The Masters golf tournament in early April. It features a Grass Valley Kayenne X-Frame switcher, a Calrec Apollo audio console, Sony HDC-3500 cameras, and EVS XT VIA servers. “Though the footprint and technology in this unit are
nearly identical to others in our fleet, our new broadcast control system really makes Supershooter 5 a next-generation truck,” says Glen Levine, President of NEP's U.S. Broadcast Services group. “The adoption and explosion of IP technologies and the shift towards centralized production workflows have been revolutionary for the industry. With our broadcast control system, we can really harness the power of this technology while reducing complexity and speeding up the configuration process. It is really transformative and will help us bridge the gap
between existing and future technologies.” Andrew Jordan, NEP’s Global CTO, adds, “This broadcast control system also starts moving us to a place where our mobile units, flypacks, studios and centralized production facilities will all be on the same platform – making us much more flexible, adaptable, and agile for our clients. It was developed entirely in-house by a fantastic team of software and broadcast engineers and is going to be a key component of our future technology development.”
French network M6 upgrades its routing infrastructure with Riedel's MediorNet
(Photo: Riedel's MediorNet provides a decentralized network for M6's video routing upgrade.)
Riedel Communications has announced an agreement with Metropole Television (M6 Group) to provide its MediorNet real-time video networking topology. This technology will upgrade the routing infrastructure of the network, allowing a decentralized hybrid infrastructure. “With disparate technologies built up over the years, we started struggling with a lack of interaction between routing, processing, and signal distribution within our various production islands,” said Mathias Bejanin, Chief Technical Officer, M6. “We were motivated to take a more holistic, global approach to our networking architecture and adopt a comprehensive and yet decentralized networking technology that would make our entire infrastructure up-to-date and future-proof.” Bejanin added, “We have had outstanding results with Riedel's intercom
solutions for many years, so it made logical sense to take a closer look at their MediorNet video portfolio when this project arose. Right away, we could see that MediorNet would deliver on all the benefits of real-time networks, such as resilience, multicast routing, scalability, and modularity. At the same time, MediorNet offers all the processing intelligence required in cutting-edge broadcast facilities while still being very accessible for operations and service teams.” The new MediorNet installation is at the core of
a two-year upgrade project currently underway at M6 that began with two studio production galleries and is now focused on the central infrastructure router, managing incoming and outgoing signals, as well as exchanges with production islands. An additional production island is to be integrated later this year. “The unique, hybrid nature of MediorNet is a tremendous asset for broadcast operations such as ours that are looking to make a smooth and safe transition towards IP,” said Franck Martin, Head of Engineering, M6.
Blackbird wins 18 more US TV stations with TownNews
Blackbird is being used by 18 more US TV stations for digital news production following TownNews' latest deployment. This is the fifth expansion of Blackbird by TownNews since the partnership began in 2018. 69 US TV stations now use Blackbird remotely and safely to rapidly access, edit and publish news content fast to social and web platforms. When paired with TownNews' Field59 VMS, news stories can be delivered to viewers with unbeatable speed and control using Blackbird's browser-based, carbon-efficient, cloud-native video editing platform. US TV stations using Blackbird now span 35 states.
Over 350 local TV stations in the US produce approximately six hours of news content per day, with the industry generating nearly $31 billion in revenues annually. With an estimated 43% of US adults sourcing their news from websites and social media, millions of viewers in the US now consume content through the Blackbird and TownNews' Field59 VMS platform.
Derek Gebler, Vice President of Broadcast and Video for TownNews, said: “Blackbird's high-quality, frame-accurate cloud editing software gives our broadcast customers a huge competitive advantage. The growth of the Blackbird TownNews partnership over the past three years has been extremely exciting, and we look forward to continuing our mutual collaboration for a long time to come.”
Blackbird CEO, Ian McDonough, commented: “Congratulations to TownNews on their new station wins. We are thrilled to expand our services with them to 69 news stations across 35 states from the 2 we started with back in 2018. This is an excellent example of the ‘Blackbird inside’ OEM strategy working and a superb endorsement of our technology in the fast-paced environment of news production.”
Corrivium selects Grass Valley's AMPP to stream its virtual events
Grass Valley has been selected by Corrivium, an Australian live event streaming expert, to provide its GV AMPP technology. The solution will support the streaming of virtual events, replacing an in-house system developed by the company.
“During the last year, almost all physical events have moved to the virtual world. A consequence of streaming becoming centre stage in the events industry is that high-quality production values became more important than ever for our customers,” commented Corrivium's founder and director, Steve Jones. “Looking for technology that could produce the highest quality event streaming, it was natural for us to seek solutions from broadcast technology leaders. The GV AMPP platform stood out to us. As well as meeting our demands in terms of video and production values, GV AMPP also simplifies workflows, reduces the total event budget and provides extra functionality to expand our service offering.”
Greg de Bressac, Grass Valley's vice president of sales, APAC, added: “Our GV AMPP solution has been designed to offer all the capability of the very best hardware production solutions through a cloud-based SaaS model. For events companies such as Corrivium, this simplifies workflows, reduces set-up time and represents great value without impacting future functionality, a key advantage of the GV AMPP SaaS model. We enable access to any broadcast production technology, and combining this with our production expertise and our broadcast-grade support service makes a powerful package. GV AMPP means that Corrivium can elegantly scale production functionality and costs to stream some of the world's most prestigious events on budget and at high quality every time.”
NEWS - BUSINESS & PEOPLE
New leadership at Tracab
Tracab, a Chyron brand, has announced the promotions of David Eccles and Martin Brogren to co-general managers of Tracab. Brogren, who has been dedicated to the development of tracking technology and has led many of the company's innovations over the past decade, will now manage research, engineering, and product strategy. Eccles, who led commercial and business development activities for sports video and data analysis, will now manage Tracab's business development, marketing, and sales functions. "We have an exciting time ahead of us here at Tracab," said Brogren. "Deep learning and better GPUs and cameras open up great possibilities to accelerate the value we bring to leagues, teams, and fans. We will create fantastic products in the coming years through skeletal tracking and automatic event detection on top of the real-time tracking with which Tracab has led the market since 2006." "I'm tremendously excited about the opportunity to co-lead the Tracab business alongside my colleague, Martin," said Eccles. "With backing from the CEO and the board, we have the opportunity to execute some really powerful initiatives. I am passionate about sports and sports technology, and I believe we have only just begun to understand the true value that video and data can play in our industry."
JW Player acquires VUALTO
JW Player, a video software and data insights platform, has announced it is acquiring VUALTO, a provider of live and on-demand video streaming and Digital Rights Management (DRM) solutions. The acquisition deepens JW Player's already robust offering to global broadcasters and further accelerates its vision to empower customers with independence and control in today's Digital Video Economy by offering easy-to-use, scalable video technology. This acquisition arrives as the consumption of digital video continues its push into the mainstream. “Over the past two years, digital video has become ubiquitous. We now live in the Digital Video Economy, and as a platform company that empowers our customers with independence and control, JW Player is uniquely positioned to succeed in this environment,” said Dave Otten, CEO and co-founder of JW Player. “Joining forces with VUALTO further solidifies our position. Their world-class technology stack expands our platform to include broadcast-level live streaming and content protection services, which are critical for today's customers. We could not be more excited about this partnership and look forward to innovating together with the highly-talented VUALTO team.”
Dan Marshall joins Signiant to lead global sales Dan Marshall has been appointed to the newly created role of Chief Revenue Officer. Reporting directly to CEO Margaret Craig, Marshall is now responsible for driving all of the company's global sales activities. He assumed the role on May 3, 2021. Marshall brings many years of experience building and driving high-performance, customer-centric sales organizations in the media technology sector. He joins Signiant from Amazon Web Services (AWS), where he led several large teams involved in implementing media workflows in the cloud. Marshall joined AWS in 2015 via its acquisition of Elemental Technologies, a pioneer in multiscreen video streaming technologies, where he led the sales organization from the start-up phase through to market leadership. Prior to Elemental he managed sales and other customer-facing functions for Omneon, where he played a pivotal role in growing a global business that transformed the video server and storage market.
Dejero announces successful recapitalization to accelerate growth
Dejero has announced a $60 million minority recapitalization of the business, led by Vertu Capital. Vertu partnered with Ubicom Ventures, a special purpose investment entity in partnership with Dejero management, to complete the transaction and pave the way for accelerating the company's growth. Founded by Bogdan Frusina in 2008, and led by Bruce Anderson, CEO, Dejero is headquartered in Waterloo, Ontario, one of Canada's top tech hubs, with offices in the US and UK, and a global distribution network. The company's live video and real-time data transmission solutions are used by leading broadcasters, production companies, public safety and government agencies, along with enterprise customers around the globe who need uninterrupted connectivity for their critical applications. “Canada's technology ecosystem is thriving, and we are excited to be playing a vital role by actively partnering with premier companies that are at critical points of inflection in their growth and scale,” said Lisa Melchior, founder and managing partner of Vertu. “Dejero exemplifies a Vertu investment; the company has an established world-leading technology solution and is accelerating its expansion into multiple global markets. We are thrilled to be partnering with Bruce, Bogdan and the entire Dejero management team on this exciting new chapter for the company.”
5G
Addressing the future needs of
media distribution with 5G
Author: Jordi J. Giménez – 5G-MAG Head of Technology
Delivering television, radio services and new formats to all sorts of devices, including smartphones, tablets, connected cars or smart TVs, is becoming more and more important. Users, in particular young audiences, make increasing use of such devices both at home and on the move. The delivery of content over the internet is currently based on over-the-top (OTT) platforms where media traffic is not treated in any particular way, and is therefore handled like any other data download. These platforms suffer, however, from a lack of scalability, as content delivered using unicast connections cannot easily be offered to large audiences without suffering some kind of degradation. This way of delivering content is natural for on-demand content, as every user can select the desired programme at the specific time they want to consume it; multiple independent connections are unavoidable, although some efficiency might be gained by means of caching mechanisms. However, when it comes to live or linear content, where users connect and receive exactly the same programme at the same time, possibilities for improving delivery efficiency are worth considering.
The situation becomes challenging in mobile networks given the scarcity of radio resources, which need to be shared by many users, as well as additional issues such as interference from other cells or the need to guarantee mobility among them. At the same time, consumption habits are changing, with OTT platforms not only being used for on-demand consumption but also becoming reference platforms for the consumption of live events such as sports. 3GPP, the global mobile technologies standardisation organization, is working on solutions under the umbrella of 5G, with different technical solutions to address the future needs in terms of media distribution. Among them, Terrestrial Broadcast using 5G Broadcast, multicast as a network optimisation tool, and the media streaming architecture to enable different collaboration scenarios between different stakeholders are alternatives being developed. 5G technologies will therefore be able to support the distribution of media services as a combination of linear (e.g. current TV and radio services) and non-linear (e.g. on-demand, podcasts, …) components. In particular, these tools may enable services to reach the final users with a higher degree of control, ensuring an end-to-end guarantee of quality of service.
An insight into LTE-based 5G Terrestrial Broadcast
LTE-based 5G Terrestrial Broadcast, commonly referred to as 5G Broadcast, was specified to fulfill requirements for TV and radio broadcasting. The system grants service providers control over linear content delivery, enables radio carriers to be configured with almost 100% capacity for broadcast services, and also enables large-area Single Frequency Networks (SFN) with topologies beyond cellular networks. All this is accompanied by a significant change: neither uplink capabilities nor registration with the provisioning network are required to consume broadcast content. This eliminates the need for a SIM card and effectively enables free-to-air reception. As this broadcast system is part of the 3GPP family of standards, it can be fully integrated into 3GPP equipment, with the same chipset architecture, and even complemented by mobile broadband data. LTE-based 5G Terrestrial Broadcast includes features to support:
• Receive-only mode, supporting free-to-air services as well as encrypted services, including authentication mechanisms;
• Dedicated HPHT, MPMT and LPLT broadcast networks;
• Single-Frequency Networks (SFNs);
• Fixed, portable and mobile reception;
• Quality of service (QoS) defined by service providers;
• Standard APIs for easy design and integration of media services in applications on devices.
LTE-based 5G Terrestrial Broadcast could be used to distribute public and commercial linear TV and radio services, encrypted and unencrypted (free-to-air), to 3GPP-compatible devices such as smartphones, smart TVs, or car infotainment systems. It also enables hybrid TV/radio offers by delivering linear broadcast content alongside catch-up and on-demand as well as addressable TV services, using the same family of standards. The system can integrate broadcast distribution of linear TV and radio services into existing media applications with 3GPP-defined APIs. 5G Broadcast represents a pragmatic approach to broadcasting based on 3GPP technologies in order to reach portable and mobile devices.
An insight into the 5G Media Streaming Architecture
Leveraging the high potential of 5G enhanced mobile broadband connectivity, with increased data rates and low latency, and new network features such as network slicing or edge computing, 3GPP is developing a media architecture fully integrated within the 5G system. This aims at supporting the most recent advancements in terms of media and video content, providing augmented quality of service for traditional audio and video services as well as emerging formats for virtual/augmented/mixed reality. The 5G Media Streaming Architecture (5G-MSA) is the central system enabling different business arrangements between online media service providers (e.g. CDN providers), broadcasters, and mobile network operators. With the 5G-MSA, network and device functionalities are exposed to third-party providers, enabling the use of 5G capabilities in the best way to ensure an increased quality of service for connected users. The new architecture is a reality in 3GPP from 5G Release 16.
The 5G-MSA introduces the concept of trusted media functions, which are implemented in both the network and the user device, and also defines APIs to interface with external media servers and functions. This effectively means that functions commonly deployed outside the network domain can be integrated within it. ABR encoders, streaming manifest generators, segment packagers, CDN servers and caches, DRM servers, content servers for advertisement replacement, manifest modification servers, or even metrics servers can now be allocated within the 5G network to improve the delivery of the service. For instance, metrics collection and reporting may provide information related to the user experience; streaming sessions can be monitored on the user device and reported back to the service provider, or even to the network, where the information may be used for potential transport optimization within the mobile network. This opens the door to using 5G not just as a better 4G network but as an option shaped to cover future needs for media distribution and to resolve problems that IP distribution via mobile networks may currently be facing.
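To make the metrics reporting idea above concrete, here is a minimal client-side sketch. It is not the 3GPP-defined 5G Media Streaming API: the endpoint URL, field names and player attributes are hypothetical placeholders.

```python
# Hypothetical sketch of client-side QoE metrics reporting; the endpoint, fields and
# player attributes are illustrative, not the 3GPP-specified media streaming API.
import time
import requests

REPORT_URL = "https://metrics.example-provider.com/v1/reports"   # hypothetical endpoint

def report_session_metrics(session_id: str, player) -> None:
    payload = {
        "sessionId": session_id,
        "timestamp": int(time.time()),
        "bufferLevelMs": player.buffer_level_ms,            # assumed player attributes
        "representationBitrateKbps": player.current_bitrate_kbps,
        "stallCount": player.stall_count,
        "averageThroughputKbps": player.measured_throughput_kbps,
    }
    # The service provider (or the network) can use these reports for the transport
    # optimisation described in the article.
    requests.post(REPORT_URL, json=payload, timeout=5)
```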
An insight into mixed-mode multicast
Multicast may also play a role in 5G, understood as a network off-loading mechanism in mobile networks. This feature may address the need to make content distribution scale according to demand, therefore providing a sustainable quality of experience for content consumed by massive audiences. 3GPP is considering the introduction of multicast capabilities for the 5G system architecture in Release 17, initially targeting an architecture that fulfils requirements associated with IoT, Public Safety, V2X or IPTV, among others. This work also covers the radio access network (RAN), which should include the possibility to use multicast and broadcast at cell level or between a small group of cells. In this way, the 5G network may be able to
select the most appropriate delivery mode according to different circumstances, such as concurrent audience demand.
The 5G Media Action Group, shaping together the future of media
5G technology comes with a wide range of enhanced technical capabilities and network features such as edge computing or network slicing. While previous mobile technologies established a closed solution, 5G introduces a paradigm shift with respect to industry engagement, new services and business opportunities in many market sectors. The global media industry is one of these sectors, where opportunities are identified along the entire value chain. For the media industry it is important to ensure that upcoming 5G specifications support its needs and requirements. The 5G Media Action Group is an association bridging the global media and ICT industries. 5G-MAG members, drawn from the entire value chain, work together to understand and shape future technologies for media production, contribution and distribution. Analysing what 5G standards can do for media, defining new media use cases, and recommending how to use and implement technologies and networks are some of the key work areas of the association.
5G
5GMediaHUB: A space to accelerate 5G implementations
5GMediaHUB is a European initiative looking to speed up the testing and validation of 5G media applications and NetApps. Its aim is to give technological partners a fully equipped facility where they can develop 5G-based solutions quickly. Looking after the needs of these companies is Christos Verikoukis, the Project Coordinator of this consortium, who took the opportunity to tell us a little bit more about it.
What's 5GMediaHUB and how did it come about?
5GMediaHUB is an Innovation Action project funded in response to the ICT-41-2020 call by the European Commission. It is an initiative mainly promoted by three teams: eBOS, a hi-tech SME from Cyprus, led by Dr. Loizos Christofi; IQUADRAT Informatica, a hi-tech software house from Spain, led by Dr. Konstantinos Ramantas; and my team at the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), as a follow-up of the ICT-19 5G-Solutions project. I remember that we started the preparation of the proposal in December 2019 and we worked very hard under the difficult lockdown conditions. 5GMediaHUB aims to help the EU achieve its goal of becoming a world leader in 5G by accelerating the testing and validation of innovative 5G-empowered media applications and NetApps from 3rd-party experimenters and NetApps developers, through an open, integrated and fully featured Experimentation Facility. This will significantly reduce not only the service creation lifecycle but also the time-to-market barrier, thus providing such actors, which are primarily SMEs, with a competitive advantage over their rivals outside the EU.
What's your position and work within the project?
I have the honor to serve as the Project Coordinator; my role is to monitor the progress of the project's results and implement any necessary corrective measures. I am also the interface between the consortium partners and the European Commission.
You are working for the CTTC (Centre Tecnològic de Telecomunicacions de Catalunya). What are the main projects and activities you are developing now?
CTTC is a non-profit research institution based in Castelldefels, resulting from a public initiative of the Regional Government of Catalonia (Generalitat de Catalunya) under the umbrella of the CERCA Institute system. CTTC represents a technological infrastructure equipped with cutting-edge facilities and a highly talented staff, aiming to design the digital society of the future. Our mission is to fill the gap between industry and academia, to go beyond traditional fundamental research, and to offer innovative solutions to society. I am the Head of the SMARTECH Department, and I am leading a team of 14 researchers. We have a strong reputation both in EC-funded projects and in scientific output. In addition to 5GMediaHub,
we are currently coordinating 2 more 5G PPP projects (MARSAL and MonB5G, which propose cutting-edge solutions for 6G wireless networks) and one Marie-Curie Action. Moreover, we are involved in 8 more EC-funded projects, either offering innovative solutions or acting as technology providers. To name some of them, 5GRoutes, 5G-Solutions and 5G-EPICENTRE test different verticals over 5G infrastructures.
Back to 5GMediaHUB, what companies and organizations are involved?
The 5GMediaHUB consortium has been formed by a group of 17 organisations from 7 EU member states and 2 associated countries, which complement each other in terms of background knowledge, technical competence, capability for new knowledge creation, business and market experience, and expertise in the 5G and media domains. The consortium consists of three main groups of partners: i) technology partners and 5G infrastructure providers; ii) media vertical stakeholders; and iii) ecosystem development partners.
What specific services may be of interest to the broadcast sector?
5GMediaHUB aims to provide the following features and capabilities to the broadcast sector:
• A web-based Experimenters Portal with associated User Management for secure and easy access of system tenants to the 5GMediaHUB Experimentation Facility, based on their profile and the services they offer.
• A rich set of Experimentation Tools that offer scheduling, validation, verification, analytics and QoS/QoE monitoring mechanisms to broadcast providers, allowing them to validate and optimise their services in a 5G environment so as to remove uncertainties prior to commercial deployments in real 5G networks.
• A set of re-usable vertical-specific and vertical-agnostic NetApps, i.e. chains of platform VNFs that implement networking, security, resource management and load balancing functions on top of the 5G NFVI, abstracting its details. Thus, vertical applications only interact with the NFVI via easy-to-use Northbound APIs offered by the NetApps. Such APIs can easily be consumed by media application providers, as they reduce the complexity and risk of integrating and operating media-related applications (see the illustrative sketch after this list).
• A re-usable open-source NetApps Repository that can be leveraged, enhanced and extended by 3rd-party NetApp developers to facilitate the fast prototyping and testing of multimedia-based applications and associated NetApps.
• Access to two well-established 5G testbeds (TNOR's Oslo and CTTC's Barcelona) for providing 5G network capabilities to novel broadcast applications.
• An innovative Security Framework that provides secure user authorisation and authentication, software-defined perimeter protection and isolation.
• Validation capability for 5G network- and service-level KPIs relevant to the media sector that require 5G performance characteristics.
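As referenced in the NetApps item above, here is a rough sketch of how a media application might consume such a northbound API. The base URL, resource name and JSON fields are hypothetical; they only illustrate the kind of call a broadcaster's application could make instead of interacting with the NFVI directly.

```python
# Hypothetical sketch only: a media application asking a NetApp's northbound API to
# prioritise a live contribution stream. Endpoint and fields are illustrative.
import requests

NETAPP_API = "https://netapp.example-testbed.eu/northbound/v1"   # hypothetical base URL

def request_stream_priority(stream_id: str, min_bitrate_kbps: int, max_latency_ms: int) -> dict:
    resp = requests.post(
        f"{NETAPP_API}/qos-sessions",
        json={
            "streamId": stream_id,
            "minBitrateKbps": min_bitrate_kbps,
            "maxLatencyMs": max_latency_ms,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()   # e.g. a hypothetical {"sessionId": "...", "status": "GRANTED"}
```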
What is the roadmap for the future?
The 5GMediaHUB roadmap will follow the 3GPP standardisation releases R.16 and R.17. This will be achieved through two iterative and consecutive testing cycles, each lasting 8 months (Testing Cycle 1 from April 2022 to September 2022, and Testing Cycle 2 from March 2023 to October 2023), preceded by the relevant 5G testbed upgrade, setup, configuration and preparation periods and followed by corresponding parallel evaluation and/or re-design/upgrade improvement periods, to ensure a smooth and aligned evolution of the validations of the Experimentation Facility through realistic use cases in the media domain. Therefore, we will be able to test media applications in realistic 5G environments by mid-2022, when the whole platform will be fully operational.
We will carry out tests in 6 innovative media applications in the areas of On-site Live Event Experience, Smart media production and Smart media content distribution.
In your opinion, how will 5G change our lives?
5G will be faster and able to accommodate more connected devices than the existing 4G cellular networks, but not only this: it offers lower latency, increased bandwidth, higher reliability and lower energy consumption. These improvements will radically change the way people live and work. To give you a few examples, with 5G we will be able to download a whole movie from a cloud platform in a few seconds, it will allow the remote use of robots in surgeries, and it will improve the on-site live event experience. All these improvements open new business opportunities in all these areas.
What professional sectors are going to benefit the most from this technology?
The new technological concepts introduced by 5G will progressively create impact in all vertical industries and our society. The involvement of vertical industry stakeholders in the 5G value chain is a great change compared to the previous generation of cellular networks. The clear benefits of 5G technology in several vertical industries, as demonstrated by the different research projects, have triggered the interest of several stakeholders from vertical industries such as automotive, energy, factories, health, media and public transportation. All these sectors will benefit from 5G.
And finally, when do you think 5G will be finally implemented?
As you may have realized, 5G is already operational, even if in a non-standalone version, in several countries, and the coverage areas are very broad. It is expected that early next year the standalone version of 5G will be available in most countries. It has to be noted that, even with the existing operational mode of 5G, the data rate achieved by the end user may be 10 times higher than that of 4G systems.
5G
Edge Computing for 5G Networks
The 5G PPP Initiative and the 5GIA have presented a new white paper entitled “Edge Computing for 5G Networks”. This white paper provides, among other things: a brief introduction to the edge computing concept; an exhaustive technology review focusing on virtualisation, orchestration, network control, and operational frameworks; a discussion about the role of security; and an analysis of several business aspects around the edge ecosystem. Moreover, the white paper provides an in-depth analysis of edge solutions that have been selected, deployed and validated by 17 different EU-funded 5G PPP projects.
Section 1 of this whitepaper presents a brief introduction to the edge computing concept. It links it to the explosion of data usage driven by other technologies like Artificial Intelligence (AI) and to the relevance of the data gravity concept. It explains why edge computing is critical for 5G and how it helps to deliver the 5G value proposition based on 3 pillars: Enhanced Mobile Broadband (eMBB), Ultra-Reliable and Low Latency Communications (URLLC) and Massive Machine Type Communications (mMTC). After this, it goes over different edge locations (from on premise to public cloud, going through RAN/base stations, central offices and private data centres) and discusses what an edge deployment could look like in terms of components of the architecture (compute nodes, switching fabric, access and transport networks). It ends with the edge cloud ecosystem, introducing the roles of the main actors in the value chain, which are further developed later in the document.
Section 2 presents an exhaustive technology review of concepts from a 5G network perspective, focusing on four categories: resource virtualisation, orchestration, network control, and operational frameworks. Different resource virtualization technologies are explained (virtual machines, containers, unikernels). It also covers the main orchestration frameworks used in many European projects, namely Kubernetes, Open Source MANO and ONAP.
Network programmability is a fundamental design principle for edge deployments. For that, the SDN and P4 technologies are covered in the whitepaper. Additionally, some acceleration technologies are enumerated for data plane optimization (DPDK, FPGA, SmartNICs).
From an operational point of view, edge computing infrastructure is about placing workloads close to the edge, where the data and the actions take place. This special domain requires a generalized DevOps methodology to code, test, deploy, and run the apps. DevOps principles, such as continuous integration, continuous delivery, continuous deployment and continuous monitoring, are described, listing common industry tools from the CNCF and other organizations.
Section 3 analyses the role of security in edge computing, reviewing key security threats, how they can be remediated, and how some 5G PPP projects have addressed these problems. Edge computing inherits its paradigm and key technical building blocks from virtualization and cloud-native processing. Therefore, edge computing security suffers the same threats as core computing. However, edge platforms generally do not offer the same rich security features, policies and procedures, and edge computing is typically more vulnerable to local introspection attacks. Mitigation technologies are explained, including isolation techniques and trusted execution technologies.
Section 4 presents the so-called “Battle for Edge”
that many companies are currently fighting, trying to gain the best possible position in the ecosystem and value chain. It describes the different actors and roles for these companies, and then describes the “Competitive Landscape”. Different scenarios are pictured, where one actor can take a dominant role, and the implications for other actors are analysed. In the first scenario, MNOs maintain the prominent position in most of the stages in the value chain, maintaining the relationship with customers. In the second one, hyperscalers take the prominent position, occupying the space for maintaining the relationship with customers as in the public cloud business space. A third scenario is proposed, where a Local/Regional IT/Cloud Provider takes the prominent position at local levels, aggregating services for several MNOs. Finally, partial and fully collaborative scenarios are presented, where collaboration and competition are balanced differently.
Several industry-driven initiatives are taking place. In particular, the early partnerships between telcos (operators) and hyperscaler platforms (Google Cloud, AWS, Microsoft Azure) are mentioned. Moreover, initiatives by large and global players in the industry equipment and solutions space are addressing industrial indoor and outdoor service and solution offerings and non-public edge computing. Along with these per-stakeholder-driven initiatives, the whitepaper also covers the multilateral GSMA initiative along with their Future Network Programme. A central part of this programme is the operator platform concept and the edge cloud computing that is in focus for their phase 1. GSMA envisages that operators will collaborate to offer a unified “operator platform” that will support federation
among multiple operators’ edge computing infrastructure “to give application providers access to a global edge cloud to run innovative, distributed and low latency services through a set of common APIs.” Recently, and as follow-up, GSMA released both the Operator Platform Telco Edge Proposal Version 1.0 and a Telco Edge Cloud Whitepaper on “Edge Service Description and Commercial Principles”. Initiatives driven by the public side are also noted. Recently, the European Commission sent out a press release welcoming the political intention expressed by all 27 Member States on the next generation cloud for Europe. It is pointed out that – “Cloud computing enables data-driven innovation and emerging technologies, such as 5G/6G, artificial intelligence and Internet of Things. It allows European businesses and the public sector to run and store their data safely, according to European rules and
standards.” Alongside the expression of these goals, the whitepaper also recognizes the GAIA-X initiative, driven first by France and Germany, which wants to create the next generation of data infrastructure for Europe, its states, its companies and its citizens.
The last section addresses the main focus of the whitepaper, describing the 5G PPP projects' approach to edge computing and 5G. This analysis is based on 17 answers from Phase 2 and Phase 3 5G PPP projects to an edge computing questionnaire created specifically for this whitepaper. The questionnaire asked about the type of infrastructure deployed, the location of the edge used in the project, the main technologies used for these deployments, the use cases and vertical applications deployed at the edge, and what drivers were used to select those. As the reader will see, edge computing solutions have been extensively used by many 5G PPP projects and for diverse use cases. The analysis of the received answers provides some useful insight to the reader about the usefulness of edge computing in real networks.
The first analysis focused on the use cases addressed by Phase 2 and Phase 3 projects. The use cases are clustered, and the clustering revolves around the following 9 key functionalities:
• AR/VR/Video processing/analytics and caching: Any kind of video processing or caching performed at the edge with the aim of faster computation of AR/VR, reduction of load at the backhaul or other kinds of video-related processing requiring low latency.
• Low latency computation: Non-video applications located at the edge in order to reduce the latency between the user and the application server.
• 4G/5G core functionalities at the edge (e.g., PGW, UPF): Hosting at the edge parts (typically from the data plane) of the 4G or 5G core functions.
• IoT GW/Data management: Virtualized versions of IoT GWs hosted at the edge as a mechanism to reduce load or pre-process data.
• Geo-dependent computations: Championed by the automotive scenarios, this cluster includes the use cases which place functions at the edge to serve a certain geographical region.
• Multi-link aggregation: The edge as an aggregation point where multiple technologies can be used to connect to the core network.
• Autonomous edge: The edge as a mechanism to operate with low or non-existing backhaul, therefore typically hosting core functions to work in an autonomous way.
• AI functions at the edge: The edge used to run AI functions leveraging contextual information available in the vicinity of the user.
• Virtual RAN (vRAN) at the edge: The edge as a hosting platform for Virtual RAN functions.
The white paper is available at https://5g-ppp.eu/white-papers/
Next, the whitepaper focuses on the type of edge infrastructure deployed: whether the projects have chosen fog computing, CORD, ETSI MEC or any other architecture to deploy the edge infrastructure. There is a variety of answers and each project explains their
preferred choices. After that, the whitepaper analyses the location of the edge computing resources for the EU projects. There are examples of projects deploying infrastructure on premise, in RAN/base stations, street cabinets, central offices, micro data centres, private data centres and even the public cloud. Finally, the whitepaper analyses the technologies and Virtual Network Functions used across all projects. x86 appears as the dominant technology for compute, with some acceleration resources (GPUs, FPGAs) starting to appear. OSM dominates as the preferred orchestration framework, followed by ONAP, and ONOS/ODL are the preferred SDN controllers for network programmability. Finally, OpenStack and Kubernetes dominate as virtualization and resource orchestration frameworks. On the VNF usage, besides running vertical applications at the edge for the use cases, virtual Enhanced Packet Core and 5G core components dominate. The reason for this dominance is clearly to implement local breakout of the data plane to connect devices in the 5G network to the vertical applications running at the edge.
The list of projects that have participated in the whitepaper is:
• Phase 2: 5GTransformer, Slicenet, SaT5G, 5G Picture
• Phase 3: 5GEve, 5GVinni, 5Genesis, 5GCroCo, 5GCarmen, 5GMobix, 5GHeart, 5GDrones, 5Growth, 5GVictori, MonB5G, 5GZorro, 5GDive
OTT
SOONER FOCUS ON THE USER TO CONQUER EUROPE SOONER OR LATER
Sooner is a relatively young OTT that is growing at a high pace. It's available in the Benelux, Germany, Austria and Switzerland, and it's looking forward to expanding its service to other European countries, amplifying its content beyond cinema and series. With a simple and direct navigation system, it focuses on giving users just what they want. Daniel Siqueira, CTO at the company, reveals to us how they are doing this.
What is the origin of Sooner? How did Content Scope, EYZ Media and Metropolitan Filmexport get together and decide to create Sooner?
Sooner coalesced from the realization that market segmentation in Europe limits the growth and viability of VOD platforms. With Sooner we aim for a Europe-wide brand that can serve local needs and mutualize costs, both for technical infrastructure and for content licensing and marketing. Metropolitan approached EYZ to develop a concept which could unite multiple operators under the same umbrella but also keep them independent.
With such a big market of streaming platforms in Europe, what makes Sooner different?
One key feature is the mix between films, series, subscription and single-purchase content. We partner with film festivals to bring users the latest trends of the film industry and have streamlined the rental process, making it more seamless: users won't have the typical painful shopping cart checkout process. We noticed most platforms used very similar navigation and layouts, which rely heavily on quantity, bombarding users with an endless wall of thumbnails and scrollers. In our assessment this can lead to choice paralysis, as formalized in the Paradox of Choice. We chose to simplify, centering the main selection covers and giving users a more focused approach when looking for titles. This exercise in decluttering the UI also led us to design a single title page that brings the content forward, blending it into the UI, using a background video loop extracted from the film that's harmoniously tinted with the dominant colors of the title cover. It makes you instantly able to judge key aspects of the content without distracting from the browsing experience or making you watch a trailer that might reveal too many plot points.
(Photo: Daniel Siqueira, Sooner CTO)
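A minimal sketch of the tinting idea Siqueira describes, assuming nothing about Sooner's actual stack (the helper name and palette size are our own): pull a dominant colour from the cover art and reuse it as a UI tint.

```python
# Illustrative sketch: derive a dominant colour from a title's cover image so the
# title page background can be tinted to match it. Not Sooner's production code.
from PIL import Image

def dominant_colour(cover_path, palette_size=5):
    img = Image.open(cover_path).convert("RGB")
    img.thumbnail((200, 200))                      # work on a small copy for speed
    # Reduce to a small adaptive palette and pick the most frequent entry.
    quantized = img.quantize(colors=palette_size)
    palette = quantized.getpalette()
    counts = sorted(quantized.getcolors(), reverse=True)   # [(count, palette_index), ...]
    idx = counts[0][1]
    return tuple(palette[idx * 3: idx * 3 + 3])    # (R, G, B)

# e.g. css_tint = "rgb(%d, %d, %d)" % dominant_colour("cover.jpg")
```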
Which was the biggest technological challenge for Sooner?
There are many, of course. In the beginning it was architecting a comprehensive system encompassing a scalable serverless backend, database abstractions for multiple payment processors, a frontend design that could handle web, smart TV and mobile together, and an integrated workflow for the content ingest and marketing team. I'm glad to say our strategy paid off and is facilitating our expansion into more platforms. The challenge now is understanding the friction points and edge cases from our growing userbase and streamlining those rough edges. In parallel, something that has emerged is managing the expectations of VOD users.
We're now used to larger, more mature platforms, so it's a challenge to keep up the pace and innovate, given that our brand has existed for less than a year with a fraction of the development team size of other players in this market.
What are the technological challenges of being in 5 different countries?
We've mostly solved scalability and distribution across territories. But because we operate with different corporate structures in each region, we need to manage each region's accounts across many services, like payment processors and app stores, independently. This means setting up, configuring and practicing release management for each separately, which of course creates an overhead but allows each region to operate independently, simplifying other business processes that would have to be developed were they merged under a single account.
Which image format and quality is available on Sooner? Why did you choose that format and are you expecting an upgrade?
We have a mix of new content and legacy content from over the years of operation. Everything that's new is always available in the highest quality we can get from licensors. This usually means an HD ProRes master streamed as H.264. We can already deliver 2K and 4K content, but these are still rare from the distribution side. As soon as the companies up the chain can deliver in larger formats, we'll be able to make them more available to our users.
In which platforms are you available and what is the challenge of being in different platforms? We are currently available on the web, iOS and Android and soon launching apps for SmartTV on Samsung, LG, FireTV and AppleTV. Being a small team, the main challenge is being able to reach as many devices as possible utilizing hybrid stacks to leverage web standards when available and implementing native modules when they’re not. The main challenge here 42
is testing: acquiring devices and setups that represent our subscribers' edge cases can become a daunting quest. This is especially hard for smart TVs, as users expect the apps to work even on older, less powerful devices with outdated operating systems, while on the other end, the ever-evolving OS refresh rate and deprecation of APIs mean that at every new update many changes have to be made and a lot of testing needs to occur.
How has the encoding of your videos evolved over the years? Tremendously. I actually started working at EYZ as a video encoder about five years ago; back then we
were ripping DVDs and Blu-rays and using quite primitive scripting to generate FFmpeg commands for encoding multiple video sizes. As time went on, connections got faster and licensors increasingly sent us iTunes packages or single ProRes files via Aspera, FTP and Dropbox. It was seeing the limitations of that workflow and the increase in online deliveries that sparked the development of a mostly automated encoding system that could recognize formats and file patterns from different providers and normalize their outputs. This system could recognize covers and stills, convert subtitle formats, automatically crop black bars and select the correct audio tracks from assets with many tracks and multiple languages, while integrating via API with remote storage and with both our content management system and streaming servers. With this, our encoding and editorial teams could have an overview of where in the pipeline each asset was, and it was automatically
updated, deprecating our previous method of managing everything in cumbersome Google Sheets, where statuses would go stale because the management overhead was too big. Today we have a new system that follows the same principles and workflow but has been refactored with templating and packaged as Docker containers that can be deployed to many different operating systems, both on dedicated remote servers and on premises, utilizing computing power that would otherwise have been idle.
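Purely as an illustration of that kind of automation -this is not Sooner's actual pipeline, and the rendition ladder, file names and FFmpeg options below are assumptions- a minimal Python sketch that generates FFmpeg commands for several output sizes might look like this:

    # Illustrative only: generate and run FFmpeg commands for a simple H.264 rendition ladder.
    import subprocess

    # Hypothetical rendition ladder: (height, video bitrate, audio bitrate).
    LADDER = [(1080, "5000k", "192k"), (720, "3000k", "128k"), (480, "1500k", "96k")]

    def encode_renditions(master, basename):
        for height, v_bitrate, a_bitrate in LADDER:
            cmd = [
                "ffmpeg", "-y", "-i", master,
                "-vf", f"scale=-2:{height}",        # keep aspect ratio, force an even width
                "-c:v", "libx264", "-b:v", v_bitrate,
                "-c:a", "aac", "-b:a", a_bitrate,
                f"{basename}_{height}p.mp4",
            ]
            subprocess.run(cmd, check=True)         # one output file per rendition

    encode_renditions("master_prores.mov", "title_0001")

A real ingest system, as described above, would sit on top of something like this: detecting provider-specific file patterns, handling subtitles and audio tracks, and reporting the status of each asset back to the content management system.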
How do you manage the amount of assets that you have? Do you collaborate with any company? No, everything is in-house. Each region has a dedicated encoding department comprising highly skilled people able to manage and process an astounding amount of data per day. We try to build as much automation as possible into the ingest pipeline, but because we receive content from such a diverse range of licensors, there are many edge cases
that need to be handled manually. We also have image editors for covers and marketing material, and we contract subtitlers to localize exclusive content, which can range from simple translations to the creation of full subtitles from scratch.
Which content and streaming server do you use and why? We use Kaltura's SaaS solution. It's a very mature system that provides stream encoding and serving, CDN hosting,
image resizing and DRM. They also have an integrated player, which offloads some of the more critical playback components and allows us to focus on the broader user experience. We decided to use a third-party platform that specializes in streaming and works around the clock on that task, because owning those components would require very specialized knowledge and didn't really match our core strategy of focusing on content and user experience.
With such specific content, how do you manage to offer a personalized experience? Each region has its own particularities regarding the availability of offerings. The Benelux region has more of a focus on French cinema and studio films, while DACH focuses on independent filmmakers and European series. Personalization plays a big role in discoverability and user retention.
You opted for content curated by your experts. Does this exclude AI developments focused on personal recommendations based on customer experience? We use a mix of curators and a constantly evolving recommendation system. Our editors are much better at reading the time-sensitive cultural context of each market and reacting to it by prioritizing the content the majority of users will enjoy. But that approach is too broad, and we think a recommendation system offers an extra edge: it can react to an individual's content journey and surface content that is likely to pique their interest but might not be in the current zeitgeist. We give users the option to sort the content based on their own usage, and we use many different processes to match their history with other, similar types of content.
What is the future of Sooner? What is the next step for the company?
We're currently bringing the Sooner content to more environments and will continue to expand and reach more users, including in new territories. In the near future I can see Sooner increasing its market share in key regions and bringing more local content, including livestreams of concerts and theater and partnerships with cinemas. For our users, next on the roadmap will be integrating a social networking dynamic where users can create custom playlists, and share and follow each other's activities and favorites. It won't be a full-fledged social network, but we believe having social integration will allow cinephiles and power users to build an audience by curating their own content journey and making it available for other users to experience the platform as they do. Ratings as well: we feel the current star-rating or like/dislike logic of other services doesn't really capture what attracts people to content, so we are developing ratings based on a few criteria, like story, character development and so on. As much as our curators and recommendation system can help users discover our huge catalogue, we believe it's the users who can tell us and each other which highlights in our collection they are most excited about, and that is very exciting for us!
FilmDoo BRINGING INTERNATIONAL INDEPENDENT FILMS INTO THE LIGHT ON A GLOBALLY AVAILABLE PLATFORM
FilmDoo is not a usual film OTT platform. Since its beginning, its aim has been to spotlight independent films from all around the world. Movies that are not on the classical distribution channels find their way through FilmDoo. A company with a disruptive new idea: an EdTech initiative for languages based on a multilingual international catalogue. We talk with Weerada Sucharitkul, CEO and co-founder, to learn a bit more about them.
What was the need that you identified that prompted you to create FilmDoo? We launched FilmDoo in 2015. Originally, we wanted to create a platform to help people discover films from around the world. The need really came from me. I had the chance to grow up in other countries and I realised there are a lot more movies than Hollywood films. When I was living in the UK, I could not understand why I could not watch so many international productions.
Weerada Sucharitkul, FilmDoo CEO and co-founder
Some of these are blockbusters in their own country, some are award-winning films, and you cannot find them anywhere.
The more we looked into this, the more we understood that actually the very big problem in the film distribution industry is that there are not enough channels. Consequently, we only get to see a few international independent films a year and the others never get distributed outside their country. So that is our mission. First of all, to help people discover these productions from around the world that they otherwise would not know about. And second, to make them accessible, with subtitles,
posters and trailers in English, so people can easily learn about the films and who created them. It was a genuine need, and at that time we did not see any platform working with this content; consequently, today we have one of the world's biggest catalogues of international movies online.
How has FilmDoo evolved technologically since its birth? What has been the biggest change you have faced? The biggest change is that we have expanded into EdTech. Basically, in 2019 we started to notice that a lot of people were coming to FilmDoo from language-resource searches. You know: I'm learning French, so where can I watch French movies? So we ran a survey among our users and found out that 70% of people told us that the main reason for them to watch FilmDoo is that they are learning a foreign language. So it meant we were not only attracting the typical movie fans, but language students as well. When we
looked more into this, we saw that nobody else was using licensed films or foreign award-winning films to help people learn. We basically developed a really innovative EdTech platform where you can turn any film into a lesson and we create some gaming interaction. Basically, the main technology change is that we have developed an in-house gaming interaction that can be used for online teaching. And not only online: teachers can show the films in class and students can play, interact and answer questions while watching. That is the key technology we have developed, and in those terms it is completely different from the other platforms right now.
FilmDoo is a platform that defines itself as "Globally Available". What challenges does this imply? How do you manage it in terms of copyright? Not every film has the same rights. Our aim has always been to work with content creators, field agents, distributors and filmmakers rather than against them. Unlike many other major platforms, which take exclusive rights so that sometimes the content owner no longer has any control over the movie, our plan has always been to talk to them and say: "Hey, you have an older film that maybe you sold to a few countries? What are you doing with the rest of the world? Why don't you at least put it online and monetize it?"
A lot of these deals do not forbid accessibility altogether; it might just be that there are certain countries where the content cannot be available. With FilmDoo, we can work with that. For example, if they need all countries except Malaysia, Indonesia and India, we say: OK, no problem, we can geoblock the countries where you no longer have the rights. We only need one month to change the countries. If you want to do an exclusive deal that excludes Spain, you just let us know one month ahead and we can geoblock it, so we are not preventing you from making any more deals.
Our platform is available anywhere in the world, including Antarctica (laughs), but it depends on the film. Some of them we do have worldwide, because we also work with independent creators, including documentaries and short films that go worldwide. Usually, for this type of production, the filmmaker is the rights holder. But for a lot of films we might not have all the countries, because the content already has a distributor. So in terms of accessibility we are globally available, but the content depends on each country.
The challenge this implies is how to launch a subscription. Our aim is always to make the films available in all these countries, but a subscription works more by location: you have this catalogue for America and the same catalogue for the UK. That is why we did not launch with this model initially, because it would require the same films to be available in all countries to make sure all our users had access to a similar catalogue.
FilmDoo offers both free-to-watch content and rental movies. Why did you decide not to
offer a subscription-based service but a pay-per-view model? What's the reason for this decision when the rest of the VOD world is opting for the other model? I believe the reality is that the industry is moving away from the PPV model. When we launched FilmDoo in 2015, iTunes was the market leader: iTunes had PPV and 80% of the market share. But now iTunes has closed down and moved to Apple TV with a subscription model, and nowadays more and
more people are used to subscription platforms or even ad-funded free-to-watch. So I think we have to expand to that. The future we are looking at is a blended model. There will still be some films available on PPV. Sometimes the content owners do not want to go on subscription, especially with newer films. So this allows us to work with newer films whose owners do not want to be on a subscription model yet, but also with older films that want to capitalise more on volume, plus our language
learning platform, which will be a subscription platform as well. Never 100% subscription, but always a blend, to offer flexibility for what people demand.
The free movies and short films can be viewed via Vimeo. What about rented movies? What platform are you using for that video streaming? It's a mix. We use Amazon hosting, and we also do some internal development. But the main streaming platform we use is Brightcove. We
have been using Brightcove for a few years. Our films are very secure with them. For the titles that have a licence owner, we have to make sure there is protection, and Brightcove is one of the best video streaming providers.
FilmDoo helps its users discover a variety of independent movies, or at least "off-the-market" films. How do you help people discover films? Do you bet on curated content? Are you testing AI engines and services that can find the best movie based on your previous views? We are always working on that; there is not one single answer. As with other platforms, the reality is that we have a blend of technology and
curated content. I do not think you can ever use one by itself. The technology allows you to apply the first filter, but the human curation gives you that relevant connection that technology might never pick up. We have an editorial team who watch the films and catalogue them by themes, subjects and language level as well. So there is definitely a human connection that we use for our newsletters, blog and promotion. We have quite a volume of films now, so the first filter comes from our AI development. The concept is similar to other platforms: "Hey, if you like Superman, here you have a Japanese Superman." If you like a certain type of film, we will try to match the same kind of film from another country. Obviously, we also use automated emails, but on top of that we send a more human newsletter with our new releases and promotions. So it is a mix. What is really interesting is where we are going next. We are going to combine our AI with film
recommendations by language level. We are using this technology to know the user's level of knowledge and match it with a movie. For example, if you have learned a lot of animal vocabulary, we can recommend a film with plenty of it.
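Purely as an illustration of the idea -this is not FilmDoo's actual engine, and the catalogue, levels and scoring below are invented for the example- a first-pass match on shared editorial tags, filtered by the learner's language level, could be sketched like this in Python:

    # Illustrative sketch: rank catalogue titles by tag overlap with a user's history,
    # keeping only films at or below the learner's CEFR level.
    from collections import Counter

    CEFR = {"A1": 1, "A2": 2, "B1": 3, "B2": 4, "C1": 5, "C2": 6}

    catalogue = [
        {"title": "Film A", "language": "fr", "level": "B1", "tags": {"animals", "comedy"}},
        {"title": "Film B", "language": "fr", "level": "C1", "tags": {"drama", "family"}},
    ]

    def recommend(history_tags, language, level):
        watched = Counter(history_tags)                      # tags seen in the user's history
        scored = []
        for film in catalogue:
            if film["language"] != language or CEFR[film["level"]] > CEFR[level]:
                continue                                     # skip films above the learner's level
            score = sum(watched[t] for t in film["tags"])    # simple overlap score
            if score > 0:
                scored.append((score, film["title"]))
        return [title for _, title in sorted(scored, reverse=True)]

    print(recommend(["animals", "animals", "comedy"], "fr", "B1"))   # ['Film A']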
You're currently offering HD movies. What about UHD? Are distribution channels prepared to offer streaming movies in this format? Platforms will have to support those formats. But the reality is that it is not cheap. The file sizes are a lot bigger, and there is the investment in technology development, and even the hosting, the streaming of the film, the file processing... I guess some of the biggest platforms are looking at that. They have the infrastructure that can bring the costs down. I think some of the smaller ones like us might have to wait until the economics are affordable. Right now, I think the big ones will probably be the leaders, with their infrastructure. And hopefully we can learn from them.
What is the cloud platform of your choice? We use Amazon, but we use many platforms. We also use Rackspace for hosting. We use Oracle; we do a lot of testing with Oracle. And we are always looking at more cloud platforms. We embrace the standard platforms because, for our developers and team members, they are easily shareable; it is common knowledge.
FilmDoo can be enjoyed via the web. Are you planning the development of mobile and TV applications? Two years ago, it was on our roadmap. But two
things happened. One, we realised that we were expanding into a subscription platform. So that put the plan for the TV app on hold, because we realised it didn't make sense. Most people do not use PPV on TV anymore, and it is very rare in a mobile app. It works if you have free content where you just open one app and you can watch the film. But who is going to watch a movie and pay inside an app every time? It doesn't make sense. A lot of people will pay for a film on the laptop and then stream the content through Chromecast or Apple AirPlay. Having an exclusive app for PPV content would not have
been a good use case. And two, we started to develop our EdTech platform. Now that we are nearly done developing it, we plan to have a TV app and a mobile app when the technology is fully launched. That will be a game changer: you can watch a film on a big TV while playing language games on your mobile phone. So it's definitely on the roadmap and we are looking to accelerate that for next year.
We’d also like to talk about FilmDoo Academy, which is a really interesting development. Could you tell our readers a bit more about this?
Something I have not mentioned yet: the Academy is a B2B model. We work directly with schools, academies, etc., not directly with the public yet, but we will be looking to make it available directly to language learners. FilmDoo Academy is on a different website from FilmDoo, which is our original streaming platform, but they are the same business. We are looking to launch a blended subscription model for that as well next year. So there will be great changes.
What's the future of FilmDoo? What will your next steps be? I would love to point out that we are looking for great productions from all around the world, as we now have our own user upload platform. If you go to filmdoo.com you can see "Submit a film" at the top. We allow a lot of filmmakers to directly upload their creations. Of course, we do not take just any content; we are not YouTube. There is still curation to make sure that it fits our criteria. But this means that more
artists can directly submit their films and have more options. And this is important; there are two things I want to remark on. One, we all love the big platforms. But even if they are now working with international content, the gap is getting wider for foreign independent filmmakers. The opportunity for their voices and stories to be told and distributed is getting smaller and smaller. Why? Because the platforms prefer to make their own content. They prefer to commission their content, maybe as part of a series, or as part of their marketing strategy, or whatever their data tell them. So it's getting harder for independent films to be discovered and monetized. In fact, a few weeks ago I discovered that Amazon
no longer accepts short films and documentaries, which is sad because many independent filmmakers have to make documentaries or short films because of their limited budget. But Amazon is turning its back on them. I am sure it is because they have a lot of volume and need to prioritize, but there is no solution out there yet. I am more committed than ever that there has to be a place to help international and independent filmmakers get seen, distributed and monetized. At the very least, they can submit content directly to FilmDoo, and hopefully we will be able to make it easier for these filmmakers to reach a global audience. That is the aim.
TECHNOLOGY
When approaching any new audiovisual project, if we come to realize that IP infrastructure is not just another option but the main one, we must put theory into practice. Because, in principle, there is no difference between theory and practice, but the truth is that there really is.
By Yeray Alfageme
IP infrastructure provides -to phrase it in two words- greater flexibility and scalability. Traditional SDI solutions have the maturity and robustness provided by time and experience going for them, but they are rigid and stubborn. For example, if you need to go from SD to HD -although it may seem that this is something from the past, it is actually more common than we think- much of the 270 Mbps SDI-SD
infrastructure will not support 1.485 Gbps SDI-HD signals, an issue that many were facing some time ago. The same applies when going to UHD: implementing 4K or 8K with an SDI infrastructure, in which you have to multiply both wiring and switching capacity by 4 or even by 16, becomes practically unfeasible.
IP, as we already know, is agnostic as to the format transmitted through the infrastructure. It is all about data. There is no need to modify the wiring or the ports in the matrix (in fact, there is no matrix), and even signals of different formats can be mixed within the same equipment. No one would ever think that having SD, HD, 25 fps or slow-motion signals on an IP network can be a problem. This is simply unthinkable in SDI.
ST2022
SMPTE is a strong advocate for IP and has long been working hard to offer standardization that sets up a framework for both manufacturers and integrators; as early as 2007 they released ST2022. This was the first set of standards, and the aim was to define some requirements, in the simplest possible way, to make a deployment of IP infrastructure for broadcast possible. Back in 2012, the ST2022-6 version provided
for the use of standard Internet protocols to encapsulate SDI signals within IP packets. Basically, each SDI signal was divided into an RTP (Real-time Transport Protocol) encapsulation within a UDP (User Datagram Protocol) datagram, and then placed within the payload, the data part, of an IP packet. Although this concept works well, it does not streamline the use of bandwidth, since it requires more than 20% of overhead for packet headers and queues. For example, of an SDI-SD signal packetized in IP under ST2022-6, only 76% of the information is video, while the rest, almost 25%, is ancillary data, packet headers and queues, as well as TRS (Timing Reference Signals) information. ST2110 has come to the rescue.
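A rough back-of-the-envelope check of that figure, assuming standard 625-line SD timing (720 active samples out of 864 per line, 576 active lines out of 625): the active picture accounts for (720 × 576) / (864 × 625) = 414,720 / 540,000 ≈ 0.77, i.e. roughly three quarters of the transported raster, even before counting the RTP, UDP and IP headers added to every packet.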
ST2110
ST2110 aims at fixing these inefficiencies, at least partially, by removing timing from the equation as a separate signal. Clocking data are
embedded within the IP frames corresponding to the signals, whether they are SDI, MADI or AES, the latter two for audio. Although Ethernet is the preferred channel for IP networks, we should not confuse IP with Ethernet, as they are not the same thing. Ethernet consists of the specifications for layer 1 -the physical layer- and layer 2 -the link layer- within the seven-layer ISO model and, for example, IP links can be implemented over it or over Wi-Fi, to give the clearest and most distinct example. Both ST2022 and ST2110 can be implemented on any existing IP transport, although the benefits offered by Ethernet are evident. In the first place, it is the most popular; I would almost say it is the de facto medium in corporate networks, and all equipment is compatible with it, be it fiber or copper. In addition, being a physical -that is, wired- medium, it avoids the added latencies
that are present in other media such as Wi-Fi, in addition to offering increased reliability. Such delays and reliability issues are not critical in file exchange environments, as they are absorbed by the transport protocols, but they become critical when it comes to real-time video.
Eliminating TRS to gain bandwidth
Ethernet is asynchronous, that is, each packet is transmitted by the network at a different time and there is no need to wait for the next clock cycle to transmit, since such a clock does not even exist. This makes the medium faster and more flexible, but at the same time forces the entire frame to be rebuilt when packets are received 'at the wrong time'. By removing TRS, ST2110 reduces the necessary bandwidth -or increases the amount of signals that can be transmitted, whichever way you look at it- by between 16% and 40%, depending on whether it is audio or video. For all those of us
who come from 'old broadcast', doing away with synchronicity can be quite scary, since the clock is one of the most important things in any system and all lines, fields, frames, audio samples and metadata have to be perfectly synchronized. So, what do we do now? To achieve this synchronization, ST2110 uses the IEEE 1588-2008 protocol, commonly known as PTP (Precision Time Protocol). A curious detail: PTP is a counter that measures the exact number of nanoseconds that have elapsed since midnight on January 1, 1970, a date and time known as the Epoch and used as a reference point in many computer systems. Each ST2110 packet includes within its metadata the PTP-derived value in the RTP header, which provides the desired timing. A simple solution that is also elegant and functional. Occam's razor: the simplest solution is always the best.
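As a minimal sketch of that idea -assuming the 90 kHz RTP clock commonly used for video and 48 kHz for audio, and treating the PTP time simply as nanoseconds since the 1970 epoch- the timestamp carried in each packet can be derived roughly as follows (illustrative Python, not taken from any specific implementation):

    # Illustrative sketch: derive a wrapping 32-bit RTP timestamp from a PTP time value.
    VIDEO_RTP_CLOCK_HZ = 90_000   # media clock commonly used for video essences
    AUDIO_RTP_CLOCK_HZ = 48_000   # media clock for 48 kHz PCM audio

    def rtp_timestamp(ptp_time_ns, clock_hz):
        # Convert nanoseconds since the epoch into media-clock ticks, then wrap modulo 2^32.
        ticks = (ptp_time_ns * clock_hz) // 1_000_000_000
        return ticks % (1 << 32)

    # Example: a video frame sampled exactly 1,700,000,000 seconds after the epoch.
    print(rtp_timestamp(1_700_000_000 * 1_000_000_000, VIDEO_RTP_CLOCK_HZ))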
Synchronizing audio
The same PTP concept applied to every video frame is also valid for every audio sample. The audio is grouped into packets similar to AES67 ones. Each packet is stamped with a PTP value so it can be sorted upon receipt. The same happens with metadata and control data, all with their PTP. For all this to work, there must be a master PTP reference; since all data packets, whether video, audio or metadata, are generated independently by the camera, the mixer, the microphone or any other data source, but all referenced to this master, they can be processed separately and in parallel, which increases throughput and minimizes the time required.
One PTP to control them all
Within a network there may be more than one PTP source, but only one of these sources will be the so-called 'Grandmaster'. In each of
these clocks there is an algorithm called BMC (Best Master Clock) that decides which of them is the best. Criteria such as whether it is locked to a GPS signal -as well as preferences chosen by the system administrator- establish which of all these clocks will be the grandmaster and will control them all. This allows us to extend the limits of our network as far as we can imagine, since part of the information -audio, for instance- can be processed within the control hardware because it requires less processing capacity, while HDR video is processed by means of Cloud resources, which are more flexible and less expensive. As a result, a large number of software products have appeared that, in imitation of the Cloud, work under pay-per-use licenses, which allows us to gain that flexibility so necessary for our productions. Compare this with the past: in a linear SDI environment this was almost unthinkable.
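Going back to the grandmaster election described above: a much simplified sketch of that comparison -keeping only the IEEE 1588 dataset-comparison order (priority1, clock class, accuracy, variance, priority2 and, as a tie-breaker, clock identity) and ignoring the full protocol state machine- might look like this; the candidate values are invented for the example:

    # Illustrative sketch: pick a PTP grandmaster by comparing announced clock datasets.
    from dataclasses import dataclass

    @dataclass(order=True)
    class ClockDataset:
        priority1: int       # administrator preference (lower is better)
        clock_class: int     # a GPS-locked clock advertises a better (lower) class
        accuracy: int
        variance: int
        priority2: int
        identity: str        # unique clock identity, used as the final tie-breaker

    candidates = [
        ClockDataset(128, 6, 0x21, 100, 128, "aa:bb:cc:00:00:01"),    # GPS-locked
        ClockDataset(128, 248, 0xFE, 2000, 128, "aa:bb:cc:00:00:02"), # free-running
    ]

    grandmaster = min(candidates)   # 'best' = lowest values, compared field by field
    print(grandmaster.identity)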
Optimizing resources and innovating more easily
The SaaS (Software as a Service) model -which is basically the use of software as a service, paying for it only when necessary- is a paradigm shift that comes quite naturally in the IP world, but it results in a new set of rules for audiovisual equipment. Pay-as-you-go is the key for production companies to adapt their equipment to the needs of each moment, in a business like ours in which the workload changes so much. Likewise, innovation becomes much easier. A mixing console, be it video or audio, used to have a format that was the result of legacy and operational needs, and changing it required replacing everything. All this is much easier now. For example, implementing touch controls on screens that are today a mixing console, tomorrow a multiviewer and later on a replay server requires very little effort and
provides many benefits. Once more, we gain flexibility in a cheaper way and with fewer resources.
Conclusions
Migrating to an IP infrastructure offers great benefits, mainly flexibility and scalability, but to make the most of them, the systems connected to an IP network must understand each other. For this reason, protocols and standards are even more necessary than in linear environments, since the standard IT equipment being used would not be suitable for combination without them. Both ST2022-7 and ST2110-40 are the latest versions of these protocols. They are already well established and all IP broadcast systems are compliant with them, thus ensuring the interoperability between them that was a big headache in the past. Now that we have interoperability, flexibility, and scalability, what's stopping us from moving from SDI to IP? Nothing.
IP INFRASTRUCTURES
Phases, challenges, and technical keys in RTVE's project in Sant Cugat
With Jesús García Romero, Technical Director at TVE; and Miguel Ángel Corral Toral, Head of Facilities, Engineering and Studio Support Projects and Mobile Units at RTVE.
RTVE's Sant Cugat studios face an unprecedented challenge in the Spanish television scene: completing the transition from SDI technology to IP technology. Like any migration process to new technologies, it has been necessary not only to face the uncertainty regarding the maturity of the technology, but also to generate an environment of trust among the operating and maintenance personnel for the systems, who are probably facing the most significant technological change they have encountered in their professional careers. The first of these challenges is overcome by means of exploration and analysis of the market, training, and technical support from 'those who know best'. The second challenge, dealing with a deep professional retraining of the staff to help them leave their 'comfort zone' and 'change mindset', requires a more profound restructuring. However, we can state that the attitude of the entire technical and operational team at Sant Cugat has been impeccable, adopting a highly receptive position right from the outset and providing a level of collaboration that is undoubtedly having a highly positive impact on this transition's success.
A bit of history
The project was born in the Television Technical Area Directorate when the time came to approach the migration of four studios in Sant Cugat to HD. Consideration was given to the convenience of taking advantage of this moment of change to 'go even further', towards a technology that, although apparently not yet mature, it was felt would not take long to become a solid reality, in parallel with the time needed to prepare a tender whose concept was far removed from what had been developed so far. The decision was clear. We set out to 'get on the bandwagon of what's to come' rather than continue to exploit established technologies. We could have done the latter with very good results, but the truth is that in a short time those technologies could already have become outdated. A sizable investment must be accompanied by an assurance of the validity over time of the technology for
which a commitment is made. And to successfully achieve this goal, nothing better than to be guided by the advice of one of our Golden Age writers, Baltasar Gracián: 'It is beneficial sanity to save yourself trouble. Prudence avoids much of it.' And, as good things come to the prudent, they proposed a first approach to the IP world that they had already seen at professional trade shows such as IBC, NAB, Bit Media and BITAM, especially since they could physically interact with operating equipment on display in the very interesting IP showcases of these fairs. In May 2018, RTVE's Operations Department convened the first workshop on 'Transition of broadcast technology to the IT world' in the Prado del Rey auditorium, attended by industry professionals of the highest level. It became a reference, as it was the first event of its kind to take place in Spain aimed at driving home the certainty that the transition from the broadcast world to IT
technology was already a reality. With such certainty, a small engineering team traveled to the legendary Pinewood Studios in London. There, from the hand of Sony, they received an interesting theoretical approach to the IP environment and got to know an initial Live IP experience. "A remarkably interesting IP experience, and a rather galactic one, because the shooting of a movie from the Star Wars saga had just finished and among the remains of the sets was the Millennium Falcon's cockpit. This was a sign that the Force was with us in this project." During the following months, the TVE engineering department worked in the same direction, with a single goal in mind: to deepen their knowledge of IP technology through intense specific training on networks and IP video standards, and through numerous meetings and concept tests with integrators such as Telefónica, Unitecnic, Crosspoint or Qvest Media; engineers from broadcast
manufacturers such as Abacanto, Nevion, Grass Valley, Sony, Lawo, Xeltec, Calrec, Evertz, EVS, Axon, Sapec or Net Insight; from IT manufacturers such as Arista, Cisco or Mellanox; and orchestration and control software developers such as VSM, Cerebrum, Orbit or BFE. As part of the information gathering process, three overseas trips were made to IP video facilities. This is how contact with real experiences was made: the Proximus production centers in Belgium or the giant NEP in the Netherlands; and, with the help of Sony and Telefónica
Servicios Audiovisuales, a visit was paid to see the IP implementation that these companies had carried out on Portuguese television SIC in Lisbon. In these visits, whatever concerns had been generated during the assimilation of
new knowledge were conveyed. It could be seen how the path had been difficult for those who had done it before, but the goal was not an unattainable one. RTVE took these experiences into consideration as advantages for their project's design: they served to properly address certain aspects of the project learned from the experience that others had gone through.
After this entire journey, an RFI (Request For Information) was drawn up and sent out to all companies -mainly integrators- of relevance and solidly established in the broadcast sector. For the first time in a broadcast-oriented way, the main IT brands joined in, and they had a lot to say in this new journey. With their unstoppable evolution in increasing bandwidth management capabilities, they have positioned themselves as the alternative and foundation of the new video signal transmission systems. At the same time, and as a complement to the fact that all the signals would travel through the network, a new type of player arose: companies dedicated to the development of orchestration systems, capable of offering a layer of control over all the equipment that was unheard of until then. This would add agility and flexibility to the systems in aspects relating to reconfiguration and adaptation to different working modes. Taking on this challenge generated an avalanche of proposals for
participation. After a period of two months needed to work in depth on the requirements, presentations arrived along with proposals at the highest level. This only strengthened the idea that broadcast technology was unstoppably advancing towards convergence with the IT world, consolidating the governing standards around this goal; a roadmap that the JT-NM published with very specific dates for the achievement of well-defined milestones.
RTVE Engineering visit to NEP.
Inception of the project
Two years after the beginning of the process, in October 2019, RTVE published the first tender, which contained a proposal for a complete IP video and audio installation for 4 television studios in its Program Production
Center in Sant Cugat and a remote set in the city of Barcelona. Said specification also contemplated the coexistence of the new IP installation with important 'legacy' systems, such as the signal recording system currently in operation for 6 studios, and the exchange of a hundred signals with the Central Control matrix. All this, in an installation designed for HD-3G and dimensioned for
redundancy, both for the 4 studios that were the subject of the contract and for the future integration of the other 3 HD studios and the Center's continuity system. The project was proposed as a complete installation of high-definition digital audio and video encapsulated for transport over IP technology, for the production of four studios with a recording system, to which three other independent items were added: a virtual set, a set in the newsroom and a remote set. The video was based on 1080i-format high-definition serial digital video, in an installation designed with the 1080/50p format with HDR in mind and to allow the future incorporation of studios 5, 6 and 7 into the IP technology. Audio was treated as PCM digital audio encapsulated in IP (AoIP) under the AES67 Profile B standard up to the audio mixer's output in order to minimize latency. From there onwards, Profile A would be used for audio, as it is compatible with
2110-30. In this way, audio and video would have as their final encapsulation format the one corresponding to the SMPTE 2110 set of standards (Professional Media Over Managed IP Networks), a contribution by the Video Services Forum as a set of technical recommendations for the transport of elementary media streams over IP networks. This new technology adopted by RTVE for the new Sant Cugat studios offers three key points that improve upon the current coaxial digital video baseband infrastructures, namely:
a) Simplicity of installation, with a significant decrease in wiring and resources (distribution, processing...).
b) Versatility, by allowing software to assign various functions to the hardware, and a much greater density of signals, since a large number of essences of different natures are conveyed through the same link (the network
is signal-type agnostic).
c) Virtualization, as cameras, mixers, signal processors, gateways, etc. turn into generic, readily re-assignable resources.
Other advantages arising from these are ease of migration to the new booming formats and ease of expansion and/or addition of new studios to the system, which had already been foreseen in the initial dimensioning of the IP infrastructure. This novel approach also allows and promotes the sharing of resources, a feature adopted in the installation not only for the reallocation of media between studios, but also in the sharing of electronics, which then become centralized. Such is the case with the electronics of the video mixers in the studios, for which a proposal was made to acquire only one for every pair of studios along with two remote panels for the production controls, for instance. In this way, preparations for special programs that previously required complex procedures for the transfer of resources, or which simply could not be carried out at all, are now possible thanks to this technology with a shorter intervention time, thus increasing the productivity of the facilities. On the other hand, in the initial phase, installation
commissioning times increase, as complex programming tasks are required during the new systems' start-up. The studios receive signals from, and can remotely operate, the aforementioned local and distant sets, relying on the necessary technology to produce, in certain cases, by means of virtual reality. The synchronization installation has also been expanded with the addition of the PTP signal, and the intercom system now includes RRCS for communications management. In addition, two new tools not present until now in the usual installation of studios have been added:
- An orchestration and control system ('broadcast controller') that governs studio equipment and network electronics, thus allowing configuration changes to be made quickly and flexibly, events to be scheduled, or different work modes to be planned via the reallocation of resources, enhancing, for instance, the resources of one studio versus another for large productions.
- A management, monitoring and alarms system that offers operators a global view of the running processes and the status of the network and the equipment, with interactive notices that allow them to react to issues or anticipate congestion situations that may occur.
Participating companies
Telefónica Soluciones is the integrator for this project, together with the TSA technical team. It also relies on the technical advice of the engineers from the distributors: Crosspoint for Grass Valley and Embrionix/Riedel equipment, and Xeltec for the entire range of Lawo systems. The main brands that Telefónica Soluciones relied on for its batch as integrator were:
- Cisco for network electronics. The product is IPFM (IP Fabric for Media), with a key element, a development named NBM (Non-Blocking Multicast). It also provides the network electronics manager, which is Cisco's DCNM.
- Lawo for gateway processors, for multiscreen generators and for the VSM orchestration/broadcast controller system. Also part of VSM is the set of monitoring and control tools: vSNMP, SmartDASH, SmartSCOPE, theWall.
- Grass Valley for Kahuna 9600 video mixers with Maverik panels.
- Embrionix for monitoring gateways in the controls.
- Stands, robotic heads and robotics control system from Vinten, CUE Autoscript signal generators and BlackMagic Ultimatte 12 virtual background keyers. Generators hooked to PTP GPS systems and traditional Tektronix SPG8000A synchronisms, as well as Tektronix PRISM WFMs for measurements in the IP world. Classic synchronism distributors (BB, Tri-Level, DARS...) with equipment from the firm Albalá Ingenieros.
Crosspoint is the successful bidder for the Camera System batch, which comprises Grass Valley LDX 82 Elite cameras, and offers a complete system with CCS-One as an access point for the configuration of the system by VSM. On the other hand, Telefónica Servicios Audiovisuales is the supplier of the audio system batch, consisting of 4 LAWO mc2 56 MKIII HD consoles and a NOVA Router 73 HD audio matrix (also from LAWO), apart from the LAWO distributed input and output boxes A_Stage64 / A_Digital8 and 4 independent external DSP modules with a processing capacity of up to 256 channels each (A_UHD Core).
Execution phases
The execution phases for this project were the following:
- Release of the call for bids (October 2019).
- Contract award (March 2020): Creation of a 'technical office' for project management and field measurements in order to plan the installation and commissioning; ordering and procurement of materials.
- Base architecture installation (May 2020): Installation of network infrastructure and equipment for a 'pilot' studio.
- Pre-training (June 2020): Aimed at providing the company's professionals with theoretical foundations on these new technologies. Also oriented to the acquisition of knowledge about 'what the system can do', especially regarding the setup of the orchestration and control system, the allocation and management of resources, and the design of software panels
representing RTVE's own workflow.
- Installation of Production Studios (September 2020): Once the infrastructure works on the controls for studios 3 and 4 (July and August) were completed. Meanwhile, production was maintained for the same sets from a mobile unit located outside.
- Network and systems configuration: cameras / mixers / gateways / multiscreens (September
and October 2020): Commissioning and setup of the Cisco, Grass Valley, Lawo and Embrionix systems, pursuing the interoperability of them all as a whole.
- Approval of the pilot studio (November 2020): Demonstration of the degree of interoperability achieved between the equipment and the orchestration system.
- Training (October 2020 to February 2021):
Training courses covering all areas involved in the project, concerning equipment and systems operation and maintenance, the acquisition of skills in the management of the orchestration system, and familiarization with the management tools for the network electronics.
- Go live - Production Studios (March 2021): Commissioning of the first pair of studios (studios 3 and 4 for Program Production).
- Installation of News Studios (June 2021): Once the infrastructure works on the controls for studios 1 and 2 (July and August) are completed. In the meantime, production is maintained for the same sets from a mobile unit located outside and/or from the controls of studios 3 and 4, already in operation, whenever the schedules of the relevant programs do not overlap.
- Go live - News Studios (July 2021): Commissioning of the second pair of studios (News Studios 1 and 2).
- Installation of remote and virtual sets and the newsroom set (July/August 2021): Installation and commissioning of the sets close to the studios.
Technology elements
As we have seen, we are facing a complete IP video and audio installation for 4 television studios at the Barcelona Program Production Center. Furthermore, also contemplated is the coexistence of the new IP
installation with important 'legacy' systems, such as the signal recording system currently in operation (3 EVS XT VIA servers shared by the 6 studios) and the exchange of a hundred signals with the Central Control SDI matrix. All this, in an installation designed for HD-3G and dimensioned for redundancy, both for the 4 studios that were the subject of the contract and for the future integration of 3 additional studios (studio 7 for news programs and studios 5 and 6 for program production) as well as the Center's continuity system. But what specific elements does it comprise?
1. NETWORK ARCHITECTURE
The design of the media network is based on a network architecture featuring a SPINE & LEAF topology (with a single SPINE), comprising three LEAF nodes connected to a central SPINE. These nodes correspond to the equipment for News Studios 1 and 2, the equipment for Studios 3 and 4 for Program Production, and the connectivity equipment
with the legacy of the current SDI installation, based on gateways (shared resources). The SPINE and the first two nodes are located in a 'full IP' Equipment Room in the Studio area, whereas the third node is located in the Central Control Equipment Room in the Technical Block. Likewise, all the above-mentioned nodes are duplicated, thus making up two completely independent networks: the red network and the blue network. Said network electronics is based on Cisco equipment, the SPINE and the Central Control node being stand-alone switches, each of them featuring 36 100G ports from the Nexus series. The electronics of each of the two studio nodes are chassis-based, with capacity for up to 4 line cards of the same family. As mentioned before, each of these electronics is duplicated across both networks. The network solution provided by Cisco is called IPFM (IP Fabric for Media), based on the use of
transport at layer 3 (IP) using the multicast protocol. Cisco adds to the multicast transport protocol (PIM) its NBM (Non-Blocking Multicast) development, a process under NX-OS (the operating system used by the switches of the Nexus family) that provides extra intelligence to PIM for the efficient use of the links between nodes. DCNM is the network manager. The audio network is practically independent from the control network,
except for the fact that in very specific locations (GCL in the studio area) switches share audio and control signals and are therefore connected to the cores of both networks. In the rest of the audio installation, the switches are dedicated, thus forming two parallel networks (red and blue), with their respective cores installed in the IP Studios Equipment Room (studio area) and connected to the SPINEs of the media networks so as to enable audio-video alignment in studio production. In the media networks (video and audio), the SMPTE 2110 video over IP encapsulation format is used. There is a synchronization network based on PTP, comprising two separate, dedicated switches for this type of signal, operating in 'transparent clock' mode and installed in the Central Control Equipment Room. From them, the master PTP is sent to the SPINEs of the media network and, from this entry point, the PTP is distributed to all the media switches, which work in 'boundary clock' mode. The switches in this network receive their signal from two Tektronix SPG8000A generators locked to GPS, also located in the Central Control Equipment Room. The PTP signal is generated under a single SMPTE ST 2059-2 PTP domain, as there is initially no equipment in the installation that requires working under another PTP domain.
The design of the control network conforms to a 'pseudo SPINE-LEAF' architecture, in the sense that there are not two parallel, duplicated networks, but rather a distribution of the signals by zones towards the LEAF switches, each of them connected to both SPINEs, thus setting up a double path for the control signals instead. As in the previously described networks, the cores (SPINEs) are installed in the Equipment Room of the Studios, and the LEAFs
are distributed by zones for convenience in the aggregation of equipment. A LEAF is also deployed in the Central Control Equipment Room. Last, it must be mentioned that there is a camera network -separate from the ones previously described- that is not directly connected to the SPINEs of the media network, but to the CCU elements that send and receive video and audio signals in SMPTE 2110 format to and from said network. This network is made up of Cisco switches, also from the Nexus series. Within the camera network, the SMPTE 2022-6 video over IP encapsulation format is used. The main purpose of this network is to enable the block exchange of all signals between camera heads and CCUs. This is a particularly good approach for working with the Remote Set, so that signals are exchanged with this set without causing 'misalignment' between them.
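To make the idea of 'subscribing' to one of these flows tangible -purely as a generic illustration, with an invented group address and port, not RTVE's configuration- this is roughly what happens at socket level when a receiver joins a multicast essence flow; the resulting IGMP membership report is what the network, under PIM/NBM, uses to start forwarding the stream towards that port:

    # Illustrative sketch: join a multicast group and read raw RTP datagrams from it.
    import socket
    import struct

    GROUP, PORT = "239.1.1.10", 5004    # invented example flow address and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # IGMP join on the default interface: the network starts forwarding the flow here.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    packet, _ = sock.recvfrom(2048)     # one RTP datagram of the stream
    print(len(packet), "bytes received")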
Future growth of the networks
The architecture of this facility's media network encourages 'east-west' growth through the addition of new LEAFs, with the limit being the number of free links available for connectivity to the SPINE. For this reason, an expansion strategy will either involve adding new LEAFs by connecting them to the SPINE whenever there are links available (100G links), or expanding the capacity of the SPINEs, in which case they would be replaced by SPINEs having a larger number (and, where appropriate, size) of ports, with the released SPINEs then taking the role of new LEAFs, which allows some of the equipment to be reused. All current links between the LEAFs and the SPINE have been calculated:
- By considering for each 1080/50p video stream a workload of 2.5 Gbps (actual load: 2.15 Gbps), while operating in fact, as already noted above, in 1080/50i.
- By considering that the number of current links to each LEAF includes a SPARE link (not necessary in principle due to the load volume).
- By assuming that 15% of the ports of each type are left free in every LEAF.
- By taking into account the dimensioning of the future studios. In this regard, 3 links additional to those currently connected will be used when the time comes for these studios to join the system.
The control network is also expandable without any further trouble, simply by adding new control switches that increase the control ports for new equipment and are connected to both control cores.
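As a rough sanity check of those budgets: an uncompressed 1080p50 stream at 10-bit 4:2:2 carries about 1920 × 1080 × 50 × 20 bits ≈ 2.07 Gb/s of active video, which lands close to the 2.15 Gb/s actual load once packet headers are added; budgeted at 2.5 Gb/s per stream, a single 100 GbE link between a LEAF and the SPINE can therefore accommodate on the order of 40 such flows.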
2. GATEWAY PROCESSORS
The HD-SDI/IP/HD-SDI gateway-type conversion systems implemented in Barcelona use various products existing on the market, depending on their role in the installation, as we find the following environments:
• High-density gateway processors: used for the legacy signals associated with the Central Control matrix and for the upload and download signals to and from the junction boxes found on the set. C100 Lawo software-defined processors mounted in the v_Matrix enclosure are used, connected to the red and blue media networks via 40 GbE interfaces.
• Mini-module based gateway processors: used for discrete monitoring in controls and sets, featuring HDMI or SDI outputs. Converters by Embrionix, a brand recently acquired by Riedel, are used; they are fitted in small boxes and connected to the red and blue networks with 10 GbE interfaces.
3. MULTISCREEN SYSTEM
Generation of the multiscreen signals is based on the same Lawo v_Matrix platform that is used for the gateways, only in this case the C100 processor modules that carry out this work come with a multiscreen
license loaded and the rear adapted to the function to be performed. There are 19 modules of this type, licensed under vm_dmv, that offer the following processing capabilities (both simultaneously):
- INPUT STAGE: Generation of mipmaps (versions of input signals in various resolutions) for
up to 16 external HD-SDI signals, or up to 24 signals present on the network in different formats, provided they do not exceed 40 Gbps in aggregate. In our project, up to 24 network signals encapsulated in 2110 at 1080/50i are used. The mipmaps generated are made available on the network for subscription.
- OUTPUT STAGE: Generation of up to 4 heads (multiscreen outputs to the network, generated from mipmaps present on the network) in 2110 1080/50p format with up to 64 sources each (in our case limited to 25 for ease of use, not due to hardware restrictions). The sources may have been generated by the module in question or by any other module.
The 19 modules are divided into two clusters, one for each pair of studios (production or news) along with their associated technical controls. Each cluster processes the signals from its own studios, in order to minimize the exchange of flows between LEAFs and thus not unnecessarily overload the links between the LEAFs and the SPINE. Connectivity to the media network is carried out through 2 40 GbE interfaces (one for each of the red and blue networks).
The RTVE Engineering team during their visit to Pinewood.
4. CAMERA SYSTEM
The camera system comprises a pool of 23 XCU Enterprise UXF CCUs and 30 LDX-82 Elite camera heads, all of them from Grass Valley. Four of these camera heads are integrated with the StarTracker system from the manufacturer Mo-Sys for absolute positioning tracking, used for virtual reality and augmented reality.
Between the camera heads and the XCUs, Grass Valley Direct IP technology is used for the IP transport of video, audio, intercom and signaling through the previously described camera network. It also allows an Ethernet trunk between the camera head and the relevant CCU, which makes it possible to carry the robotics control data at the camera head. Furthermore, in its Direct IP+ version, this technology is used in four of the camera heads located at the remote set in Barcelona in order to convey said signals in JPEG 2000 compressed format through the Net Insight Nimbra network. The entire set of camera chains is controlled by the orchestration system through CCS-One servers, in order to have absolute flexibility and to be able to assign the camera heads deployed on any set to any production control at convenience, based on operational needs.
5. SHARED VIDEO MIXERS
There are two 72-in x 42-out Kahuna 9600 video mixers with 5 M/E banks each. Each mixer provides service to a pair of studios and therefore there are 2 Maverik panels connected to each set of electronics (4 panels in total). Given the electronics shared by each pair of studios, resources can be balanced between them so as to enhance, at a given moment, the resources allocated to one studio to the detriment of the other, all of this by governing the mixers from the VSM orchestrator.
6. MONITORING AND SYNCHRONISM GENERATION SYSTEM
Two Tektronix PRISM waveform monitors are available, which allow the analysis of IP signals traveling through the network. The synchronism generators are also from Tektronix, model SPG8000A. There are two for redundancy reasons, both generating a PTP sync signal locked to GPS as well as classic sync for signals outside the IP environment. The plan is to have all RTVE Sant
Cugat synchronisms depend on these new generators.
7. NIMBRA
Specific Nimbra cards with specific SFP modules have been purchased for the exchange of 2022-6 flows between sites and for the transmission of the robotics control data and the tracking data for the positioning of the sensorized cameras used for that purpose.
8. INTERCOM
An Artist 1024 box has been purchased with a card licensed for 110 two-way audio streams (8 channels per stream) and redundant connection to the red and blue audio networks, for intercom audio exchange with the audio mixers (16 streams per studio) and with the camera CCUs (46 streams, production and engineering, for 23 CCUs).
9. CONTROL AND ORCHESTRATION SERVERS
The installation comprises several servers to service Lawo's orchestration and control systems such as VSM, vSNMP, SmartDASH,
SmartSCOPE as well as VSM control hardware and software panels. It also has two servers for Cisco's DCNM system dedicated to network management: DCNM. Lawo's VSM (Virtual Studio Manager) is the system of choice as broadcast orchestrator and controller. The system controls all makes and models comprising the equipment involved in the installation, managing all the tallies and the labeling) in a centralized fashion. With an unlimited number of simultaneous users, each of them has an interface (hardware or software panel) adapted to the technical and operational requirements of their role. Regarding the control of the network, VSM gets integrated with the network electronics through the DCNM (Data Center Network Management) API, in such a way that it sends the orders that trigger the provisioning of new flows in the network, being the determination of the path of these flows left in the hands of IPFM's NBM
function. Subsequently, VSM receives confirmation and information on the characteristics and route of the established flows and presents it to the operator by means of tools that are more adapted to a broadcast environment. In this sense, the monitoring tools offered by VSM for this project provide both a graphical representation of the network topology with all endpoints, their location and how they are interconnected (SmartDASH), and a realtime, subscription-enabled analysis of the streams (at media packet level) existing on the network (SmartSCOPE). Cisco DCNM is also a valuable network monitoring and setup tool -more oriented to IT engineers- through which the network will be able to be configured and the host and flow policies that are decided can be applied in order to protect the network against flows from unauthorized devices or bandwidths that exceed those supported by the equipment, working in the 79
IP INFRASTRUCTURE
the commitment of members while making them feel they are part of a whole.
David Valcarce, RTVE Director. RTVE workshop on production in IP technology.
format already defined for this project. Last, and integrated with the tools described above, there is an alert system (vSNMP) that informs about the alarms that are triggered in the various devices by monitoring certain parameters (temperature, status of sources, etc.).
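To make the division of labour between the broadcast controller and the network layer more concrete, the sketch below models the request/confirmation exchange described above: the orchestrator asks for a flow, and the network layer decides and reports the path. It is a simplified, hypothetical illustration; the function, endpoint and field names are invented for the example and do not correspond to the actual VSM or DCNM APIs.

```python
# Hypothetical illustration of the orchestrator/network-controller exchange.
# Names and payloads are invented for this sketch; not the real DCNM/VSM API.

import json
from dataclasses import dataclass

@dataclass
class FlowRequest:
    sender: str          # e.g. "XCU-03.video"
    receiver: str        # e.g. "Mixer-A.in17"  (invented endpoint names)
    bandwidth_gbps: float

def provision_flow(request: FlowRequest) -> dict:
    """Stand-in for the call a broadcast controller would make to the
    network controller. Here we simply fabricate a confirmation in which
    the network layer has chosen the path through the leaf/spine fabric."""
    return {
        "status": "established",
        "sender": request.sender,
        "receiver": request.receiver,
        "path": ["LEAF-3", "SPINE-1", "LEAF-7"],   # decided by the network layer
        "bandwidth_gbps": request.bandwidth_gbps,
    }

confirmation = provision_flow(FlowRequest("XCU-03.video", "Mixer-A.in17", 2.2))
print(json.dumps(confirmation, indent=2))  # the kind of detail an operator panel can display
```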
Keys to success

In our opinion, and based on the experience gathered so far, the keys to this project's success rest on the following pillars:

Pre-training: Receiving the appropriate training, from those who have the required expertise, is the key to knowing the market and to keeping both the drafting of the project and its execution on the right track.

Interdepartmental cooperation: Nobody knows a lot about everything. Turning to those who know each system best is therefore a great support and provides assurance that no gaps are left unresolved. It also helps shape solutions to problems in the most successful way. In the end, the whole company joins forces in pursuit of the same goal, which increases the commitment of its members while making them feel they are part of a whole.

Visits to facilities: Being able to see first-hand how similar things have been done with good results in other parts of the world provides assurance that this is the right time to go for a change. These visits also provide ideas and reveal market trends.

Awareness of market movements: Market trends are a good indication of where the strongest efforts are being made. The fact that one format is more popular than another is a telling sign of how well it will survive into the future, especially if there is also clear adoption by the more established companies, which in turn guarantees a more extensive implementation.

Commitment to standards: It is both prudent and essential to require partner companies to commit to adopting, in the foreseeable future, the standards on which JT-NM is currently working. Although not everything is developed at the same moment in time, it should always be expected that whatever is in the approvals pipeline will be adopted by the companies with which a relationship exists. Otherwise, we would move away from interoperability in a short time, thus accelerating the obsolescence of the system.

Target the solution at a variety of manufacturers: Focusing the solution exclusively on a single manufacturer leads away from the important concept of interoperability. There is no better way of showing a commitment to interoperability than demonstrating it through the integration of different brands, choosing from each one the elements that best meet the relevant needs. In the case of the network electronics, the basis for the transport layer of the media flows, going for COTS switches has been a decisive step: their makers are the top experts in IP transport and they offer a wide range of models from which to choose the most suitable one for the project.

Ongoing communication with the 'client': Know the needs of the 'client' for whom the project is carried out, and open channels for involvement in the project so that they can contribute their experience, peculiarities and way of working, thus allowing the most balanced solution possible between their company goals and policies. At the same time, this reduces the stress generated among the staff in the face of change, as they feel taken into account and involved. The latter is the key to increasing their involvement and enthusiasm, which will in turn ease the change of mentality they necessarily have to go through.

Ongoing training: The acquisition of knowledge generates confidence and provides security, in the awareness that what we do has a theoretical foundation and has also proved to work well elsewhere. With good training there are fewer and fewer loose ends, and faith in the project increases. This training should be provided from the very beginning: prior to implementation it raises awareness of what can be expected from the system and facilitates the right compromise between what is needed and how it can be achieved; after implementation, it is useful for consolidating knowledge, processes and ways of acting once the system is in operation.

Continuous monitoring: During the implementation process, nothing should be left unchecked or exempt from approval. These systems are complex and we cannot afford to stray by failing to take part in every aspect of system setup, leaving everything to the integrator's discretion. The agreed criteria must be made clear and understood, remain open to contributions or improvements, and finally, with or without the adoption of changes, be approved and applied.
OPINION
Software Reduces Technical vs. Creative Tension for Video Production
By Matt Allard, The Vizrt Group

Since its inception, video production has faced an inherent conflict between technical requirements and creative aspirations. Software is fundamentally reducing this tension. The dramatic changes in consumer viewing habits continue to challenge the business models of many media organizations. The shift to viewing video in new ways, untethered from any fixed schedule and spread across smart televisions, computers, and mobile devices, requires different approaches from content creators. Specialized, dedicated technology implementations are giving way to more ubiquitous IT-centric methodologies in many aspects of daily life. Producing and delivering media is no exception.

The most common IT elements evolving for media include software-defined workflows, networking, and computing. The common thread linking these elements together is IP transport. By its very nature, IP technology efficiently handles all the data types of interest to media production and distribution. IP is already the dominant methodology, as seen in the rapid ascendance of streaming infrastructures and streaming services worldwide. With an IP backbone, the applications that manage these services have become virtualized, hosted in cloud environments, and are even being offered in software-as-a-service (SaaS) models, so that what is needed is used when it is needed. With media distribution and delivery to all types of screens now largely migrated to IP streaming, what is the impact upstream on production?

Media production has long been linear in nature: cameras and devices connected with point-to-point cabling. Systems and studios have been isolated from each other, and linking everything together has meant burdensome cabling requirements and complexity. Compare this past approach with using IT and IP connectivity, which simply requires systems and devices to be plugged into the network and permits them to connect
through software. Now every device is available for use in production and distribution, without connecting them directly to one another.

This approach provides more options for producing media. It makes the process simpler, leading to substantial enhancement of the quality and creativity of productions, and of their delivery whenever, wherever, and however they are watched.

Software-based tools lend themselves to function consolidation, combining tasks in different ways for different users and offering more flexible usage that matches changing resource requirements. Software tools can also automate processes, making production more efficient and reducing the possibility of errors. Rather than rigid production clusters, creatives gain a more agile way of working that moves away from the technical to focus on enabling more stories, better told.

An increasing number of software-based solutions are being offered to cover every aspect of production. While some of these can be located on-premises with off-the-shelf computer hardware, there is increasing interest in virtualized solutions that can be deployed in cloud environments, providing ready access from multiple locations for multiple users collaborating in real time.
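As a trivial illustration of the kind of error-reducing automation mentioned above, the sketch below checks a rundown for missing media before it reaches playout. It is entirely hypothetical: the rundown structure and field names are invented for the example and do not belong to any particular newsroom or playout product.

```python
# Hypothetical illustration of software automation in production:
# validate a rundown so missing assets are caught before air.

rundown = [
    {"slug": "OPENER",    "media": "open_v3.mov", "duration_s": 15},
    {"slug": "TOP STORY", "media": None,          "duration_s": 90},  # not yet attached
    {"slug": "WEATHER",   "media": "wx_tue.mov",  "duration_s": 45},
]

def validate(items: list[dict]) -> list[str]:
    """Return human-readable problems instead of letting them reach playout."""
    problems = []
    for item in items:
        if not item.get("media"):
            problems.append(f"{item['slug']}: no media attached")
        if item.get("duration_s", 0) <= 0:
            problems.append(f"{item['slug']}: invalid duration")
    return problems

for issue in validate(rundown):
    print("WARNING:", issue)   # e.g. "WARNING: TOP STORY: no media attached"
```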
News production has been software-centric for some time. Now, by surrounding the software layer with changes in networking and transport, more possibilities have arisen. Small-camera technology has vastly improved, so that smartphones and tablets record media content and, with Wi-Fi connectivity, can even serve as real-time sources for live events. The opportunities for building stories have become more versatile. Images, clips, audio, and graphics elements can be searched from anywhere in cloud-based archives using metadata tags. Browser-based editors can be virtualized to modify media and add graphics. Newsroom systems can also be virtualized so journalists can plan out placeholders and rundowns wherever they are located. Especially for scripted news productions, the entire program output, with switching, effects, audio, and graphics, can be run from various locations using remote user interfaces and customized software panels.

Many of these same capabilities are available for sports production as well, including the ability to run productions remotely with a smaller number of staff on-site at events. Esports is an excellent example of how software and IP streaming open up new possibilities that reduce production complexity and keep costs low. The opportunity to further monetize sports content through reuse is more manageable with software orchestration functions such as metadata tagging, logging, and automated highlights creation. Another opportunity is advertising technology that uses virtual reality software to overlay existing field-side advertising boards with realistic virtual signage in real time throughout events for tailored marketing. Augmented reality software uses image-based tracking and keying technology for immersive 3D graphics that display additional information such as statistics. This creates more entertaining experiences for viewers.

The line between linear and non-linear programming continues to blur for viewers. Some streaming services offer a hybrid approach with a mix of scheduled programming and video-on-demand content. For media content creators this is both a challenge and an opportunity. Live productions can be recorded and made available both for non-linear access and in schedules to be repeated as linear programming. The key, again, is software: scheduling and automation applications do the work, while the content itself sits in cloud storage for flexible access.

Incorporating IT networking knowledge in order to use IP for transport is critical to success for media organizations. The need to move high-bandwidth video streams and large files around will only increase, and that presents technical challenges. This necessity, combined with technologies including high-quality compression, software-defined networks, and network accelerators, means typical networking infrastructure practices have to be amended for optimized performance. The provisioning requirements for cloud deployment must be well understood to achieve the expected performance parameters.
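A minimal sketch of the kind of provisioning sanity check implied above: estimating whether a day's recordings can be pushed to the cloud within the available turnaround window. The file sizes, uplink figure and efficiency derating are invented assumptions for illustration only.

```python
# Illustrative provisioning check: can a set of recordings be moved to the
# cloud within the turnaround window? All figures are invented assumptions.

def upload_hours(total_gb: float, uplink_gbps: float, efficiency: float = 0.7) -> float:
    """Rough transfer time, derating the nominal uplink for protocol
    overhead and contention (the `efficiency` factor is an assumption)."""
    usable_gbps = uplink_gbps * efficiency
    seconds = (total_gb * 8) / usable_gbps   # GB -> gigabits, then divide by Gb/s
    return seconds / 3600

# e.g. four ISO recordings of 500 GB each over a 10 Gb/s uplink
hours = upload_hours(total_gb=4 * 500, uplink_gbps=10)
print(f"~{hours:.1f} h to upload")   # roughly 0.6 h under these assumptions
```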
The ultimate solution here is software. With attention to the user experience, software interfaces can provide easy access for all types of content creators. In some cases, systems provide the means to build do-it-yourself software interfaces that can be customized precisely to particular user and workflow needs. This empowers organizations to bring in the talents of creative generalists rather than technical specialists. Software has become the primary plotline leading to the ultimate goal: more stories, better told for viewers, no matter how they choose to access content.
DOP IN TV SERIES
ROSS EMERY ACS
A DOP Raised by cameras

With more than 20 years of experience in cinema and TV, Ross Emery has the privilege of picking the projects that motivate him. His latest job is "Raised by Wolves", a sci-fi series set on a faraway planet and produced by Ridley Scott. We had the opportunity to chat with Ross about this production and his long career, as well as to take a broader look at the film industry.
You've been linked to cinematography (sorry to point this out) for a very long time. What has changed since you started? What have been the main changes in the profession and in the technology used in it?

It doesn't feel like a long time, but yes, I've been there for a lot of changes, major and minor. The main one is moving from shooting on film to digital, but that is a simple transition. I believe that cinematography has actually changed markedly in the way it is involved in the process. It used to be a highly technical process, with technical aptitude being a prerequisite for progression to DP. Without a meaningful amount of knowledge of the chemical processes, you could not really do the job at a high level. Now digital cameras are much more forgiving and the technical knowledge needed is pretty simple to learn.
Picture from Raised by Wolves
That does not mean that DPs who shot film were better; in fact, the technical aspects could really limit creativity by imposing technical boundaries that were not supposed to be crossed. Some DPs would never contemplate underexposing film, for example, or exposing outside the guidelines of the manufacturer. Digital has allowed cinematography to cross over into a much more creative involvement, more intertwined with story and character, taking risks to give the audience a more unique experience better suited to the movie's intentions. The other main difference is that now, with high-quality on-set monitors where everyone can see the image, often with the look applied, you need to make sure all the creative minds are on the same page, and use your political skills so that any disagreements about the look can be negotiated and ideally dealt with in pre-production. This is an important one that really needs to be part of the cinematographer's skill set now. I still believe that the cinematographer should take the lead in creating the look of the film, bringing all the collaborative ideas of the director, production designer, VFX lead and actors into a single application, the image, that will serve the film best.
Has your motivation evolved since your first features? What drives you?
DIGITAL HAS ALLOWED CINEMATOGRAPHY TO CROSS OVER INTO A MUCH MORE CREATIVE INVOLVEMENT, MORE INTERTWINED WITH STORY AND CHARACTER AND TAKING RISKS TO GIVE THE AUDIENCE A MUCH MORE UNIQUE EXPERIENCE
I used to be consumed with the look at the expense of other parts of the film. I am now much more holistic in how I approach a project. I am still amused when we have awards where "beautiful" pictures are rewarded; I find that pretty annoying now. When people tell me I've shot something "beautiful" or "gorgeous", I try to correct them and ask them to use the term "appropriate" instead. You should always be looking to shoot images that are appropriate: if the story demands harsh, ugly images, that's what you must shoot, otherwise you are damaging the experience for the audience. It's a long journey for cinematographers to reach that point, to shoot a bad shot to serve the film in the best way. My greatest fear now is to be awarded some trophy for a film that I knew was not good work. There are so many award ceremonies and festivals now that hand out copious amounts of awards, a lot of the time to projects that are simply popular, fashionable, or films that just made a lot at the box office. I would rather have a film honoured for being a good film and have no one notice that the cinematography just did its job and made it better.
How important is technology in your shoots? Does it limit you? Does it broaden your horizons?

Technology is a broad term, and I apply it differently from some. I find myself lucky to be working at a time when I can shoot with such a huge range of technology, from the latest high-resolution cameras with lenses so well made that the image can look technically perfect, to older, less perfect lenses to apply an imperfect look, and a camera system that
processes images in such a way that it gives a certain texture. In this way, technology can broaden your choices, if you include all definitions of technology. I will always try to make sure that the technology does not impede the flow of the processes; it can be great to see everything under perfect conditions, but if it costs you a few setups per day on the schedule, it is harming the process.
What would you say have been the most relevant technological updates that you have adopted during this time?

If I had to pick one, it would be the Digital Intermediate process. Some would automatically say shooting digital, but what we can do in the DI now is just brilliant, and here is the reason. In the past you would make decisions based on the script and talks with the director, but sometimes a film gets found in the edit, or an actor gives such a powerful performance that it changes the emotional balance of the film; it happens quite a lot.
THE MORE DIFFERENT TECHNIQUES AND STYLES YOU CAN OBSERVE IN PROCESS, THE MORE YOU CAN RECALL THEM IN SIMILAR CIRCUMSTANCES ON YOUR OWN PROJECTS.
With the DI tools I can adjust to suit the new directions and not be stuck with a decision made on the shoot day, or when you see the cut of a film with music for the first time and you change some ideas about the emotion you want out of the images. With a DI it's always great to know you can do this. I would always try to be as accurate as I can with the shooting of the film initially, but knowing you can enhance and elevate visually later is a powerful tool.
We know that it is your job to adapt to each project, its budget and the director's vision, but are there some recurring traits that mark you as a cinematographer? What's your signature?

I would hope there is nothing too obvious, but we all have those little signatures that find their way into the work. I think it's changed. I used to do shallow depth a lot, but it became really popular, so I kind of didn't want to do it anymore. I liked it because you could really control what the audience looked at. Now I try to use different things. I like using front light at the moment and then controlling the fall-off; it is fun. Everyone defaults to backlight because that's what they get taught in film school or online tutorials, so I'm always looking to subvert the current trends; it keeps you thinking. As for a recurring one, it's probably the shallow depth, but very occasionally now. I nearly always carry the
Panavision 50mm T1 Superspeed lens in the kit, and I can't help pulling it out for the shot that needs it, usually a close-up where the character is internalising a performance.
Looking back on your career, we’re sure that sharing shoots with other cinematographers has made you grow as an artist. What experiences have influenced you the most? What techniques or styles from other DPs have you made your own?
The more different techniques and styles you can observe in process, the more you can recall them in similar circumstances on your own projects. Someone like the late, great William Fraker had a really old Hollywood style that was very much about eliminating all the variables and risk and letting the actors and director have freedom on the set. He never got cornered and was always thinking ahead to see problems; he would have organised cover for himself with more lighting units and a style that could quickly adapt to whatever the director wanted, while still supplying really good cinematography. Darius Wolski was more "indie", taking risks and pushing the technology to achieve amazing results that were unique. It was a pleasure to have worked with both of them and to see the different approaches. Here's a secret: I would have been quite happy to be a 2nd unit DP for ever. It's such a great job on a production; you get all the cool stuff to shoot, the problems to work out, and most of the time you're the
guy who saves the production's ass by getting additional shots or enhancement shots; most trailers are 2nd unit shots. The other great thing is you get to work with other DPs on a really great level. Watching guys like Darius Wolski, Bill Pope, Tom Seigel and Bill Fraker from the position of 2nd unit DP is such a great experience. The time I spent alongside those DPs was so valuable to my growth as a DP. I don't know how someone can emerge from a film school as a fully formed DP without time as an AC or operator or 2nd unit DP. A lot of what these DPs taught me, along with techniques, was politics and crew management, which I don't think can be taught; you have to see it in process.
Your job in 'Raised by Wolves' is your first experience in TV (or at least TV series) as cinematographer. How has it been? Did you enjoy it? Was it very different from your work in movies?

The day-to-day was not that different. The schedule was a little more challenging, but on this project we had a great creative team, so it never felt like we were compromising anywhere, and the creative was always the primary consideration. We did make conscious decisions to avoid what would be called traditional coverage, just covering lots of close-ups with multiple cameras to make the day. It never felt like what is called TV shooting, but the subject matter contributed to that. It's a very visual project and we always tried to keep pushing that.
If cinematography in television series had not evolved so much in recent decades, do you think you would still have been interested in being part of "Raised by Wolves"?

Truthfully, probably not. I had heard stories from people who worked on episodic TV shows, stories about 16-hour days and four-camera setups to cover what was needed, shooting 10 pages of dialogue a day. TV and streaming platforms like Netflix, HBO Max and the rest are really competitive now, and they want high production value and interesting visuals, so this is really driving a great deal of high-class storytelling in that world now. The stories have to be compelling, and the execution of VFX, cinematography and production design has to keep up. It's great to see.
Although you did not shoot with Ridley Scott, who was in charge of the first two episodes (and executive production), you expressly asked to be
I DON’T KNOW HOW SOMEONE CAN EMERGE FROM A FILM SCHOOL AS A FULLY FORMED DP WITHOUT TIME AS AN AC OR OPERATOR OR 2ND UNIT DP.
present on the shooting of the first few episodes to learn more about how the cinematography was being defined. Did these weeks make you change or evolve your work on your five episodes?

I knew this was going to have a really interesting and surreal look. In this world that Ridley was building, and after having worked with him on Alien: Covenant, I knew that the key to the style was seeing how he builds the look on set. There are always great references and storyboards, but nothing is better than watching it happen and seeing the decisions being made. Also, like a good 2nd unit DP, I know the knowledge you gain from talking to the gaffer, camera operator, script supervisor and set dressers is so valuable in making sure you are hitting all the right notes.
We're facing a science fiction production that is not as sci-fi focused as it seems. When you delve into the footage, you get an almost ethnographic feel in some shots. What do you think?

This was very interesting. There are some very big themes being dealt with in the show, and setting it in an almost stone-age environment and seeing the clash of high tech with stone-age practices was very cool. The family was like a tribe, and I think seeing their development as you would in a Nat Geo documentary was a great way to go. It becomes very
refreshing not to be seeing the sci-fi tropes that are so easy to fall into in most sci-fi shows.
And by the way, what were the camera and lenses chosen for the project?

We shot Alexa SXTs and Alexa Minis with Panavision lenses, mostly Primos with some older Super Speed and Ultra Speed lenses. We didn't shoot 4K or large format, as it was deemed not needed and HBO Max were fine with the resolution. We shot ARRIRAW for maximum latitude.
We really liked the color treatment. How did you approach this field?

We wanted to give the planet a harsh look: dust, wind, heat, and then a freezing cold night with snow and mist. Desaturation was the main control for the day looks, along with not shying away from front-lit scenes. The night was more difficult. The conceit is that the nights on Kepler-22b, the planet, are illuminated by two large moons, so night there is never really dark; it's blue, but like the brightest full-moon night on Earth ever. We shot day-for-night for all night scenes on the planet, which usually meant overfilling the shadows and controlling the sun direction where possible. The standard day-for-night setup was 4 x 18K HMIs through 20x12 half-grid-cloth frames close to the camera, very difficult for the actors sometimes.
What was the biggest challenge you had to face in this production?

Maintaining a very high level of appropriate cinematography to keep up with the complex and intriguing storylines. Every new script had new challenges and you have to really keep driving the look to keep up. We could never just fall back on standard coverage or framing and lighting; we had to keep pushing the boundaries and find new, interesting ways to serve the story.
We've spoken over the past few months with several cinematographers and they all point out that LED lighting has been a game-changer for the profession. Do you feel that way too?

Pretty much. It's great not to have hot, sweaty sets anymore, and to have quicker setups with lighter units. I still have to catch myself sometimes, because a SkyPanel can't do what a 2K Fresnel can do, and sometimes you just need a 2K Fresnel. What I do like are the light fabric units like LiteMat, very quick, and
Picture from Alien: Covenant
you can attach them to the wall of the set and you have a lit setup without the cables and C-stands that can clutter up a set.
Due to Covid-19, the TV and cinema industry relies more than ever on remote workflows. How has it affected your work? Do you think the industry is ready to adopt these workflows permanently? Would this, in your opinion, be positive?

I think some of the protocols we have seen come in over the last year are great. Zoom production meetings rock! Much quicker, better information flow; I love them. I don't want to go back to a room full of people for five-hour production meetings. Remote grading sessions
have been around for a while now and they are really good. It's nice being in the room, but remote grading is something I can see myself doing more of.
Finally, what's next for you? Would you return to feature films? Will you stay in the TV industry for a while?

My two projects since Raised by Wolves have both been features, and I was going to go back for season 2 of Raised by Wolves, but it looks like the Covid world will stop that from happening. I have no problem doing more TV. As always, I pick my jobs where possible based on people and material; then you normally have a good experience. If you do a job for the money, you get the money and not much else. It's a balancing act, we all have bills to pay, but sometimes the job that is more interesting, with the good people, will make you more money in the long run.