Intelligence for the media & entertainment industry
SEPTEMBER 2018
HEY! YOU! GET INTO THE CLOUD
www.tvbeurope.com
FOLLOW US Twitter.com/TVBEUROPE / Facebook/TVBEUROPE
CONTENT: IS IT TIME TO PICK AND CHOOSE?
Over the past few months we've been writing a lot about content owners who have decided to offer their products direct to the consumer. It's been well reported that Disney has taken its content away from other streaming services as it prepares to launch its own SVoD service next year. DC Entertainment, home to the likes of Superman and Wonder Woman, is preparing to launch its streaming service DC Universe this autumn. Juventus recently launched an in-house offering, opting to go OTT instead of connecting with fans via a linear TV channel. And even Charlton Athletic is streaming live content on its own website, enabling fans to watch coverage of all U23s, U18s and women's teams. Plus, US uber-producer Tyler Perry has said he's been talking to Viacom about launching his own OTT service.
So that's got me thinking. How far away are we from more content creators and owners launching their own DTC services? And what effect would that have on our side of the industry - will they need to deliver content via a CDN? Or the cloud? Will they be purchasing more hi-tech editing suites because viewers will expect high quality content? What about subtitling? Graphics? Security and piracy? As viewers change their habits, surely that can only be a good thing for the technology side of the business, can't it? I think those are some of the questions I'm likely to be asking at IBC this month. Because I can certainly see a time when I can pick and choose the content or OTT service that I want - which will be Liverpool FC, Kenneth Branagh, The West Wing and Robert Downey Jr (a little insight into my world for you).
I have to say I'm a bit amazed at how quickly IBC has snuck up on me. It feels like I've only just got back from last year's show. It looks set to be another great opportunity to meet colleagues within the industry and hear what everyone else thinks are the current key trends. If you see me dashing from hall to hall please stop me and say hello.
JENNY PRIESTLEY, EDITOR
Editor: Jenny Priestley jenny.priestley@futurenet.com
Senior Staff Writer: Colby Ramsey colby.ramsey@futurenet.com
Designer: Sam Richwood sam.richwood@futurenet.com
Contributors: George Jarrett, Mark Layton
Digital Director: Diane Oliver diane.oliver@futurenet.com Content Director: James McKeown james.mckeown@futurenet.com
MANAGEMENT Managing Director/Senior Vice President: Christine Shaw Chief Revenue Officer: Diane Giannini Chief Content Officer: Joe Territo Chief Marketing Officer: Wendy Lissau Head of Production US & UK: Mark Constance
ADVERTISING SALES Sales Manager: Peter McCarthy peter.mccarthy@futurenet.com +44 207 354 6025 Japan and Korea Sales: Sho Harihara sho@yukarimedia.com +81 6 4790 2222
SUBSCRIBER CUSTOMER SERVICE To subscribe, change your address, or check on your current account status, go to www.tvbeurope.com/page/faqs or email subs@tvbeurope.com
ARCHIVES Digital editions of the magazine are available to view on ISSUU.com Recent back issues of the printed edition may be available please contact lucy.wilkie@futurenet.com for more information.
INTERNATIONAL TVBE and its content are available for licensing and syndication re-use. Contact the International department to discuss partnership opportunities and permissions International Licensing Director Matt Ellis, matt.ellis@futurenet.com Future PLC is a member of the Periodical Publishers Association
All contents © 2018 Future Publishing Limited or published under licence. All rights reserved. No part of this magazine may be used, stored, transmitted or reproduced in any way without the prior written permission of the publisher. Future Publishing Limited (company number 2008885) is registered in England and Wales. Registered office: Quay House, The Ambury, Bath BA1 1UA. All information contained in this publication is for information only and is, as far as we are aware, correct at the time of going to press. Future cannot accept any responsibility for errors or inaccuracies in such information. You are advised to contact manufacturers and retailers directly with regard to the price of products/services referred to in this publication. Apps and websites mentioned in this publication are not under our control. We are not responsible for their contents or any other changes or updates to them. This magazine is fully independent and not affiliated in any way with the companies mentioned herein. If you submit material to us, you warrant that you own the material and/or have the necessary rights/permissions to supply the material and you automatically grant Future and its licensees a licence to publish your submission in whole or in part in any/all issues and/or editions of publications, in any format published worldwide and on associated websites, social media channels and associated products. Any material you submit is sent at your own risk and, although every care is taken, neither Future nor its employees, agents, subcontractors or licensees shall be liable for loss or damage. We assume all unsolicited material is for publication unless otherwise stated, and reserve the right to edit, amend, adapt all submissions.
Future plc is a public company quoted on the London Stock Exchange (symbol: FUTR) www.futureplc.com
Chief executive Zillah Byng-Thorne Non-executive chairman Peter Allen Chief financial officer Penny Ladkin-Brand Tel +44 (0)1225 442 244
IN THIS ISSUE SEPTEMBER 2018 06 5G BT Sport’s Matt Stagg on how improved connectivity will revolutionise the broadcast industry
12 Playing out of the cloud
Jenny Priestley talks to Discovery Inc following its move to the cloud
17 More storage options than Ikea
George Jarrett discusses the cloud with the BBC’s Andrew Dunne
20 Sounding it out
Mark Layton gets the lowdown on the curious world of subtitling
34 IBC: Transforming theory to application
IBC CEO Michael Crimp offers his thoughts
40 IP meets content
Lawo’s Andreas Hilmer weighs in on the company’s new IP technology
52 Cloud chasers
Colby Ramsey finds out how Sony helped Red Bull stream a motorcycle race live from the Swiss mountains
64 First out the gate
Colby Ramsey discovers how Dejero GateWay gave the Team Sky cycling team the edge during this year’s Tour de France
66 Looking forward in post
How Colorfront Transkoder is proving invaluable at Roundtable Post in London
70 Preparing for the future
Jenny Priestley hears from the EBU about their UHD tests with HFR, HDR and NGA during this summer’s European Championships
OPINION AND ANALYSIS
How 5G will revolutionise the broadcast industry By Matt Stagg, director of mobile strategy, BT Sport
The introduction of 4G had a huge impact on the entertainment industry. Suddenly, people could watch high quality, live or on-demand content wherever they were and whenever they wanted. Fans don't have to miss out, and no one ever has to be bored on the commute without their favourite series. 5G will take this to a new level and fundamentally change the way people experience small screen entertainment. Use cases for consumer applications are fairly well documented and mainly based around immersive experiences – virtual, augmented and mixed reality. Low latency and enhanced mobile broadband will enable this emerging form of interactive entertainment. What is not widely documented is how 5G will fundamentally change the way entertainment is produced. Prince George of Cambridge was born in July 2013, at St Mary's Hospital, London. The hospital is a stone's throw from EE's old offices in Paddington. OB trucks were parked outside the hospital for weeks, and we got chatting to the production teams manning them that hot summer. They wanted a back-up and an alternative to the satellites they were tied to. They wanted to know that they didn't have to book space. That they had some flexibility if/when the established system let them down at a vital moment. And they wanted to test the use case: could a mobile network enable a broadcast when it wasn't possible or viable to send an OB truck? As with so many aspects of 5G, we've trialled the capability and proven the use case on 4G. But 4G doesn't have the ability to deliver with a cast iron guarantee of quality, or at the sort of scale that will see significant change to the industry's ways of working. 4G is a back-up for news crews, not a first-choice option. 5G would enable the broadcaster to send only the cameraman and reporter instead of a whole production crew. The footage can get back to the studio in a fraction of the time it takes a physical transfer. With a 5G-enabled camera you are ready to film as soon as you step out of the car.
5G could be the difference between an exclusive and a "me too." A live production needs bandwidth and low latency, and while that can be enabled on 4G, it can't be guaranteed. Using network slicing in a 5G core, we will be able to provide a dedicated broadcast-grade network that will give that guarantee of performance. And with that, you can achieve remote production. Network slicing is a concept in the official 5G standard that allows a network operator to create what are effectively virtual networks to support industries that have differing requirements for bandwidth, latency and quality of service. 5G is not a one size fits all network – it's a system. One of the problems experienced when filming on location is getting the day's rushes back to the studio to begin the process of reviewing and editing. In their raw uncompressed format some of these files, particularly as camera technology evolves, can be terabytes of data. So, currently, it's quicker to send a courier than to use digital infrastructure. The term 'rushes' originates from the need for urgency in getting them into the hands of the editors – speed matters, and it saves a lot of money. With the dedicated bandwidth of a 5G 'slice', digital can present a solution that saves a significant amount of money. The 5G option will be quicker than a motorbike transfer, and uploads can happen multiple times a day if required. The whole accepted process (which is really a euphemism for compromise) can be rethought. The hype around 5G is not insignificant. It's being pitched as a digital panacea: a cure for the woes of consumers, and for the apparent stagnation of the mobile industry, which will be given a new lease of life by having a route into new vertical markets. A natural reaction to that is emerging: to rubbish all that and downplay 5G to the point where it may be all cost and little guaranteed return. The truth, of course, is somewhere in between. And in the media and broadcast space, the hype is very close to reality. The industry can adopt new technologies to become faster, more efficient and more flexible – no more compromise.
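A rough back-of-envelope calculation makes the courier comparison concrete. The file size and throughput figures below are assumptions chosen purely for illustration, not measured 4G or 5G performance.

```python
# Illustrative only: how long a day's rushes take to move at assumed uplink speeds.
def transfer_hours(size_terabytes: float, throughput_mbps: float) -> float:
    bits = size_terabytes * 8e12                 # 1 TB = 8 x 10^12 bits (decimal)
    return bits / (throughput_mbps * 1e6) / 3600

rushes_tb = 2.0  # a hypothetical day of uncompressed rushes
print(f"Shared 4G uplink (~50 Mbps assumed):  {transfer_hours(rushes_tb, 50):.0f} hours")
print(f"Dedicated 5G slice (~1 Gbps assumed): {transfer_hours(rushes_tb, 1000):.1f} hours")
# Roughly 89 hours versus 4.4 hours: why the motorbike courier wins today, and why
# it stops winning once a broadcast-grade slice is available.
```

On those assumed numbers, a dedicated slice turns an overnight-plus transfer into something that can happen several times a day, which is the point being made above.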
OPINION AND ANALYSIS
Thinking clearly about cloud security By Ian Redgewell, head of media management, Globecast
Use of the cloud for the storage and delivery of content to third-party service suppliers has been growing in recent years and in many cases is now accepted practice. In addition, we are seeing the introduction of cloud playout services, not least from ourselves. One of the key sets of questions that we are asked centres on security issues when using the cloud. How does it change the content security dynamic? We have to differentiate between private and public cloud. Using private cloud, there really shouldn't be that much of a change in terms of security because the infrastructure is within a facility and it's under the control of facility staff. If you have a private cloud and mistakes are made - it's not quite as secure as it needs to be - it's still within your location and you have some "wiggle room". However, if mistakes are made using public cloud and users rely on the security defaults or it's misconfigured, then the risks are far greater. When you are using the public cloud, you have user and system accounts that you can log in to. There are a large number of storage, network and firewall settings; it's very extensive in terms of the configurations that can be applied. There are simple things for users, like two-factor authentication, that have to be enabled. You can lock it down to accounts, IP addresses, networks and services within public clouds – it really is comprehensive. The main point is that staff absolutely need to know what they're doing, and that's far from always being the case. What's key is that public cloud is as secure as you make it. It's clear that for many this is still a new concept and hesitancy is completely understandable. As a service provider, we have vast experience and, to state the obvious, we know what we are doing; it's a question of customers trusting in that experience. Let's not forget that cloud use – both in terms of
delivery and playout – can deliver real innovation and new levels of flexibility and cost-effectiveness. Pre-cloud, a file would be sent from a customer or partner of a customer to us and at that point it's our responsibility. That can still be true now. Interestingly, what's beginning to happen – and is facilitated by cloud use – is instances whereby the content doesn't have to actually move. For example, we have a customer whose content is in the cloud and if we want to access it, initially we don't need to have a copy, we simply need permission to read the file. We can read directly from the storage in the cloud that the customer – or their supplier – is responsible for and output it as a stream. They retain control and they retain responsibility for its security. They are simply giving us read access. This is still very new for the industry. However, it would be remiss of us not to highlight the fact that as an industry, given the overall shift to IT networks and infrastructure, we are overdue a major security incident, despite the best efforts of all involved. If you think of the potential of interrupting something millions of people are watching, then the consequences are significant. We are moving into the cyber world faster than anyone can truly keep up with; we have to make sure that's understood. We have to work together. We can manage all aspects of cloud use as a one-stop shop, dovetailing this with the multiple media management, playout and delivery options our customers require. They are having to satisfy an ever-increasing amount of non-linear and linear output and they need to be able to do this as quickly and cost-effectively as possible. Cloud use is key to this. Costs have to be reduced and content owners need service providers with these new capabilities to satisfy their requirements. Expert security management is key to this.
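As a minimal illustration of that "read access rather than copies" model, the sketch below, assuming AWS S3 and boto3, shows a content owner granting a service provider read-only access to its bucket. The bucket name and account ID are placeholders, and this is not a description of Globecast's or any customer's actual configuration.

```python
import json
import boto3

CUSTOMER_BUCKET = "customer-content-library"   # hypothetical bucket name
PROVIDER_ACCOUNT = "111122223333"              # hypothetical AWS account ID

read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowProviderReadOnly",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{PROVIDER_ACCOUNT}:root"},
        # Read-only: the provider can list and fetch objects, never write or delete.
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            f"arn:aws:s3:::{CUSTOMER_BUCKET}",
            f"arn:aws:s3:::{CUSTOMER_BUCKET}/*",
        ],
    }],
}

# The customer (bucket owner) applies the policy, keeping ownership of, and
# responsibility for, the content; the provider only reads and streams from it.
s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=CUSTOMER_BUCKET, Policy=json.dumps(read_only_policy))
```

The same thinking applies whichever public cloud is used: it is the permission, not a copy, that moves.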
OPINION AND ANALYSIS
Software-defined processing in IP and SDI infrastructures By Sebastian Schaffrath, chief technology innovation officer, Lynx Technik
Broadcasters are starting to embrace a software-defined infrastructure and moving away from dedicated and proprietary hardware. This helps them create a flexible, scalable, agile and easily upgradable architecture – one adaptable enough to address new and emerging requirements. With increased processing power and network bandwidth, we are now starting to see entire facilities being built out using software-defined principles. By leveraging powerful, general purpose processing and efficient software solutions, you remove the lag introduced by proprietary hardware. Therefore, performance is enhanced, upgrades can be made relatively quickly and costs can be managed more effectively. App-based signal processing and conversion is a key factor in solving challenges around flexibility, scalability and cost optimisation. For live uncompressed signals, the concept of software-defined processing yields many advantages: instead of using fixed processing cards that are fed by a fixed wired router (this applies to SDI as well as IP) and having to re-route/re-switch the signal to bring it to the card with the right functionality, activating the relevant app on a generic hardware platform drives down the amount of bandwidth used and speeds up operations. Furthermore, the switching/routing complexity introduced by processing loops, and the stress on operators, is reduced, increasing operational safety and efficiency. In terms of scalability, app-based processing platforms offer a future-proof investment for broadcasters to grow with evolving video and audio standards. New standards and compatible devices no longer require a new piece of hardware, but simply an exchange or addition of an updated or new app.
Bandwidth for additional signal transport to dedicated processing cards is reduced to a minimum with a software-based approach. Signals remain on their respective IN and OUT positions while loading a different app adds the functionality in between. Therefore, no additional switching/routing is necessary. To help drive this adoption, greenMachine was designed to deliver a platform for the migration from an SDI to an IP based system, all while removing legacy hardware and offering software-defined processing that works with flexible software APPs instead of fixed hardware. This opens an opportunity for a completely new and more efficient way of designing a broadcast system that embraces scalable performance and adaptable applications. Our customers recognise the importance of investing in flexible core infrastructures. In the transition to IP based infrastructures, our customers start planning their systems with a fresh view on system efficiency. As standards and technologies evolve, many of our customers have decided in favour of a flexible, software-defined solution. For example, we have customers using our platform with an APP for test signal generation during the line-up phase of a sports event. After the line-up phase, these devices are re-defined via software and act as audio embedders. For several years, Lynx Technik has been pioneering the field of app-based signal processing. greenMachine is the result of this expansive research and development, with a clear focus on the challenges discussed in this article. The approach of having a software-defined platform that grows with the demands of broadcast production ensures an implicitly future-proof investment.
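The app-swap example above (the same box acting as a test signal generator during line-up, then as an audio embedder on air) can be sketched as a toy model. The class and app names below are invented for illustration and are not greenMachine's actual API.

```python
# Toy model of app-based processing: the node's physical IN/OUT connections never
# change; only the software app loaded onto it does.
class ProcessingNode:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.app = None                      # the node's function is the loaded app

    def load_app(self, app):
        """Re-define the device in software; no re-cabling or re-routing needed."""
        self.app = app

    def process(self, signal: dict) -> dict:
        return signal if self.app is None else self.app(signal)

def test_signal_generator(signal: dict) -> dict:
    return {**signal, "video": "colour_bars", "audio": "1kHz_tone"}

def audio_embedder(signal: dict) -> dict:
    return {**signal, "audio_embedded": True}

node = ProcessingNode("rack-1-slot-3")
node.load_app(test_signal_generator)          # line-up phase of the event
line_up_output = node.process({"source": "cam-1"})
node.load_app(audio_embedder)                 # on air: same node, new role
on_air_output = node.process({"source": "cam-1", "audio": "commentary"})
```

Because the signal stays on the same IN and OUT positions, the change of function costs no extra routing bandwidth, which is the efficiency argument made above.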
OPINION AND ANALYSIS
IP: no longer a science project
By Steve Reynolds, president playout and networking, Imagine Communications
The shock of the new is a challenge, especially for industries where the core infrastructure and working practices are firmly established. The idea that broadcast and media signals might move away from a bespoke, industry-specific infrastructure to the universal IT fabric of IP over Ethernet is just such a shock. But the logic of moving to IP is completely unavoidable, for two reasons. First, the industry will collapse if we do not move from specialist (and therefore unavoidably expensive) hardware products to software services running on commercial off-the-shelf (COTS) hardware which communicates via IP. Second, the services we are offering are increasingly delivered over IP. While broadcast remains hugely important, streaming online and to mobile devices is over IP. Even broadcast distribution, from studio centres to transmitters and headends, is increasingly using IP. With the advent of next generation broadcasting platforms such as ATSC 3.0, even the delivery to the consumer becomes IP based. For the last couple of IBCs, there have been demonstrations of IP solutions, particularly around interoperability. Imagine was one of the founders of the AIMS project to ensure seamless IP interworking between vendors, which is essential if users are to be reassured about best-of-breed solutions. Today, we have already built out and commissioned a number of IP projects, in all areas of broadcast from live production to master control and playout. The key to achieving success in an IP project is to think beyond simple kit replacement. You will not realise the benefits if you simply draw the architecture you were accustomed to and then replace SDI with IP. This transition is not about solving yesterday's problems with a newer technology. It's about moving in a better way and achieving a better outcome. In an IP master control suite you can, of course, replace the SDI router with an appropriately scaled central Ethernet switch or a scalable architecture like spine-leaf. These can be combined with other imaginative ways of
building the architecture. From our practical experience, we favour destination switching. The master control orchestrator sends instructions to a receiving device to start looking at a specific IP stream. In a multicast environment, many devices can look at the same stream at the same time. This enables implementations of "make before break", where that is viable in the design. It should be noted that broadcast-grade implementations of IP-based master control do require enterprise-class Ethernet switches. While these are an order of magnitude more expensive than the consumer switches you might buy in Circuit City, they are also an order of magnitude less expensive than a comparably-sized SDI router, and available from multiple vendors. Additional switches can be added to the network as capacity is required without major reconfiguration or operational downtime. With regards to the move to IP-based master control, it is very important to consider the timing architecture that will be implemented. Imagine strongly recommends adopting SMPTE ST 2059, which implements PTP – Precision Time Protocol – for broadcast systems, so all the destination devices can be synced. We can use PTP to ensure a clean and silent switch on a frame boundary. It is the implementation of PTP in ST 2059 that makes IP infrastructures robust and broadcast quality. It also means that the switching information is carried over the network, too, and not via a proprietary protocol. In our architecture, router control panels can be built using HTML: all the 'buttons' are just IP objects. This means you can customise the look and feel of the control, and users can customise their workflows. The combination of advanced workflows, flexibility and scalability in the environment, and cost-effectiveness puts IP-based master control a full generation ahead of traditional SDI master control architectures. That is a very convincing argument to begin the migration to IP – if you have not already started.
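Destination switching can be illustrated with a simplified sketch: the receiving device simply joins the multicast group carrying the stream it has been told to look at. The group address and port below are invented for the example, and this is not Imagine's actual control protocol.

```python
import socket
import struct

def switch_to_stream(group_ip: str, port: int) -> socket.socket:
    """Join a multicast group on instruction from the master control orchestrator."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # IGMP join: ask the network to start delivering this group to us. Many devices
    # can join the same group at once, which is what makes "make before break"
    # switching possible.
    mreq = struct.pack("4s4s", socket.inet_aton(group_ip), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# e.g. the orchestrator says "look at 239.1.1.10:5000" (hypothetical values)
receiver = switch_to_stream("239.1.1.10", 5000)
```

In a real plant the join would be co-ordinated with PTP-derived timing so the change lands cleanly on a frame boundary, as described above.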
PLAYING OUT OF THE CLOUD Jenny Priestley talks to Discovery Inc about their move to the cloud
In May Discovery announced plans to shut their London playout hub and move to the cloud. The move is part of a global initiative which has been underway for a while. The European change follows on from the US move which was completed in October 2017, and will include 300 channels by the end of the summer, making Discovery the first broadcaster to implement this innovative move at scale. According to Simon Farnsworth, EVP technology and operations EMEA & APAC at Discovery Inc, there were a number of factors behind the decision to move to the cloud: "First of all the way that the vendor supply chain is going in terms of IP, it felt like we were at a junction in time where it just makes sense to move to the cloud because a lot of the IP-grade equipment that suppliers are now producing enabled us to do that," he explains. "Secondly, it just makes our business so much more flexible in terms of being able to launch channels, shut down channels or launch new pop-up channels at much shorter notice. And it makes our infrastructure strategy so
much more flexible because we're not housing huge data centres in our buildings moving forward, which has always been a bugbear of the broadcast industry. If you had to move building it was an expensive exercise." The third reason behind the decision, says Farnsworth, was to make Discovery much more efficient: "Having all our content in one place or in one environment in the cloud allows us to do so much more with our content so much faster," he says. "For example, we're supplying a lot more content to telcos in Asia and having it all in one place allows us to manipulate that content so much faster and makes our whole supply chain more efficient because we're all uploading it to the cloud in one file format. Given the amount of content that we reversion around the world in different languages, with different subtitles, with different compliance elements to it, having all of that in a single environment just makes us so much more efficient." All of the channels moving to the cloud for playout are non-live. According to Farnsworth the decision to move only those kinds of channels is down to latency issues
within the cloud for live. "If vendors come to us and don't have a cloud story in their product portfolio then the conversation will be short," warns Farnsworth. "We will also be looking to move the Scripps Network channels to the cloud as well, so the non-live channels like Food Network and Travel Channel," he says. "Then we will look at other opportunities but that hasn't been completely ratified yet." Like its US counterpart, the European playout system is based on technology from Evertz, including Mediator-X, Overture-RT LIVE and Render-X. The playout setup also uses Evertz Software Defined Video Networking (SDVN) solutions using IPX Switch Fabrics, Network Address Translators (NAT) and encoding/decoding products. Video is moved to and from the cloud with mezzanine level compression as a transport stream. This meant a major change to the European team's workflow, which Farnsworth says has made them more efficient. "We're also rendering automated continuity graphics out of the Mediator system so the graphics guys will be retrained. This has not been a fast decision or process, it's been in the works for over two years now and we've had a large team working on this for a long period of time," he explains. The move to cloud affects Discovery's IT infrastructure:
"It's part of a much wider Discovery strategy to put as much of our infrastructure in the cloud as we possibly can, making it easier for us to aggregate, manipulate and distribute assets across all platforms," says Farnsworth. The move to the cloud will also mean a move for Discovery's playout facility. The plan is to amalgamate the Nordic playout (which is in a separate building in London's Chiswick Park) with British Eurosport, which is played out of Feltham. "We'll really look at our Northern Europe sports production facilities, so we'll have four production control rooms building to a voiceover capability, a lot of edit capability and then live TX capability where we're inserting ads into a live environment. That site will also act as a backup to the main Eurosport production site which will remain in Paris," explains Farnsworth. "We're really doubling down our sports sites in two locations so they can back each other up." "We'll be moving all of our sports assets media management into the cloud to allow all of our editors, whatever platform they're publishing on anywhere in Europe, to access all that content, which is what we did for the Olympics. So we're going to extend that into business as usual. Again, it's aimed at making our business more robust, more flexible and more efficient," he concludes.
PICTURED ABOVE: Simon Farnsworth, EVP technology and operations EMEA & APAC, Discovery Inc
GETTING SAASY By Olivier Karra, director OTT and IPTV solutions at Harmonic
Video consumption is changing. Today there is an increased demand for video content anytime, anywhere, and on any screen. In Europe, OTT video will be worth an estimated €18 billion by 2022 and account for more than 80 per cent of TV and video retail revenue growth, according to research from Analysys Mason. As video content and service providers look to deliver more content, including live, VoD, catch-up TV and start-over TV, to subscribers on a wide range of devices, being agile has become increasingly important. This article examines the benefits of using Software-as-a-Service (SaaS) for media processing, explaining how operators can maximise agility and operational efficiencies, launching new channels faster and delivering a better quality of experience (QoE) to viewers across all screens. By employing recent innovations in software and cloud-based media processing, such as channel origination, UHD HDR and content-aware encoding (CAE), operators can gain greater flexibility and drive better user experiences, increasing monetisation for OTT services.
MEDIA PROCESSING SAAS WINS BIG OVER TRADITIONAL INFRASTRUCTURE
With today's television viewers demanding more content on an ever-increasing number of devices, operators must be nimble if they want to succeed. Running their media processing operations via SaaS, operators can attain a
level of business agility that simply is not possible with a traditional hardware-based infrastructure. Cloud-native services, which are hosted on the public cloud, enable operators to launch new services in hours as opposed to weeks and months. From a cost standpoint, media processing SaaS dramatically alters the economics of OTT content delivery. SaaS is based on a pay-as-you-grow business model, which means that operators only pay for what they use. There are no hidden charges or surprises. With no upfront investment or physical plant to build out, operators can launch OTT services very quickly. Scalability is critical nowadays when it comes to OTT service delivery. One unique benefit of media processing SaaS is that it provides native auto-scaling and elasticity of the IaaS and the application layer, which means resources can be applied to necessary workflows and scaled up and down, as needed. This type of setup is perfect for channels covering limited-time events or sports tournaments that might only last a day or a few days. Using media processing SaaS, operators can handle every part of the workflow, from content acquisition to playout, graphics, transcoding, encryption and distribution, without needing to leave the cloud ecosystem. SaaS collapses multiple functions into a single, simplified workflow that enables operators to work faster, smarter and more efficiently compared with a hardware-based
approach that utilises multiple systems in a physical plant. Operators can be assured that their OTT content will always stream smoothly and securely with the highest possible video quality, since media processing SaaS is maintained and operated by the technology provider.
TAKING SAAS TO THE NEXT LEVEL: ADVANCED PLAYOUT, UHD HDR AND CAE
While many cloud solutions for media processing exist, some offer distinct advantages over others. Recently, there's been a trend in the industry for vendors to repackage their existing software appliances within a virtual machine running in Microsoft Azure or the AWS Cloud. Virtual machine appliances lack the agility, flexibility and scalability that cloud-native solutions provide to enable smarter, faster and simpler OTT service delivery. Several innovations in media processing SaaS are changing the way operators create and deliver video. One unique capability that has emerged is channel origination. Conventionally, operators have performed playout origination operations on-premises (i.e., playback of video clips, graphics and branding) and sent a live stream into the cloud, where they would then apply different delivery platform profiles for broadcast and OTT. Now that media processing SaaS includes these capabilities, operators can run the entire channel in the cloud. With playout functionality in the cloud, operators can drive further workflow simplification, increase cost savings and speed up operations. Cloud-native solutions are capable of handling a wide range of channel origination tasks, from accessing external storage to retrieve files and normalising them as needed, to adding secondary events like graphics and triggers, and ensuring that the content is delivered in formats appropriate for OTT. In addition, operators can support orchestrated VoD and linear channel playout as well as streamlined acquisition of file-based assets. This enables operators to explore new types of monetisation. SuperSoccer, an Indonesian sports aggregator, is an excellent example of channel origination put to real-world use. Relying on a media processing SaaS, SuperSoccer can stream live football matches, including Serie A Italian league games, over the web to subscribers in Indonesia with unprecedented speed, flexibility and agility. Cloud playout capabilities enable SuperSoccer to fill the gaps between live events with VoD, catch-up TV and start-over TV content. Beyond playout, there have also been innovations in encoding and transcoding technologies that allow operators to address the growing consumer demand for better video quality on every screen. Using a media processing SaaS, operators can deliver UHD HDR at low bit rates, with low latency. Choosing a media processing SaaS that supports the next-generation Common Media Application Format (CMAF) standard is key to providing a similar latency for live OTT and traditional broadcast signals. Relying on the CMAF small chunk packaging format for HLS and DASH delivery of UHD content, operators can keep latency as low as five seconds for OTT delivery, compared to the industry norm of 30 to 35 seconds. Innovation is also strong in the area of bandwidth efficiency. Within the industry, operators are implementing CAE to reduce bandwidth consumption by up to 50 per cent without making any changes to the infrastructure. OTT service providers have increasingly started experimenting with and launching services in the cloud. Compared with traditional hardware infrastructure and virtual machines running in the cloud, media processing SaaS offers far greater agility, flexibility and scalability, driving down operations costs and speeding up time to market for new OTT channels. That's a win-win situation for both service providers and consumers, who are excited to watch a growing amount of high-quality live and on-demand video on every screen.
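A rough back-of-envelope sketch shows why chunked CMAF changes the latency arithmetic. The segment and chunk durations, buffer depths and fixed overheads below are assumed, illustrative values rather than figures from any particular deployment.

```python
# Player latency is dominated by how much media it must buffer before playback.
def glass_to_glass_latency(unit_s: float, buffered_units: int,
                           encode_package_s: float, network_cdn_s: float) -> float:
    return encode_package_s + network_cdn_s + unit_s * buffered_units

# Conventional HLS/DASH: e.g. 6-second segments, with the player buffering ~3 of them.
conventional = glass_to_glass_latency(6.0, 3, encode_package_s=6.0, network_cdn_s=2.0)

# Low-latency CMAF: the same segments delivered in ~1-second chunks, so playback
# can start with only a couple of chunks buffered.
cmaf_chunked = glass_to_glass_latency(1.0, 2, encode_package_s=1.0, network_cdn_s=2.0)

print(f"conventional ~ {conventional:.0f} s, chunked CMAF ~ {cmaf_chunked:.0f} s")
# ~26 s versus ~5 s: the same order as the 30-35 second and five second figures quoted above.
```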
PICTURED ABOVE: Olivier Karra
FEATURE
MORE STORAGE OPTIONS THAN IKEA George Jarrett discusses the cloud with the BBC's Andrew Dunne
Borrowing from Donald Sutherland's character Oddball in Kelly's Heroes, it is surely time to stop hitting us with those negative waves about adopting the cloud and all its alluring microservices. You would think by now that public cloud and off-premise private cloud services would be totally dominant, but whilst the latter category accounts for 30 per cent plus of the market, private on-premise data centres and hybrid infrastructures represent about 46 per cent. Playout (and posting channels for consumers to download) is a no-brainer for the public cloud, but reluctance amongst the production community is identified by a DPP stat showing that only 40 per cent of its membership has trustfully adopted cloud-based storage. There is an expectation that on-premise private data centres might increase in number and significance for two years to enable the full transition to IP and the virtualisation of apps, but they will all have wider cloud projects in train and can put variable capacity onto public clouds as part of those trials. The big driving factors around this are that Opex is fast doing for Capex; cyber security will also become a service to apply as required; broadcasters are now using the same technologies as numerous other industries, and they see AI/machine learning as a commercial boost; and there is an undeniable migration to a total software-based environment for production, playout and master control. At IBC you will see many companies who had to reinvent themselves at great cost to facilitate cloud-native microservices, fighting those companies born much later with a cloud-native culture from the off. The place to go for a total understanding of what is possible is the EBU stand, for news of its three groups on using cloud technologies: they have focused on media production around IP and remote content creation; using cloud for big-data applications in the development of recommendation engines; and metadata information management and AI. Looking for a big user of off-premise private cloud to
add a wider perspective, there are few bigger than BBC Worldwide, now merged with BBC Studios in a single entity that both produces and distributes. Technology manager Andrew Dunne has been heavily involved with the DPP work on IMF for broadcast and online, and has been looking at access services. "The BBC is one of the biggest users of AWS, because the iPlayer resides entirely on it, but I look at anything that could affect us as a distributor. We are partnered with Sony DADC, and when we originally contracted them six years ago we agreed a model whereby it would store our assets on premise as ProRes files in its data centre in Phoenix," says Dunne. BBC Worldwide was happy to see Sony DADC store its mass of assets on its own hardware, but last year, and unprompted by the BBC, Sony decided that its business model would be better suited by moving all of those assets into AWS. "There were 30,000 hours of our content migrated wholesale into AWS. And Sony now services all of our customers using our library from AWS," says Dunne. After six years, has he seen cost increases, for things like egress and ingress? "We agreed a pricing model six years ago and it is maintained to this day. We did not see any increase of costs through Sony making that transition to AWS, but I suspect it is maybe making more money from us because it is probably cheaper to get our files into AWS. That will be something that changes in the coming years," Dunne explains. Metadata is an area that presents problems. "It's complicated. Obviously there is technical metadata around the assets and that is still stored in the systems it was before we moved anything into the cloud," says Dunne. "Then you have the more synopsis-based content metadata, and again that is currently stored in our own systems. "Once we make a move towards everything being in the cloud the ability to have those talking to one another is really important because it means you have a single point of servicing any deals," he adds. "At the moment Sony
DADC will service the content but we would have our metadata from different systems. And then we also have images and other supporting materials that are serviced from yet another system. It would be great to have all those things back together."
SWEAT UNTIL THAT DIES
The next subject was a BBC Studios take on microservices. "DADC will do this on our behalf. We have customers that have certain requirements for the output so we store a ProRes master. Some just take a copy pass of that, which is just a simple delivery that does use an Aspera instance in AWS, so it is a microservice," says Dunne. "Other people have different technical requirements and DADC will have to transcode the ProRes file. That is a transcode microservice; it is automated but there is a (profile) set-up process that happens for every new customer."
Now that Dunne works with the studio side (in what is a wholly-owned BBC subsidiary), does he see production heading for the cloud? "We are not ready yet to move that towards the cloud. There is infrastructure in place that we have invested in over recent years, so we will sweat that until it dies, but we do have a plan for looking at how we might move production elements into the cloud," he says. Another factor is the use of so many third-party facility houses used to create productions, where the BBC is effectively the producer. Dunne explains: "As a facility they will have to look at any advantages for them in moving work to a cloud model, perhaps with editing. But for a lot of producers particularly there is a way of doing things they like, and they are likely to carry on doing that – things like sitting with an editor and agonising. "The high-end stuff will probably carry on in that regard, but what you may find happening in terms of production is an improvement in the early part of the process; you might acquire in the field and then get the rushes uploaded into a cloud platform," he adds. "This introduces rough cutting in the field of assets in the cloud, but I know from DPP research that production and connectivity is still a major issue." Once field connectivity is assured, people will move towards putting stuff into the cloud. Are producers aware of AI and machine learning? "That sort of stuff trickles down to production people. They are very focused on doing what they do and it will be a long while before that stuff moves that way."
STORING THE BAREST ESSENTIAL ESSENCE
Dunne is heavily involved with IMF, in part because it offers huge savings benefits to distribution. "It is a sensible way of moving forward. At the moment we create multiple versions for distribution of any one piece of content. For a big landmark natural history series we will get eight different versions, each one a complete end-to-end file. But when you actually look at them there is an enormous amount of commonality between those variants. For the most part the picture is 95 per cent the same, and between several versions it is 100 per cent the same with the only difference being in the audio," says Dunne. "IMF allows us to break these down into the individual components, so we only store the unique bits of media. And then we have simple CPLs (XML files) that are essentially the recipe for assembling those components into the complete versions," he adds. "It means there is a saving in storage, and if you have quality-controlled the elements you do not have to QC the entire version." This makes a huge difference in terms of shipping smaller volumes of content around and saving time on QC. And it means once it is sitting in a cloud-based platform you can pick and choose those bits very quickly and easily. "I can see a future where instead of supplying a BBC Studios customer with a complete finished file, you would just supply an XML that tells them these are the bits you need and this is the way to assemble them into a finished file. They collect it from the cloud and assemble it (in a sort of IKEA moment). We only store the barest essential essence." Dunne sees commodity hardware and software apps as things we have been moving towards for some while. He says: "Now you improve performance by scaling the number of processes you throw at a piece of work. You look at the old broadcast chain and you would have dedicated hardware that did one thing and cost an enormous amount of money. It required its own connector to link to something else in the chain that also did one thing. If any one of those broke down you were scuppered. "If you look at the modern database model everything is identical, which means if one of them fails it does not matter, and if you suddenly need to move something you just commandeer more space until you run out of capacity
in the whole data centre," he adds. "That is the model we are moving towards. We have that commodity software as a service plus the microservices, and everything runs on standardised hardware." What has his link to the DPP taught Dunne in terms of the remaining doubts about the value of cloud technologies? "You can argue that getting people to trust the cloud provider remains an issue, trusting all of their data on a cloud operated by someone else. We can trust it. But there is always the particular question of convincing the people who hold the purse strings," he says. "One of the things we do need to be careful about is tying ourselves to a specific private or public cloud and its methodology. It would be good to have some standards around interoperability so you could store all your assets here but leverage the service with a different cloud provider. The issue you have then, though, is the cost of moving those assets between clouds," he adds. "We are at the point where we are going to have to start trusting the machines. Potentially you would hold your own library of content with whichever cloud provider you choose, and then pick and choose which microservices you use per task you want to carry out."
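The "recipe" idea Dunne describes can be illustrated with a toy model: store each unique component once, and describe every delivery version as a small playlist that references those components. In real IMF the recipe is an XML Composition Playlist (CPL); the asset names and structure below are invented purely for illustration.

```python
# Unique media components are stored once...
components = {
    "picture_main": {"kind": "video", "duration_s": 3600},
    "audio_en":     {"kind": "audio", "duration_s": 3600},
    "audio_de":     {"kind": "audio", "duration_s": 3600},
    "subs_en":      {"kind": "timed_text", "duration_s": 3600},
}

# ...and each delivery version is just a recipe referencing them (a CPL in real IMF).
versions = {
    "uk_broadcast": ["picture_main", "audio_en", "subs_en"],
    "de_broadcast": ["picture_main", "audio_de"],
}

def assemble(version: str) -> list:
    """'Collect it from the cloud and assemble it': resolve a recipe to its media."""
    return [dict(name=name, **components[name]) for name in versions[version]]

print(assemble("de_broadcast"))
# Two delivery versions, but the shared picture essence is stored (and QC'd) only once.
```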
FEATURE
SOUNDING IT OUT Mark Layton gets the lowdown on the curious world of subtitling
VoCaption, a long-anticipated innovation in automatic live subtitling, will be launched by captioning pioneers Screen Subtitling Systems at IBC2018. With the advent of VoCaption, the quality of speech-to-text translation has been greatly improved and the requirement for a respeaker has been reduced, representing a big step forward for the technology (see the box at the end of this article for some of the technical highlights). Visitors may have caught sight of the first incarnation of this speech-to-text technology at IBC last year, but the finished product is now ready for a full demonstration. "The new product is something that we have been asked about for years," says Screen Subtitling Systems head of marketing and PR Dean Wales. "Everybody was aware that there were speech-to-text engines out there and wanted to know why we couldn't give them live subtitles." The answer, explains Wales, is simple - they could, but until now the product would not have met Screen Subtitling Systems' high standards: "We have been working with a company called Speechmatics in Cambridge; they're making quite an impact on the industry now. We did a lot of work with their engine and we felt it was at a standard where we could release a product that could do this." The Ipswich-based Screen Subtitling Systems is no stranger to breaking new ground in the captioning field. Founded by ex-BBC engineer Laurie Atkin in 1976, the then Screen Electronics pioneered the first-ever electronic subtitling system and provided the first digital character generator to the BBC. "Atkin wanted to increase the provision of access services – so captioning for the hard of hearing. The old way they used to do it was very complex and difficult. I believe it was done in a similar way to the old credits that were on card and filmed," explains Wales. "It was quite a laborious job and there wasn't a lot of provision for captioning back then, so Atkin had this idea that they could make it easier. He left the BBC to set up Screen Electronics and then came up with this electronic
version of subtitling. He went back to the BBC and said: 'I think you could probably do with this' and that's where it first started." For more than 40 years, the company has remained at the forefront of subtitling technology in both software and hardware. It works with major broadcasters to get subtitle files placed on the screen at the right time and in the right language, as well as giving subtitlers the tools they need to prepare the words themselves. In the 1970s and 1980s, the company developed further new subtitling technologies including fully automated transmission using timecode, the first PC-based subtitle preparation system and the first multi-channel, multi-language subtitling systems. It was also the company responsible for creating the industry-standard proprietary .PAC subtitle file. In more recent years, it has gained global recognition for its Polistream transmission and Poliscript preparation products, offering multi-language, multi-channel systems for content localisation. Its technology is now used in the majority of subtitling systems across the world. But despite its global expansion, providing access services for the deaf has remained at the core of Screen Subtitling Systems' identity: "We were created to offer hard of hearing captions and that still sits at the heart of what we do," says Wales. "We do a lot of work with deaf charities. We help with lobbying for more cinema subtitles, for instance. As a PR person, I learned British Sign Language so I could go out and speak to our end users and I'm a trustee on a deaf charity. That's never gone away." The rise of the internet and social media has brought new challenges and opportunities to the industry, revealed Wales, with more and more people watching video content online. "The likes of Google say that you need to get more video content on your website and that you need to use more video content in social media, but I think it is something like 85 per cent of Facebook users don't have their sound switched on when watching videos," Wales says. "That's a
huge exclusion for the deaf community because subtitling is not regulated on social media." Fortunately, the work put into creating VoCaption had the added benefit of also allowing the company to create a new online tool called Sasquatch, which provides subtitling solutions for short videos. "We thought we've got this technology so let's take it a bit further and allow people who are creating short scope content for social media to caption," Wales continues. "So we've used that technology to make an online application where a content creator can put their video up into Sasquatch, buy their credits and the software will use speech-to-text engines and other algorithms to take the speech and turn it into text and get the timing accurately so it's all placed properly." He explains: "It's allowing social media video content creators to subtitle very quickly with handholding from the experts where required. Amateur vloggers, YouTube channel owners, anyone who is putting content online can caption that now and that keeps the likes of YouTube and Google very happy – and the deaf community as well."
THE ABCS OF VOCAPTION
- Automatic switch to speech when integrated with the Polistream platform, ensuring continuity in display of captions during failures.
- Processes audio content from line-input, AES or directly from SDI/HD-SDI/3G-SDI signal.
- Ability to pause audio contribution to the speech engine manually or automatically, avoiding unwanted speech-to-text conversion fees.
- Real-time speech-to-text processing and simultaneous recording to TTML for compliance/repurposing.
- When used with Polistream, enables the widest range of linear broadcast and OTT delivery mechanisms available in the market.
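To make the timing step concrete, here is a minimal sketch of turning word-level timestamps from a speech-to-text engine into subtitle cues. The input format and grouping rules are assumptions for illustration, not the actual output of Speechmatics' engine or Screen's products.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float   # seconds
    end: float

def to_srt(words, max_chars=37, max_gap=1.0) -> str:
    """Group recognised words into cues, breaking on long pauses or full lines."""
    cues, current = [], []
    for w in words:
        if current and (len(" ".join(x.text for x in current + [w])) > max_chars
                        or w.start - current[-1].end > max_gap):
            cues.append(current)
            current = []
        current.append(w)
    if current:
        cues.append(current)

    def ts(t):  # SRT timestamp: HH:MM:SS,mmm
        h, rem = divmod(t, 3600)
        m, s = divmod(rem, 60)
        return f"{int(h):02}:{int(m):02}:{int(s):02},{int(round((s % 1) * 1000)):03}"

    return "\n".join(
        f"{i}\n{ts(c[0].start)} --> {ts(c[-1].end)}\n{' '.join(w.text for w in c)}\n"
        for i, c in enumerate(cues, 1)
    )

print(to_srt([Word("Hello", 0.2, 0.5), Word("world,", 0.6, 1.0), Word("IBC!", 1.4, 1.9)]))
```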
FEATURE
THE DATA OF SPORT By Fiona Green
For the 30+ years I've been in the sports industry, the same topic has come up for discussion every few years (usually via attention-grabbing headlines): that the market for sports TV rights has imploded and the bubble has well and truly burst. But over the last few years that discussion has taken a different path. It's not that the bubble has burst, it's just become a virtual bubble; it's gone digital. The future of the TV rights market has become one of my favourite topics, because there's so much movement, so much speculation, and so much change on a continuous basis. From Amazon, Netflix, Facebook and Twitter buying sports rights, to rights owners themselves delivering OTT services, the options are plentiful and, of course, there are still the traditional linear deals that have been in play since the 1950s. Looking at just a few of the many deals made in 2017, we had several different business models where the rights owner contracted with a third party. For live coverage of the 2017 Championship, the PGA combined linear with digital, issuing contracts to the BBC, Twitter and GiveMeSport for streaming on their Facebook page. After securing a contract with the NFL for the right to live stream ten games in the US, Amazon signed its first deal outside the US, taking the ATP World Tour rights away from Sky and providing their Prime customers with top-flight men's tennis. We also saw the launch of EFL's iFollow service, providing fans inside and outside the UK with a live stream across their three divisions, hosted on their own website, not through a third party, at a cost of £110 per season. In the words of Drew Barrand, marketing director at EFL, "In essence, we're becoming a broadcaster." This mirrors the NFL's GamePass, providing Europeans with multiple options for live or
on-demand matches, and UFC’s (Ultimate Fighting Championship) Fight Pass, which can be personalised by the user to follow a specific fighter. In an August 2017 interview, Chase Carey, Formula 1’s CEO, also hinted at a possible launch of their own OTT product as ‘one of three or even four potential arenas that [F1 is] engaged with’. So, while there are many different models involving third parties like digital and linear
rights being sold to broadcasters, even non-broadcast media companies getting into the broadcast space, there are also rights owners doing it for themselves, with major success. But what's all this got to do with the use of CRM and data? When rights owners move away from linear channels to digital, not only do they give their fans a choice on how, when and for how long they watch, they also collect information about them and their viewing habits. According to BT, not only did their YouTube broadcast of the 2017 Champions League Final increase their viewing figures
by 1.8 million, they received data about every viewer that streamed the game. This gave them invaluable insight that will assist with future marketing planning. Amazon echoes this sentiment in support of their deal with the NFL. In an interview with Yahoo! Finance, Saurabh Sharma, Amazon's director of ad platforms, talked about how the data they'll generate from NFL viewers will support their advertising sales model, providing brands with behavioural data previously unavailable about fans of the sport. Most importantly for me, if this is the future of the TV sports rights industry (sports fans having full control of their viewing behaviour via a digital platform), then the more rights owners know about their fans, and the more direct contact they have with them, the more control they'll have over their broadcasting future. I'm not a sports media specialist (I've never sold a TV deal in my life), but I consume the coverage of this subject in the same way as anyone who wants to stay abreast of this fast-moving industry. When our clients ask me how they should consider their future in this area, my advice is quite clear: even if you, as a rights owner, think you'll stay with the traditional model of selling, or contracting, your TV rights to a broadcaster, whether linear, digital or a combination of both over multiple channels, then, by having a rich and deep database of your fans, you'll have far more control over your broadcast negotiations. This approach works, regardless of the scale, as it is relevant for any audience. The principle is that if you can go OTT as a rights owner to the fan, and the business model justifies it, broadcasters will want to know about it. This is an excerpt from Winning with Data – CRM and Analytics for the Business of Sport written by Fiona Green and published by Routledge.
FEATURE
MEDIA STORAGE IS HORRIBLE (SAYS THE CEO OF A LEADING STORAGE COMPANY!) By Jonathan Morgan, CEO, Object Matrix
It's probably not something that you would expect me to say, but let's face it, media storage has been horrible since the day dot. To see where I am coming from, and to describe just a few of the bad practices we have picked up for media storage, we need only look at how we've stored media to date.
PICTURED ABOVE: Jonathan Morgan
MEDIA ARCHIVES HAVE LITERALLY BURNT UP
Nitrate film. It was the technology that nearly every major film was recorded on from 1880 through to 1952. From filming through to archive the process was simple: film something, send it to editing and you already have your ready-made archive - the original film. Put it on a shelf. The problem? First of all, film on nitrate degrades in quality over time, and only if kept in exactly the right conditions could it last for up to 25 years without quality loss. Secondly, you end up with shelves of film. Once I asked a film institute how much storage they had, expecting them to say "25 Petabytes" or some such similar answer. Instead the answer came back "nine acres". Yes. Nine acres of warehousing full of film. And even if the film is indexed, you can only begin to imagine the trouble of finding the right place in the right film in time to use that shot, let alone answering questions like "give me a selection of shots including a Rolls Royce Silver Phantom". And, thirdly, nitrate could and would auto-ignite. This has caused fires at MGM, Universal and the United States National Archives and Records Administration, to name but a few. These fires have lost originals of Tom and Jerry, countless movies like Cleopatra, through to 12.7 million
feet of newsreel. Cultural heritage lost forever. But that's the past, right? We obviously learned well from the nitrate and acetate film disasters? Roll on (no pun intended!) the 1970s and digicam, DVD, through to USB keys. In fact, digicam-variant cameras were made all the way up to 2016. Much better, right? Well, not really. The fact remained that our cultural heritage, your digital property and therefore your potential to make money, was still being put on to degrading, difficult-to-search-and-index, potentially combustible media. In fact, in some ways it was even worse: the media had a shorter lifespan, and the plethora of standards and formats made it, if anything, harder than before to use the media as an archive medium.

LTO-BASED SOLUTIONS ARE NOT THE ANSWER
So instead the answer became "let's stick it on tape", which became "let's stick it into LTO". After all, LTO has a longer lifespan... right? Well, of course, as we all know, LTO has gone through eight different formats with a change in format every two years or so. And with LTO-8 machines unable to read even LTO-6, this has resulted in the need to migrate archives or see them go to dust once again. In over 100 years of media storage, it seems we have gone nowhere.

DISK-BASED SOLUTIONS ARE NOT THE ANSWER
But hang on... we have disk-based RAID systems, right? This is very true. Disk-based scalable storage is very good indeed at storing archives: storage that is connected to the tools that can search and use that
archive. But is it really the answer? First off, companies have a big problem. In the past they bought a disk-based RAID system for storage, but it quickly became undersized and later became out of date. So the archive on the storage needed to be migrated, again. Secondly, as that company bought the next disk-based solution, probably from another manufacturer, maintenance became a nightmare, and that shiny new (and expensive) platform from a few years ago became valueless within a few years.

NATURE HAS THE ANSWER (1): DNA
So, as the CEO of a storage company looking at these issues, we looked to nature. DNA, to be precise. Just look at how amazing it is:
- 1.2g of DNA can store a Petabyte of data
- It survives corruption
- It survives war, fire and thousands of years
- 99.9 per cent of our DNA is the same; the other 0.1 per cent is what differentiates us
OK, even though some people are literally looking at DNA for storage, I don't want to stretch the analogy too far, but the point here is that DNA: (1) has the principle of multiple copies; and (2) can migrate from one carrier to the next generation. I fundamentally believe that over time the cost of hardware for storage will become insignificant; the real cost will lie in the management of that storage, and in particular, in this context, in the ability to change out the hardware with automatic, hands-off migration of the data to the new hardware.

NATURE HAS THE ANSWER (2): INTELLIGENCE
Artificial intelligence is the most cliched term of 2018, so let me be more precise. If we had intelligence and time, we'd know that this piece of film is needed in the UK and that that production needs to be worked on in a post production house in the USA. We'd realise that having data in just one location puts it at great risk, and that that film should never be changed from its original format. AI
should be making those decisions for us. And, if we had 1,000 eyes and a million hours, we'd tag and analyse all the video in our archive, or find a person or item being searched for. AI can do that. But we can't build intelligence into our storage if it is sat on a shelf on a tape. We need connected digital storage. We need the principles of DNA and intelligence.

THE (PRIVATE/HYBRID/PUBLIC) CLOUD IN OUR HANDS
After over 100 years of losing, burning and corrupting media assets, we are on the cusp of a media archiving revolution. Why? Not because hardware technology changes per se, but because of software and how we use hardware. Critically, we now have the ability to allow media to migrate from hardware platform to hardware platform without the need for heavy manual maintenance (such as migrating LTOs). The same media management software can make critical decisions about how many copies of data to keep and where to keep them. This means that we can have media asset libraries that can truly grow and grow. Secondly, for the first time ever we are beginning to see the ability of AI video analysis to extract information from movies, newsreel, sports coverage, etc. This is unlocking value in media and discovering new uses. Having a tape on a shelf is no longer an option. And then let's not forget the key stages that have allowed us to get here: data is now digital, and assets can now be network connected and online. Most modern solutions that bring these benefits are loosely called "cloud solutions": be that public cloud like Amazon, or private (and hybrid) cloud like Object Matrix. But we still have a long way to go to reach nirvana. Nirvana, for me, means having tens of copies of media, storing all media in a connected manner and having all media in systems that self-evolve over time. Nirvana, for me, means having all media searchable, auto-indexed and auto-managed, so that media houses can focus on creating content and monetising assets rather than managing data. And what we can all do today is get our assets into systems that are at the start of that journey... n
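To make the DNA principles above concrete (multiple copies, spread across locations, migrating automatically onto newer hardware), here is a minimal sketch in Python of the kind of copy-placement decision a media management layer might take. The pool names and policy are hypothetical illustrations, not Object Matrix's actual logic.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pool:
    name: str
    site: str          # geographic location of the storage pool
    generation: int    # hardware generation; higher means newer
    online: bool = True

def place_copies(pools, copies_required=3):
    """Choose target pools for an asset: newest online hardware first,
    spread across as many sites as possible (multiple copies, multiple places)."""
    candidates = sorted((p for p in pools if p.online), key=lambda p: -p.generation)
    chosen, sites = [], set()
    for pool in candidates:                      # prefer one copy per site
        if pool.site not in sites:
            chosen.append(pool)
            sites.add(pool.site)
    for pool in candidates:                      # then top up to the copy count
        if len(chosen) >= copies_required:
            break
        if pool not in chosen:
            chosen.append(pool)
    return chosen[:copies_required]

def migration_plan(current_copies, all_pools, copies_required=3):
    """Return which pools should now hold the asset and which existing copies
    sit on retired or superseded hardware and can be drained automatically."""
    targets = place_copies(all_pools, copies_required)
    drain = [p for p in current_copies if not p.online or p not in targets]
    return targets, drain

pools = [Pool("vault-a", "cardiff", 3), Pool("vault-b", "london", 3),
         Pool("vault-c", "cardiff", 1, online=False)]
print(migration_plan(current_copies=[pools[2]], all_pools=pools))
```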
USING HIS CEREBRUM Jenny Priestley talks to recently appointed Axon Digital Design CEO Michiel Van Duijvendijk about his plans for the company as it continues to grow
In July broadcast infrastructure specialist Axon Digital Design announced the appointment of former NEP Netherlands co-founder/co-owner Michiel Van Duijvendijk as its new CEO. Van Duijvendijk is working closely with Axon's CTO Peter Schut and another former NEP Netherlands exec, Karel van der Flier, who became Axon's chief commercial officer in May. Van Duijvendijk says there were two reasons behind his decision to move to Axon: "Having been one of the founders of NEP Netherlands I know the broadcast industry quite well from that side. Axon had been one of NEP's suppliers and so I've always had a lot of respect for what they were doing in our market. "The other reason is that I've been representing one of the investors in Axon since 2010. So there were two ways that I was attached to the company. It became clear to me during my interactions with Axon that it was moving in a new direction. The transition to IP brings a lot of challenges and I felt I could be of help by giving a new vibe to Axon as I lead the company into the future." With 20+ years' experience within the broadcast industry, Van Duijvendijk admits he's not a technical guy, but stresses that he has had plenty of experience of working
with technical people during his time at NEP. "I think the ideal balance is to have commercial knowledge, technical knowledge, and a balance between the two," he explains. "My role within Axon is to listen to everybody, to listen to customers, listen to the Axon team and to make the right decisions for the company at a challenging time for the broadcast industry. Axon is still small enough to be flexible. We can focus on research and development, but we have enough skills as a whole to be able to compete with the big companies in the industry. "At IBC we will be debuting Neuron, the world's first Network Attached Processor (NAP) - a new product that we've been working on for quite some time to address the needs of complex IP and hybrid environments. The move to IP is causing many in the broadcast industry sleepless nights, particularly the challenge of integrating and controlling increasingly complex technology layers whilst providing guaranteed bandwidth performance for new formats such as UHD. A NAP resolves these issues, and with our new Neuron NAP, Axon is setting the standard." Neuron packs a powerful punch as it supports 200 Gb/s and 64 channels or 16 UHD channels in a single rack unit. For those heading to a pure native IP infrastructure,
Neuron acts as 'modular glue' in a centralised and virtual environment as it can process all the tasks needed in a live and baseband video domain. It also enables multiple channels in a single device and eliminates physical cascading of products. Additionally, Neuron aims to bridge the gap for those taking a hybrid path to IP by providing FPGA-based processing power with connection to legacy SDI I/O, and by managing all audio and video processing tasks with ultra-high bandwidth. "Neuron is another example of Axon introducing a really disruptive product to the market," Van Duijvendijk says. "It has already been enthusiastically welcomed by our clients, many of whom are pioneers in IP production, and they have been blown away by its performance." Another important focus for Axon is control and monitoring within the IP environment. "We certainly think our Cerebrum control and monitoring platform is the ideal solution – and one that is the perfect future-proofed product," says Van Duijvendijk. Talking of Cerebrum, the platform has definitely been one of the most talked about products of the summer. Dega Broadcast Systems incorporated it into BBC Sport's IBC facilities in Moscow at the FIFA World Cup where it was used to control and monitor GVG routing, processing,
MADI audio routing and tally management. It was also at the heart of Timeline’s OB production for BBC Sports studios in Red Square. NEP UK also employed Cerebrum at this year’s Wimbledon Championships to control and monitor its fully-IP workflow, which was used to deliver technical host broadcast services. The complex IP infrastructure, which simultaneously offered dual level UHD, HD-SDI, SDR UHD and HDR UHD, required a control system to seamlessly integrate and manage technologies from GVG, Arista, EVS, Phabrix, Calrec and Evertz in a unified IP workflow. Another recent success is ITV, who selected Cerebrum to control and manage its regional news operations. At six sites across the UK, Cerebrum provides an overarching control hub for routing, signal processing, tally control with tight integration of IFB, audio and communications. As Axon continues to establish itself within Europe, where other customers include Viacom International Media Networks, France 5 and Sky, what are the company’s plans for the rest of the world? “We are relatively big in Europe. We’ve been doing really well in Asia over the last two years, particularly amongst OB providers, and are starting to see some momentum in the US - that’s really exciting because of course that’s a huge market where we see a lot of growth.” n
PICTURED ABOVE: Michiel Van Duijvendijk
DON’T SIMPLY ‘LIFT AND SHIFT’ WHEN MOVING TO THE CLOUD By Allan Lamkin, CTO, Deluxe
There's no question that the way entertainment is made and consumed has undergone a rapid and fundamental shift. Over the past decade, the supply chain of entertainment – the journey of content from the camera lens all the way down to a viewer's screen – has experienced significant transformations in technology. Today, content can be seen anywhere, anytime and on any device, which has led to consumers becoming accustomed to the instantaneous delivery of their favourite TV shows and films. With media and entertainment companies facing more demanding consumers, fragmented audiences and greater time pressures to produce and distribute more content more quickly, the cloud has evolved to become mission-critical in ensuring studios, broadcasters and OTTs meet consumer demand, and at a faster pace than ever before. The cloud brings flexibility to scale operations on-demand, offers nearly unlimited capacity, reduces costs when managed properly, and enables easier collaboration with teams across the globe. In the case of Deluxe, we used the cloud to reimagine the content supply chain and unify the disparate stages of content development. Our new cloud-based platform Deluxe One simplifies how our clients' content is created and delivered to audiences. With a cloud-based architecture, content owners now have visibility into where their content is in the media lifecycle, insight into metrics and performance, and elasticity to scale their operations. Getting more content to more people is increasingly complex, but with a cloud-based architecture, we've been able to shorten the time between camera lens and screen. When companies begin to migrate their services over to the cloud, there are, of course, a wide variety of factors to consider, but one of the essential basics to keep in mind to be certain you're getting the most out of the transition is
to make sure you don't simply 'lift and shift' your current infrastructure. Instead, take advantage of the transition as the perfect opportunity to redesign your apps to be truly cloud-friendly – not just a copy of what you had before. Here are some main tips to consider to ensure you're on the right track when migrating to the cloud.

FIRST, KNOW ALL ABOUT YOUR DATA AND MIGRATE IT FIRST
You really need to know all there is to know about your data before migrating it to the cloud. Once you've classified your data, migrating it first up to the cloud makes it much easier to move all your applications. This is also the perfect time to take inventory of your data, know where your most important data is, and which data is the most sensitive. Much like taking stock of what you own before moving house and putting it into storage, you'll make the transition a lot easier if you only take the data you need and take the time to sort through everything. When you know more about your data you can then put it into a proper data structure and normalise it, that is, reduce data redundancy and improve data integrity to ensure you're unifying all your data to be better used in the cloud. Further breaking the data down by how often or how quickly it needs to be accessed over time can help determine where to store it in the cloud. At Deluxe we store hundreds of Petabytes of content across various tiers of storage. It was important to scrub files, ensure anything needed going forward was properly stored and inventoried, and avoid migrating unnecessary data. This again illustrates why a lift and shift model does not work. To continue to control the data and media files ingested into our platform, our data science team has written smart content ingest algorithms that
identify and characterise content to eliminate redundancy and ensure all ingested data is properly catalogued. For additional efficiency, we have added data tiering rules that automatically age and move data to cheaper tiered storage.

SELECT THE RIGHT DATABASE INFRASTRUCTURE
Cloud providers offer many options that simplify the maintenance of the cloud database. Apart from handling installation, maintenance, and scaling of IT infrastructures, constant upgrades by cloud service providers make it easier for enterprises to cut down operational costs without compromising on security, automated backups, replication, and quality. Agility, scalability, and cost savings are the three main factors to consider when choosing your infrastructure. As an example, AWS offers many variations from Elasticsearch and RDS to Amazon Aurora, Google Cloud offers Cloud SQL through to Cloud Spanner for higher-end needs, and there is also Microsoft Azure SQL Database and DocumentDB. With Deluxe One we took advantage of many of the AWS database offerings, including RDS.

CONTAINERISE YOUR APPLICATIONS
Container technology solves the problem of getting software to run reliably when moved from one computing environment to another. In this case, this means moving applications from a physical machine in a data centre to a virtual machine in the cloud. A container consists of an entire runtime environment – an application, plus all the code that is needed to run it – bundled into one place. By containerising your applications and all of their dependencies, you can run your applications anywhere, paving the way for an easy transition to a cloud-based infrastructure. Furthermore, containers enable decoupled deployment processes and autoscaling, which is important in utilising the full capabilities of the cloud. With Deluxe One we utilise containers that are configured to deploy a full environment in an automated fashion using Terraform.

KEEP YOUR COMPLIANCE AND LEGAL OBLIGATIONS TOP OF MIND
When building out your cloud infrastructure, it's important to take into account what data is being collected, how it needs to be separated or protected, and what policies are in place that currently govern the access of your data. Companies today have a responsibility to maintain best practices while conforming to data compliance requirements set by regulators and industry organisations, such as GDPR, which came into effect in May, personally identifiable information (PII) rules, or HIPAA. For entertainment companies like Deluxe, there are compliance and legal obligations from the Content Delivery & Security Association (CDSA),
the Motion Picture Association of America (MPAA) and the recently established Trusted Partner Network (TPN). These best practices serve as a way for entertainment companies to assess the risks and security of both their data and infrastructure. When migrating to the cloud, you should ensure your cloud provider's compliance strategies are up to date and have all the necessary certifications and functionality to match your requirements. As an example, with Deluxe One we have added access control rights and auditing to all of our customers' files, and content encryption at rest, to ensure the proper level of security of the data and meet our compliance requirements.
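The age-based tiering rules described above are not spelled out in the article, but on AWS (which Deluxe cites) the same idea can be expressed as an S3 lifecycle configuration. The sketch below uses boto3; the bucket name, prefix and day thresholds are placeholder assumptions, not Deluxe's actual policy.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and prefix, for illustration only.
lifecycle = {
    "Rules": [
        {
            "ID": "age-out-mezzanine-files",
            "Filter": {"Prefix": "mezzanine/"},
            "Status": "Enabled",
            "Transitions": [
                # Rarely touched after a month: move to infrequent access.
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                # After a year, push to deep archive for long-term retention.
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}

s3.put_bucket_lifecycle_configuration(
    Bucket="example-content-archive",
    LifecycleConfiguration=lifecycle,
)
```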
PICTURED ABOVE: Allan Lamkin
ORGANISE YOUR BEST TEAM OF PEOPLE AT THE START
While it's important to have an experienced DevOps team that has architected cloud applications from the ground up, at the onset it's important to establish a strong, unified team – from the top of the organisation all the way down – that knows and understands the inherent strengths of a cloud-based platform. It is especially critical to have a strong DevOps and platform team in place that specialises in continuous and automated platform deployment and integration. Think about the cloud's ability to auto-scale usage of the platform when traffic is high, then scale back when use drops off - optimising your platform with that approach will save you time and money and ultimately keep your end users happy. In essence, migrating your system to the cloud is a complete transformation in how your business will operate moving forward. Having the most respected people at the top of your organisation as the evangelists for the cloud will lead to easier and greater adoption across the company.

HAVE A 'YOU BUILD IT, YOU OWN IT' MENTALITY
Today, a lot of companies are moving away from dependence on quality assurance teams and production support engineers to ensure a product is ready for production and beyond. As a developer, having a 'you build it, you own it' mentality means making sure you see your applications through to production and, once they are live, making sure they are continuously operating in a stable manner. To effectively leverage the benefits the cloud affords, it's important not to cut corners when making the move. Before you start, organise your data early on in the process, identify the right database that suits your operational needs, containerise your apps, keep data security and governance in mind, and have a top-notch team in place that knows the cloud and can configure your apps to meet consumer demand instantaneously. n
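The 'scale out when traffic is high, scale back when it drops' behaviour mentioned above is usually configured in the cloud provider's autoscaling service, but the underlying decision is simple enough to sketch. A minimal, provider-agnostic illustration in Python; the request rates and per-replica target are assumed figures.

```python
import math

def desired_replicas(requests_per_sec, target_per_replica=200,
                     min_replicas=2, max_replicas=50):
    """Target-tracking style decision: run just enough app replicas that each
    one serves roughly `target_per_replica` requests per second."""
    needed = math.ceil(requests_per_sec / target_per_replica)
    return max(min_replicas, min(max_replicas, needed))

# A quiet period, a live-event spike, and the wind-down afterwards.
for load in (150, 4000, 12000, 800):
    print(f"{load:>6} req/s -> {desired_replicas(load):>2} replicas")
```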
DOES BROADCAST HAVE ITS HEAD IN THE CLOUDS? By Chris Wood, CTO, Spicy Mango
Cloud. It's the word on everyone's lips, regardless of the industry they operate in. The move towards cloud has accelerated significantly over the last few years, so it's no surprise that the world of broadcast is quickly following suit. Cloud promises flexibility, scalability and a relatively low-cost infrastructure, and many organisations have already made the move from on-premise or are at least considering it. The fact is, over the next decade, a large majority of services will be cloud based. In 2017, cloud accounted for around 20 per cent of capacity in traditional enterprise IT, and that trend has continued to grow. Broadcast is unlikely to behave differently from other markets, and many service providers are looking to move their existing on-premise infrastructure to a cloud, be that public, private or hybrid. Eventually, cloud will become the first - and possibly only - option for new organisations looking for a scalable infrastructure. But is moving to the cloud actually simple? Which cloud is best? Does cloud always mean public cloud? And is it imperative that broadcasters make the move?

WHY CLOUD?
It's unsurprising that moving to the cloud is attractive to those in broadcast. Speed, scalability, agility and access to analytics are just a handful of reasons why services will choose to move. If you are a new company, it enables faster speed to market and speaks to a culture of constant innovation and experimentation. The 'fail fast' mentality is much easier to keep up with in the cloud as there is more flexibility. The scalability aspect of cloud makes spikes in workload easier to handle, particularly during live sporting events, and in turn can manage surges in popularity of new services. Data is also becoming increasingly important, and cloud gives broadcasters the ability to collect, store and conduct analytics on vast amounts of data. This data can
then drive personalised features and service developments, and provide insights to create a better quality of experience - all vital elements of providing a successful service.

WHICH CLOUD?
Cloud doesn't always mean public cloud; hybrid and private clouds are still perfectly viable solutions, they just provide different advantages depending on the needs. Choosing the right cloud is a key architectural decision when migrating, and this will be dependent on requirements guided by availability, cost, scalability and complexity. The main advantage of public cloud is that the servers are already there at the disposal of the service provider. If they need another ten servers, they can easily be added to the infrastructure. This makes it a cost-effective solution that is reliable and can deliver on the quality of experience, with services like AWS promising near-100 per cent uptime. Private cloud offers much the same benefits as a public cloud, but the infrastructure is dedicated to one organisation. In most cases, the infrastructure will be deployed in a customer's own premises or data centre. Private clouds may be risky for fast-growing companies, however, particularly if they aren't sized correctly. For many organisations, the limitations of using just public or private cloud can be detrimental, so often turning to a hybrid cloud is the best approach as it combines the benefits of both. They may want to retain control over certain components such as subscriber data or financial records, and therefore wish to keep these within a private cloud, but still want the flexibility of being able to scale up their operations should they grow or need additional capacity.

TO CLOUD OR NOT TO CLOUD?
The primary issue with cloud is that everyone thinks they need to do it, when in reality, things are probably working
PICTURED ABOVE: Chris Wood, CTO, Spicy Mango
fine with their existing on-premise or hybrid solution. There are some high-profile use cases for cloud adoption, such as Netflix, which shifted its reliance on traditional IT stacks in owned and operated data centres to public cloud. In the UK, ITV, Channel 4 and the BBC hold some of their workloads in the cloud. But the main question that service providers need to ask themselves is why they need a cloud-based solution; do they want to migrate because it will actually benefit their business, or because they feel left behind? For legacy operators, migration to cloud can take some time and investment, especially when it comes to adapting workflows. If service providers are being honest with themselves, on-premise virtualisation remains an extremely viable solution for a number of elements such as transcoding, so migrating to the cloud can become a gradual process as opposed to something they need to prioritise straight away.
Currently, the biggest barrier to cloud is the cost of extracting content. Computing power, storage, applications and uploading content may be relatively low cost, but the cost to download and distribute content for consumption can be huge. For large enterprises like Netflix, it might be easy to absorb the costs, but for smaller services with an on-premise solution, the cost would be high. For newcomers, cloud deployment in its entirety makes the most sense because their whole workflow can be designed around this approach to maximise the benefits that the cloud brings. As whole architectures need to be reviewed and planned for legacy operators, the leap to cloud is not something to be taken lightly. CTOs that don't have their own answers can't just bluff their way forward; they must seek advice and consultation from specialists to ensure they are making the right decision, and that the move to cloud doesn't fall flat. n
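To illustrate why content egress, rather than storage, tends to dominate the bill for a streaming service, here is a rough worked example in Python. The per-gigabyte prices and volumes are placeholder assumptions used only to show the shape of the calculation; real rates vary by provider, region and negotiated discounts.

```python
def monthly_cost_usd(library_tb, egress_tb,
                     storage_per_gb=0.023,   # assumed object-storage price, USD per GB-month
                     egress_per_gb=0.08):    # assumed internet egress price, USD per GB
    """Compare the two big line items for an OTT service: holding the content
    library in cloud storage versus delivering it out to viewers."""
    storage = library_tb * 1024 * storage_per_gb
    egress = egress_tb * 1024 * egress_per_gb
    return storage, egress

storage, egress = monthly_cost_usd(library_tb=500, egress_tb=2000)
print(f"storage ~ ${storage:,.0f}/month, egress ~ ${egress:,.0f}/month")
```

Even under kinder assumptions, the asymmetry is the point: delivery volume, not library size, is what scales with the audience.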
IBC: TRANSFORMING THEORY TO APPLICATION By Michael Crimp, CEO, IBC
No sooner had the confetti been swept up from IBC's 50th anniversary celebrations last year than the team was back planning our 51st show. Not such an exciting number, admittedly, but certainly a whole lot has happened in our industry in a year, and IBC2018 promises to delve into all the new technological developments and the business implications they present. This is a key factor in the ongoing relevance and development of IBC.

It may be an odd thing to say, but in a technological industry it is not only the technologies themselves that are exciting. Certainly there are big things happening; I think we will see big advances in areas like artificial intelligence, 5G and blockchain, and you do not need me to talk about the shift to IP connectivity and software-defined topologies; about the convergence of broadcast, IT and telecoms; and about the search for new formats, whether that is Ultra HD or virtual reality. For me, the real excitement comes when these raw technologies are put into action. IBC puts these ideas in front of people who can imagine the possibilities and create the applications that transform our creativity and our business models.

How are the businesses that use these technologies evolving? How will broadcasters, telcos and streaming companies co-exist? Will OTT providers be the new broadcasters, or will producers sell direct to consumers? Advertising still seems a reliable and lucrative way to fund content production and delivery; will programmatic advertising planning and dynamic ad insertion transform the cost/revenue model? Or will new monetisation methods – maybe blockchain-managed micropayments from consumer to producer – transform the creative industry? These issues may not be solved at IBC2018, but they will be much talked about.

The last stage of development in IBC was a move away from a purely engineering-based event to one which attracts debate from the creative, operational and commercial sides of media businesses, because amazing technology is meaningless if it doesn't gel within a workflow or it costs more than it earns. In order to keep at the forefront of these issues, we are constantly refreshing and tweaking the IBC format to meet the needs of today's much wider broadcast industry. We are the first to admit that some innovations are more successful than others, but we are also agile as a business and learn from our efforts. The end result is an event that continues to grow.

A great example is the Leaders' Summit. The move from an engineering-led event to a creative and commercial one meant that we had to engage more at CEO level. By helping C-level executives understand what the technology allowed them to do, they would be better equipped to develop strategic pathways for their businesses, which would maximise return. We developed this invitation-only, behind-closed-doors day to ensure that the people in the room could freely exchange ideas and opinions with their peers. It remains a cornerstone of the IBC Conference, and has led to additional C-level sessions being developed, such as the Cyber Security
Forum and the Telco and Media Innovation Forum. Again, these are hosted, high level events that bring new people and new businesses into the IBC community, and provide advocacy and context for the rapidly developing world of media. These events underline my strongly held belief that people still have a real need to meet and talk face to face. In our industry we are dealing in extremely complex systems that use technology to drive creativity and commerce. It is not the sort of industry where a quick Google search will find an off-the-shelf product, so getting together with multiple potential partners, in the right environment to do business, is really valuable. The broadening of the industry into adjacent markets also means that there are wildly successful companies you’ve never heard of in different sectors, or small start-ups with no advertising budget, that may have just the workflow or product or idea that will make a big difference to your business. IBC is a great place for those collaborations to be incubated. This diversity is reflected in the conference programme. While the IBC Conference was founded on technical papers - and they remain absolutely central to the programme - this year they are woven more closely into broader sessions, so that the underlying technology is handled alongside the operational and business implications, putting all sides of the story in the same place. We’ll see new ideas from converging markets and emerging technologies from nascent companies as well as leaders from broadcast organisations that are successfully evolving with the new technology available, showing how the underlying technology is handled alongside the operational and business implications. With over 400 expert speakers across six dedicated streams, there will be much to learn and discuss. As well as the conference, we have 15 exhibition halls full of all the players in this rapidly changing industry. We are also seeing exhibitors evolve in the way they tell their stories; it’s no longer about black boxes with blinking lights, and they need to sell less tangible (though no less important) products and services as software or in the cloud – so they are devising clear, compelling and exciting ways to present them. There’s lots to look forward to at IBC2018. On a practical level, I am very pleased to confirm that the new North-South metro line is finally open! Now visitors will be able to get from central Amsterdam to the heart of the RAI in just a few minutes. It will be a great relief for those who dread the crowds on the number 4 tram, and it will slash journey times. We are expecting more than 57,000 people from around the world this year, and I encourage you all to immerse yourselves in the whole experience. Drop into conference sessions that interest you, or that you know nothing about. Go to the Awards Ceremony to see what the really innovative people are achieving today. Visit the IBC Future Zone, our regular space given over to the hit products of five years in the future. But above all, talk to people. Share your knowledge and experiences, and seize the opportunity to be a part of the big global debate. n
SHOWCASING IP AT IBC
The IP Showcase at IBC2018 will be hosted by leading technical and standards organisations within the broadcast industry. It will see vendors and broadcasters provide instructional and case-study presentations and an array of IP-based product demonstrations highlighting the benefits of, and momentum behind, the broadcast industry's move to standards-based IP infrastructure for real-time professional media applications. TVBEurope hears from hosts SMPTE, AIMS and Video Services Forum (VSF) about what visitors can expect.

PICTURED ABOVE L-R: Matthew Goldman, Wes Simpson, Mike Cronk

Why is the IP Showcase important?
Matthew Goldman, SMPTE president: The IP Showcase is foremost a way to demonstrate to the professional media industry that the new “All-IP” infrastructure, interfaces, and protocols are real, practical to implement, and will enable the industry to leverage off of information technology (IT) advancements to simplify our workflows and scale our business. By All-IP we mean broadcast, production and other professional media applications using essence-based workflows over IP infrastructures. The foundation documents for this are SMPTE ST 2110 Professional Media Over Managed IP Networks standards suite, SMPTE ST 2059 Broadcast Profile for IEEE 1588 Precision Time Protocol standards suite, and AMWA Network Media Open Specifications (NMOS). These documents support the rigorous real-time and quality-of-service requirements that our industry requires. Mike Cronk, AIMS chairman of the board: At IBC this year, the IP Showcase is the single best
destination for gaining practical knowledge of how IP can be deployed to benefit broadcast operations. Also, the fact that six leading industry organisations (AES, AMWA, AIMS, EBU, SMPTE and VSF) have come together to sponsor it and that over 60+ vendors are represented speaks to the momentum behind a standards and open specification-based approach to IP. Nowhere else at IBC will visitors be able to see a more comprehensive treatment of what IP can and will mean to our industry. Wes Simpson, marketing committee chair for the Video Services Forum Events like the IP Showcase are extremely important in the development and adoption of new standards like ST 2110 for IP media production, and provide an unmatched educational opportunity. These events are important for development because they enable multiple different technology suppliers to create products and solutions and have them tested in conjunction with other products. These events promote adoption by showing solutions to real-life problems, which can be accomplished by using the new standards. And, these events are educational
because they host dozens of informative speakers at the Presentation Theatre.

So what can visitors expect?

WS: Likely the highlight of the IP Showcase is the Presentation Theatre, where dozens of industry experts will be speaking during all five days of IBC. Case studies, panel discussions and tutorials aimed at a variety of different experience levels will be available for free to everyone. The Video Services Forum is responsible for creating the programme and selecting the speakers, helping to ensure a consistent, high-quality level of presentations that are free of sales promotions. More than 50 different technology providers are supplying working products for the IP Showcase. Multiple clusters of equipment and software will provide active demonstrations of real-time video, audio and data content flowing over a massive IP network. Some of the featured demonstrations include:
- ST 2110 supporting multiple image formats simultaneously on the same network, including high and ultra-high definition video, multiple audio formats, and related support signals all flowing over a common IP backbone using both copper and fiber-optic connections
- ST 2022-7, which allows a signal to be sent over redundant paths and then combined at the receiver to form a pristine signal that is highly resilient to a wide variety of network faults
- AES67 and ST 2110-30 audio interoperability, which will show how these two standards are able to work in concert, allowing audio professionals to capture, produce, record, measure and play across an enormous range of equipment choices
- ST 2110-21 defines a set of rules for how video sources perform in relation to their outbound packet flows, eliminating packet bursts and dropouts. This demo shows how these rules permit COTS Ethernet switches to transport uncompressed video signals without overflowing internal buffers or requiring excessive over-provisioning
- AMWA NMOS IS-04 and IS-05 provide automatic device registration and connection management, thereby eliminating the need to manually configure IP signal source and destination addresses into every device on a large ST 2110 system
- Live ST 2110 video and audio recording will be demonstrated using cameras to capture expert sessions while they are being made at the IP Showcase Presentation Theatre
- Cable count and weight reduction will be demonstrated by comparing the amount of bulky, heavy coaxial cable required to support multiple cameras to a single, high-capacity fiber optic cable that can support the same number of cameras using uncompressed ST 2110 signals

Why should IBC visitors get involved?

MG: Our industry is undergoing changes. With the advent of new "internet" players competing with traditional content providers, we need to leverage off of the agility and flexibility that IT technology brings with it, while maintaining the strict real-time, high-quality, high-availability requirements of broadcast, production and other areas of professional media. Migrating to All-IP infrastructures and workflows is the realisation of this. The success of this transition depends on everyone coming
together with a common methodology, based on industry standards and specifications. The proverbial "a high tide floats all boats" results when people get involved and help create the change.

MC: While IP technology may be new to some, the benefits of IP are clear. It scales better; it enables the design of a plant or OB van with format flexibility, regardless of whether the next format is 1080p50, 2160p50, or something else; and it opens up opportunities for resource sharing and better efficiency. It's important to get involved to be prepared to take advantage of these benefits. A visit to the IP Showcase is a great first step on the road to understanding the importance of IP and what you need to know to successfully deploy IP.
WS: IBC attendees who are interested in learning about and actually seeing new technologies in action should come to the IP Showcase because it offers one, convenient location to observe multiple vendors operating in harmony with each other using well-crafted industry standards. IBC exhibitors who did not participate in the showcase should also be interested in paying a visit, to see how other companies are creating products in accordance with the ST 2110 standards. The wide range of different implementations, form factors, and design approaches should serve as an inspiration to system, software and hardware designers alike.

What impact has ST 2110 had over the past 12 months?

MG: We are currently in what is commonly known as the "early adopter" phase. There are several major implementations of SMPTE ST 2110 and ST 2059 around the world, and many others in various forms of trials and smaller implementations. These early adopters have shown the practicality of the technology and how it has improved the workflows and the economics versus existing industry-specific implementations.
MC: With the publishing of SMPTE ST 2110 a little less than 12 months ago, it has come forth as the clear transport protocol for real-time broadcast signals within facilities and mobile units. Prior to SMPTE ST 2110, there were a few more proprietary approaches, and even SMPTE standards such as SMPTE ST 2022-6. However, the SMPTE ST 2110 suite of standards has effectively codified into a set of standards what the industry was looking for to begin a migration away from SDI and to lay a foundation to a more IT centric future. In practical terms, over the last 12 months, every RFP issued for an IP system that I have seen has specified SMPTE ST 2110. We've also already seen a number of system deployments based on SMPTE ST 2110, and we are certain that this number will increase.

What do you expect the next 12 months to hold?
MG: With more and more vendors offering mature All-IP solutions, and as we move from the early adopter to "fast follower" phase, we will see a lot more implementations and more complex solutions happening. The next step in standardisation and specification is refinement of the control plane documents so that we can realise fully the benefits of software-defined media processing and the parallel move of software virtualisation using containers and microservices.

MC: We will certainly see more deployments and more momentum as the base set of SMPTE ST 2110 standards has been published. Work is not stopping with these though. SMPTE is working on a standard to add to the ST 2110 family that will specify how to incorporate compressed video flows, and another standard that will specify how to carry AES3 data, enabling transport of audio formats that AES3 can carry, such as Dolby E. Both of these will increase the scope and applicability of ST 2110 beyond just uncompressed video and uncompressed audio. Additionally, AMWA is working on a number of exciting and complementary protocols, building on their work on discovery, registration and connection management, that will improve system deployment. n
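For readers who want a feel for the AMWA NMOS registration and discovery work mentioned above: IS-04 exposes a simple HTTP Query API that control software can poll instead of having addresses typed into every device. The sketch below is an illustrative Python client only; the registry hostname is a placeholder, and the API version path (v1.2 here) depends on what a given registry actually implements.

```python
import requests

# Placeholder address of an IS-04 registry's Query API.
QUERY_API = "http://registry.example.net/x-nmos/query/v1.2"

def list_senders():
    """Ask the registry which senders (stream sources) it knows about, so a
    control system can offer them for routing without manual configuration."""
    resp = requests.get(f"{QUERY_API}/senders", timeout=5)
    resp.raise_for_status()
    for sender in resp.json():
        print(sender["id"], sender.get("label", ""), sender.get("transport", ""))

if __name__ == "__main__":
    list_senders()
```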
BROADCAST 3.0: IP MEETS CONTENT By Andreas Hilmer, director of marketing and communications, Lawo
The broadcast industry is currently facing its biggest challenge — and one of its biggest opportunities. Without a doubt, the migration to IP technology, which is already well under way, will change our industry. The change will be more profound than one usually expects from a technological leap. The notion of Broadcast 3.0, originally coined by Lawo, encompasses all aspects and trends triggered by the move to IP, like increased flexibility and unlimited scalability, to name but two. The industry's transformation will go far beyond the pure deployment of IP technology, resulting in what we call "smart production facilities" which are able to keep pace with the rapidly growing demand for content. Broadcast 3.0 specifically describes the broadcast and broader content creation industry's shift to production and content delivery utilising Internet Protocols (IP) and commercial off-the-shelf (COTS) hardware to transport and distribute video, audio and metadata, as well as ancillary and control data, as elementary streams. The term refers to the broadcast industry's third evolutionary step, after analogue baseband signal transmission over dedicated point-to-point hardware (Broadcast 1.0), followed by the industry's transition to digital audio and video baseband signal transport over dedicated or specialised hardware and, in some cases, the leveraging of early-stage telecom- or computer-based tools for
transmission, routing and storage (Broadcast 2.0). IP technologies as such are not really new; other industries have relied on them for years. Today, however, these technologies — including the principles of data centre management and cloud computing, coupled with the availability of high-speed dedicated fibre networks — are being harnessed for content production applications and applied to core plant infrastructure as well as to remote or distributed production workflows. Together, the current trend toward IP-based production and content delivery and the operational ethic of maximising flexibility and versatility open up new possibilities for workflows and production principles. In an increasingly competitive environment, time and flexibility have become even more critical. One way of saving time is by finding faster ways of getting source signals to the required destinations. Physical patch changes take time and are error-prone even when operators are not under stress. In an OB truck, the process of changing cable routes is even more intricate because space is limited. In addition to efficiency, flexibility is a key consideration. On an IP network, all streams can be automatically available wherever they are needed. Plus, adding more source or destination devices is just a matter of connecting them to a switch. Add to that the possibility of using a
given room, studio and even hardware device for multiple purposes, and the benefit of recalling complex MCR setups with a convenient mouse click or by pressing a button becomes obvious. As control data is part of what can be transported over an IP network, forward-thinking broadcasters have started to deploy IP-based remote production strategies with a small on-site crew — essentially camera operators and commentators — and a production crew working several hundred miles away but with control over the camera and audio settings, as well as tally. In light of these advances and benefits, Broadcast 3.0 creates what could be labelled "smart production facilities." Be it a studio, a truck or a hybrid installation with distributed resources, all areas reap substantial benefits from the IP approach. Provided, that is, the following prerequisites are met:

1. Deployment of IP networking technology for the transport of video, audio, and control and auxiliary data. In every instance, this deployment involves switches. Hybrid setups also require units that translate incoming audio or video signals into data streams, and vice versa.

2. Function- and format-agnostic software-defined processing hardware that applies data centre and cloud computing principles to a broadcast facility's core infrastructure. The main aim here is to achieve maximum versatility. Using a modular frame to accommodate modules whose functions are determined by the software uploaded to them, users can adapt available functionality to the requirements of different projects.

3. Network- and resource-aware orchestration and control systems that can flexibly map available (hardware and virtual) infrastructure resources to production workflows.

4. Intelligence within the system that allows for the creation of automated workflows and technology-assisted productions. This includes artificial intelligence (AI) and other technology approaches that aim to reduce production effort whilst increasing production quality.

The value proposition driving the move towards Broadcast 3.0 is derived principally from the shift from linear baseband to nonlinear streams where audio and video signals are digitised data packets. Network management and standardised control protocols take care of automating the production process, or workflows. Automated user assistants such as auto-mix in audio or auto-keying in video are only the first visible incarnations of what sophisticated user assist systems and AI technology will add to broadcast productions. With Broadcast 3.0, the "doing more with less" mantra is becoming a reality. n
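As a small illustration of the 'recall a complex MCR setup at the press of a button' idea described above, here is a hedged Python sketch of a routing salvo: a named set of source-to-destination routes applied in one operation. All device names are hypothetical, and the take_route callback stands in for whatever control protocol or orchestration API a real plant would use.

```python
# A named "salvo": every route that makes up one production setup,
# recalled as a single operation rather than patched by hand.
SALVOS = {
    "evening-news": {
        "studio-cam-1": "mixer-in-1",
        "studio-cam-2": "mixer-in-2",
        "graphics-1": "mixer-in-5",
        "presenter-mic-1": "audio-desk-ch-1",
    },
    "continuity": {
        "playout-server-a": "tx-chain-1",
        "playout-server-b": "tx-chain-2",
    },
}

def recall_salvo(name, take_route):
    """Apply every route in the named salvo via take_route(source, destination)."""
    for source, destination in SALVOS[name].items():
        take_route(source, destination)

# Example with a stand-in routing function that simply prints each route.
recall_salvo("evening-news", lambda src, dst: print(f"route {src} -> {dst}"))
```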
NETFLIX’S GROWING INFLUENCE ON EUROPE’S SVOD OUTLOOK Mohammed Hamza and Michail Chandakas, analysts at S&P Global Market Intelligence’s Kagan unit, explore the current state of play in Europe’s video-on-demand market through the company’s forecasted revenue analysis in 14 national markets
The over-the-top (OTT) subscription online video-on-demand market in Europe should keep growing over the next five years, with Kagan, a media research group within S&P Global Market Intelligence, projecting the market to reach $6.8 billion in revenues in 2022, up from $3.9 billion in 2017. Primary contributing factors include the presence of localised Netflix Inc. throughout the region and its integration into pay-TV systems; the expansion of Amazon.com Inc.'s Amazon Prime Video as a standalone service; and the debut of international OTT services such as Time Warner Inc.'s HBO and Naspers Ltd.'s Showmax, as well as the strengthening of offerings from local media providers such as Sky plc's NOW TV and ProSiebenSat.1 Media SE's Maxdome. We assessed 14 European national markets including Denmark, Finland, France, Germany, Ireland, Italy, Netherlands, Norway, Portugal, Spain, Sweden, the UK, Poland and Russia. According to our estimates,
Figure 1
total paid subscriptions in that group of countries will grow from 39.3 million in 2017 to 60.8 million in 2022, a compound annual growth rate of 9.1 per cent. This number represents paid subscriptions to online video services and excludes registered accounts through sale promotions or multiplay bundles. Netflix leads in active paying users in all countries but Germany, Poland, and Russia. It is investing heavily in local content in the majority of its European markets. In Germany, Amazon Prime Video is ahead of the competition, having capitalised on the earlier success of Amazon Prime. On the other hand, in eastern Europe, local services have been able to dominate even when facing bigger international entrants. The majority of these services adopt a hybrid AVoD/TVoD/SVoD model focusing predominantly on advertising to generate revenues. Unlike in the US, where multichannel homes are on the decline, in Europe cord-cutting has been less prevalent, with the average home
Figure 2
Figure 3
stacking SVoD services on top of a basic pay-TV subscription. As shown in Figure 2, Kagan expects Europe's SVoD subscriptions to keep growing at a much higher rate than multichannel subscriptions in the next five years. Even though the growth rate for both sectors should slow, that decline will be steeper for the online video sector as it approaches maturity. Looking at 2017 estimated gross national income purchasing power parity (GNI PPP) indexes by country combined with the average monthly cost of a subscription to a VoD service provides an indicator of SVoD affordability for a market's average household. Even though Denmark has the highest average monthly subscription cost at $13.41, Portugal has the least affordable SVoD services, with an index of 0.4 per cent. Russia has the most affordable monthly online video subscription with an index of 0.06 per cent, due in part to high levels of piracy, income inequality, the prevalence of free ad-funded services and a dominant multichannel sector.

NETFLIX IMPACTS OPERATOR SERVICES, DEVICES, CONTENT DRIVE
Netflix's pay-TV system integration has reached ubiquity in western Europe with 80 per cent of the 50 cable, IPTV and DTH operators that integrate third-party OTT SVoD services hosting the service, a trend kicked off by Virgin Media in 2013. All operators surveyed across 16 territories carried at least one third-party OTT or SVoD service. From 15 operators in March 2016 to 40 as of June 2018, Netflix's integration into pay-TV systems has grown nearly threefold. The notable pick-up in operator deals over the last year highlights how significant a contributor they are to the growth of Netflix globally. DTH operators had shunned Netflix and third-party OTT carriage until Sky announced in March 2018 that it had also signed a deal with Netflix. Netflix represented a greater threat to DTH operators with content assets to protect: proprietary TV networks, production and long-held key content distribution deals with the majors. Consumer dissatisfaction with high costs for largely DTH-controlled premium content – until the recent emergence of telco and OTT content challengers – and long term contracts have aligned with changing viewing
Venn diagram
habits. Meanwhile, the emergence of flexible, low-cost OTT options in markets with strong free-to-air broadcasters hosting popular digital distribution platforms has tipped the competitive dynamic away from satellite pay-TV. Netflix is now firmly embedded in the pay-TV ecosystem, from the user experience and interface to billing and marketing. Sky, for instance, plans to display popular Netflix shows alongside its own content on the Sky Q platform, a testament not only to Netflix's influence across Sky's European markets where Sky runs proprietary OTT services but also to declining subscriber bases in Italy, the UK and Ireland (with slowing growth in Germany and Austria). Sky's own OTT future could yet receive a boost depending on the outcome of the battle among Disney, Fox, and Comcast to buy out Sky, and those suitors' respective OTT ambitions. Few operators without Netflix carriage or integration remain: Canal Plus SA in France and Modern Times Group's Viasat in the Nordics, as well as a handful of tier-two players: Voo in Belgium, Pyur in Germany, Altibox and Get in Norway, Free in France, NOS in Portugal, and Vodafone and Cosmote in Greece. Even Altice has reversed its position, driven by recent subscriber turmoil in its home market of France – its SFR unit was one of the first to integrate Netflix in October 2014 before being dropped as Altice embarked on its own ambitious and costly content creation and rights-acquisition programme. From June 2017, Altice added Netflix in France and Portugal (as well as Israel and the Dominican Republic). Other major groups to join include Deutsche Telekom, which extended an existing deal in October 2017, and Telefonica in May 2018. OTT influence continues to grow. In countries where high-speed broadband development has proliferated, OTT services have also emerged as a much more viable early-mover option for distributing UHD content versus traditional linear channels. Europe has the most OTT services streaming UHD programmes, with 18 providers as of end-2017. Most of these services were made available in 2014 and 2015. n
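As a quick check on the growth figures quoted at the start of this piece, the compound annual growth rate can be reproduced in a couple of lines of Python. The revenue CAGR is not stated in the article; the roughly 11.8 per cent printed by this snippet is simply what the quoted 2017 and 2022 figures imply.

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start value, an end value and a horizon."""
    return (end / start) ** (1 / years) - 1

# Figures quoted above: 39.3m -> 60.8m subscriptions and $3.9bn -> $6.8bn
# in revenue between 2017 and 2022, i.e. five years of growth.
print(f"subscriptions: {cagr(39.3, 60.8, 5):.1%}")   # ~9.1%, matching the article
print(f"revenue:       {cagr(3.9, 6.8, 5):.1%}")      # ~11.8%, implied by the revenue figures
```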
ADDRESSING SECURITY CONCERNS WITH CLOUD-BASED TV PLATFORMS By Chem Assayag, EVP marketing and sales, Viaccess-Orca
Data collection and analytics have opened new doorways for operators in terms of improving advertising opportunities and revenue generation. Yet maintaining compliance with privacy regulations, such as GDPR, and protecting content from illegal streaming on the web are concerns, especially as operators shift from traditional infrastructure to the cloud.

TV ANALYTICS WILL GIVE YOUR BUSINESS AN EDGE
The case for TV business analytics in the broadcast industry is well established. Most broadcasters understand that big data plays an important role in helping to personalise TV content for each viewer. Moreover, tight integration of analytics with a cloud-based TV platform provides 360-degree visibility into channels' performance, helps optimise content acquisition, reduces churn, and enhances the viewer experience. On its own, big data has no value. It is when actionable insights are drawn from that data that it makes a significant impact on broadcasters' ability to optimise services and personalisation. Today, broadcasters can use cloud-based video platforms to process large-scale data sets in real time and analyse that data over long periods. The cloud can rapidly access information from multiple sources. This process is much more efficient than traditional hardware-based systems, in part because a large volume of data is now stored in the cloud. As a larger amount of data is collected and stored within clouds, including cloud TV streaming platforms, efficiency will continue to climb. While it's clear that data is beneficial for
broadcasters, many in the industry are unsure about how GDPR will affect their business. The regulation came into force on May 25th and must be complied with by any business that serves EU customers, whether it's based within or outside the EU. The penalties for non-compliance are severe in terms of fines. Data integrity, accuracy, relevancy, and legally justified processing are critical in order to maintain compliance with GDPR. In particular, consent given by the customer is valid only if customers give it freely, based
broadcasters can become trusted by consumers and use data to elevate the user experience. IS THE CLOUD SECURE? In today’s world, where sharing personal data is commonplace, it’s only natural that consumers question whether their information is safe. Data breaches seem to happen frequently. Broadcasters have even more at stake than traditional companies because they are protecting more than just consumers’ data; they also have a responsibility to protect their own content. Illegal streaming on the web is prevalent today. As the broadcast industry migrates to cloud-based video platforms, broadcasters need to ensure that those platforms are secure. It wasn’t too long ago that the cloud was viewed as less secure. Yet reported data breaches against cloud-based analytics services have been far fewer than those against on-premises systems. When data is shared over a direct and secure interconnection, which bypasses the public internet, a breach is even less likely. To successfully protect content, broadcasters need a cloud-based TV platform with robust multi-DRM technologies, which safeguard content as it is distributed to multiple consumers on multiple devices, in addition to strong anti-piracy protection measures. By using an end-to-end, cloud-based TV platform as a service, such as VO’s TVaaS, broadcasters can deliver compelling TV services, including personalised content, and increase their monetisation. The key is choosing a platform with advanced analytics, robust content protection, and effective anti-piracy measures, compatible with today’s evolving privacy and security requirements. n
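To make the consent point concrete, the sketch below shows how a viewing-analytics pipeline might gate processing on purpose-specific consent. It is a minimal illustration only: the event fields, consent store and purposes are assumptions for this example, not features of any particular platform.

```python
# Minimal sketch of consent-gated analytics processing under GDPR-style rules.
# The field names and the in-memory consent store are illustrative assumptions,
# not the API of any particular cloud TV platform.
from dataclasses import dataclass

@dataclass
class ViewingEvent:
    viewer_id: str
    channel: str
    duration_s: int

# Consent is recorded per viewer *and* per processing purpose, because consent
# given for one operation (e.g. recommendations) does not cover another (e.g. ads).
CONSENT = {
    ("alice", "personalisation"): True,
    ("alice", "ad_targeting"): False,
    ("bob", "personalisation"): False,
}

def process(events, purpose):
    """Keep events only where the viewer consented to this specific purpose;
    everything else is counted anonymously so no personal data is retained."""
    personal, anonymous_seconds = [], 0
    for ev in events:
        if CONSENT.get((ev.viewer_id, purpose), False):
            personal.append(ev)
        else:
            anonymous_seconds += ev.duration_s   # aggregate counts only, no identifiers
    return personal, anonymous_seconds

events = [ViewingEvent("alice", "Sports 1", 1800), ViewingEvent("bob", "Sports 1", 600)]
kept, anon = process(events, "personalisation")
print(len(kept), "personalised events,", anon, "seconds kept only in aggregate")
```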
FEATURE
REMOTE PROMPTING ACROSS CONTINENTS By Robin Brown, product manager for Autoscript, a Vitec Group brand
PICTURED BELOW: “As illustrated, all networked devices can be accessed and configured from the prompter application either locally in the main broadcast centre, or remotely in the field, with minimal delay and resources. The prompted text can be controlled by operators in either location.”

Coverage of large, multinational sporting events creates tremendous opportunities for broadcasters, but also some complex and daunting production challenges. A great example is the recently concluded global football event, in which matches were spread across 11 cities, 12 stadiums, and four time zones. Such a large and geographically distributed event creates huge logistical headaches, not the least of which is how to move people and equipment from one location to another. At the same time, broadcasters’ budgets are squeezed like never before, and they’re facing a constant and growing requirement to produce greater volumes of content with fewer resources. Remotely integrated at-home production (REMI) is a powerful and relatively new approach that directly addresses these challenges. In principle, REMI reduces the number of crew members needed on location and allows more people to help run a show remotely from a broadcaster’s fixed facility. Fuelled by the growth of
IP-based media networks and operations, REMI enables the broadcaster to take better advantage of all its technical and human resources and increase both its production efficiency and output. By reducing the movement of people and equipment, the broadcaster can lower operating expenses, shorten on-site setup times, and minimise the production’s environmental impact. Teleprompting is a key area that lends itself well to REMI workflows. While other broadcast functions have yet to completely prove their viability for remote broadcasting, advances in IP prompting technology have answered many of operators’ concerns around reliability and latency. So what’s required for a prompting solution that can gain broadcasters’ complete trust in a REMI environment? At the base level, the prompting system should be completely IP-enabled, with every component designed from the ground up around an end-to-end IP workflow. In this manner, the prompting system can deliver the connectivity, flexibility, ease of use, and redundancy critical for live broadcast operations. Another important factor for prompting in a REMI environment is the ability to send small unicast data packets over the network, instead of the full video feed, for displaying the video script on the prompting monitors. With only small amounts of data sent over the IP network, every monitor can remain in constant communication with the master application to ensure reliable synchronisation and enable centralised management. Rather than all monitors relying on a single video stream from a video generator – a single point of failure – each monitor on the network renders its own script video. This avoids the bandwidth, latency, and synchronisation issues that can arise with full video-over-IP solutions. Going back to the previously mentioned large sporting event, one broadcaster deployed Autoscript’s Intelligent Prompting workflow across two continents. The broadcaster installed two prompting operations, one at local HQ and one locally at the event. Whichever location was directing the content also controlled the prompting, enabling the two sites to share resources efficiently. A backup PC was installed at the international broadcast centre but it sat unused for four weeks.
The broadcaster’s biggest challenge was how to improve the latency of the audio signal, because the latency of the prompting feed was almost undetectable. In such a prompting setup, integrating local and remote productions becomes seamless, with the option for one operator to prompt for all studio locations. This means, for example, that highlights shows from a local HQ can include segments from the remote studio and control the script for all locations. When it comes to prompting workflows, one of the biggest questions on many broadcasters’ minds is reliability. Of course, for global live events, implementing a fail-safe is critical. An IP-based prompting workflow creates connectivity without requiring point-to-point connections, and therefore delivers core resilience. At the prompting application level, this enables a full-redundancy fail-safe by which a mirrored PC, paired with the main application, can seamlessly take over if the first PC/application fails – with automatic synchronisation of script position, individual users’ settings, controllers, newsroom connections, teleprompters, etc. This remote redundancy also safeguards against any local network issues at the broadcaster’s headquarters. Since all elements of the prompting system are
networked devices and therefore not physically connected to each other, controllers in any control room or studio can continue serving prompts from the backup machine, wherever it is in the world. In the example above, the mirror PC that went unused at the IBC could have provided a remote backup if needed, at the ready for the operator to take control with a single click. Because the switch from the host PC to the backup is immediate, the on-air talent might notice nothing more than a fractional, millisecond pause in the scroll speed – if they notice anything at all. In this manner, a remote IP prompting workflow can actually offer more security than a traditional local video workflow. IP prompting workflows – based on transfer of data packets rather than full video feeds – are now coming of age. That doesn’t mean that video-based prompting workflows are dead; in fact, a solution like Intelligent Prompting supports customers who are still working with a video workflow by including both video and Ethernet inputs, enabling them to make a seamless transition in the future. Any production planning a major remote event should consider the logistical, cost, environmental, and employee welfare benefits of integrating prompting into a remote, IP-based workflow. n
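The data-packet approach described above can be sketched in a few lines. The example below is illustrative only (the packet format, port and monitor addresses are assumptions, not Autoscript's actual protocol), but it shows why sending a few dozen bytes of state to each monitor, rather than a rendered video feed, keeps bandwidth and latency low.

```python
# Minimal sketch of the "small data packets instead of full video" idea described
# above. The packet format and addresses are illustrative assumptions: the master
# application unicasts the current script position to each monitor, and every
# monitor renders the text locally from its own copy of the script.
import json
import socket
import time

MONITORS = [("10.0.0.21", 5005), ("10.0.0.22", 5005)]  # hypothetical prompter monitors
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_scroll_update(script_id: str, position: float, speed: float) -> None:
    """Send a few dozen bytes of state to every monitor instead of a video frame."""
    packet = json.dumps({
        "script": script_id,
        "pos": position,      # scroll position in lines
        "speed": speed,       # lines per second, set by the operator's controller
        "ts": time.time(),    # timestamp lets monitors stay in sync after packet loss
    }).encode("utf-8")
    for addr in MONITORS:
        sock.sendto(packet, addr)

# The operator's controller drives the loop; each monitor interpolates between
# updates, so a single dropped packet causes no visible glitch.
send_scroll_update("evening-bulletin", position=42.0, speed=1.5)
```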
PICTURED ABOVE: Robin Brown
PRODUCTION AND POST
WHEN PIONEERS COLLABORATE, THE CUSTOMER WINS Robert Erickson, advanced technology director at Grass Valley, on the company’s recent collaboration with Cisco
The media industry is experiencing one of its biggest transformations since the shift from black and white to colour. Consumers are moving away from the schedule-driven content consumption of the past to curating their own selection from a range of sources, such as online platforms and social media. The downturn in linear TV viewing is well reported. A recent Ampere Analysis study notes that “the higher the uptake of non-linear viewing devices in a given country, the lower the viewing time spent on scheduled TV.” This places broadcasters and media organisations under enormous pressure to respond swiftly to this shift in consumer habits. They must now ensure that they tell powerful stories across a diverse range of platforms so consumer awareness of and engagement with their brand remains strong. For many broadcasters, IP is what will ultimately underpin this strategy. The industry is experiencing the early phases of this transformation as broadcasters and media organisations are increasingly ‘going digital’ by moving their content workflows and infrastructures to IP. Such a transition enables signals to be delivered anywhere from anywhere so they can be seamlessly processed to provide rich content services – an impossible feat for a traditional linear environment. IP also delivers the scalability and flexibility needed to adapt and evolve services to meet consumers’ ever-changing demands. The vendor community has also had to respond to the demands of its customers. Yet when it comes to IP, the media industry lags behind others such as AV and IT, where IP has been embraced for many years.
Collaboration with specialists from other industries and the cross-pollination of ideas is invaluable, helping to ultimately provide the best solutions to customers. When it comes to delivering the best IP infrastructure for video production, collaboration enables all parties to gain deep knowledge and important insight that informs their innovation roadmaps. This was exactly what Grass Valley and Cisco set out to achieve at the recent Cisco Live US 2018 event in Orlando, which saw a real evolution of what has been a three-year, all-IP live production infrastructure and workflow partnership. Cisco Live brings together top experts, customers and partners to provide a centre stage for the latest technical innovations. The Orlando event was all about imagining intuitive IT and attracted around 25,000 attendees, including those who participated in the Technology Ecosystem Partner showcase.
Cisco had very specific requirements for the Cisco TV studios at the event and wanted a live production infrastructure and workflow that was extremely reliable. Grass Valley was selected for its commitment, interoperability, use of industry standards in its system designs and products, and for its collaborative approach. Furthermore, it was able to deliver a highly fault-tolerant, redundant architecture based on SMPTE ST 2022-7. By marrying Grass Valley’s video production and processing expertise with Cisco’s wealth of IP knowledge, the two were able to develop an all-IP live production infrastructure to support Cisco TV studios – the on-site studios that handled the day-to-day operation of live streaming and broadcast. Throughout the event, live broadcasts of the keynote and showcase sessions were available via Cisco’s websites, as well as live streaming via Facebook and YouTube. The 24-hour-a-day operation delivered content to a global audience of around 300,000 every day. Replicating a live television production environment, Cisco TV was a perfect test bed for combining Cisco’s deep IP know-how and Grass Valley’s expertise in live production and content delivery to multiple platforms, including social media. The combination of Grass Valley live production workflows with Cisco IP infrastructure delivered a highly scalable solution that can easily grow to support more complex and larger production needs. The success of this collaborative approach has already been seen in other project wins from some of the leading broadcasters and media organisations around the world. n
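The principle behind the ST 2022-7 protection that underpinned the system can be sketched briefly. The code below is a conceptual illustration of hitless merging of two redundant streams, not Grass Valley's or Cisco's implementation: the receiver keeps the first-arriving copy of each packet and discards the duplicate from the other path.

```python
# Conceptual sketch of the hitless protection idea behind SMPTE ST 2022-7: the same
# stream is sent over two independent network paths, and the receiver reconstructs
# one clean stream by keeping the first copy of each sequence number that arrives.
from collections import OrderedDict

class SeamlessReceiver:
    def __init__(self, window: int = 1024):
        self.window = window              # how many recent sequence numbers to remember
        self.seen = OrderedDict()         # sequence number -> True, in arrival order

    def accept(self, seq: int, payload: bytes):
        """Return the payload if this packet completes the merged stream, else None."""
        if seq in self.seen:
            return None                   # duplicate from the other path, discard
        self.seen[seq] = True
        if len(self.seen) > self.window:
            self.seen.popitem(last=False)  # forget old sequence numbers
        return payload

receiver = SeamlessReceiver()
# Packets from path A and path B interleave; each sequence number is emitted once,
# so losing either path entirely still leaves an unbroken output stream.
for path, seq, payload in [("A", 1, b"f1"), ("B", 1, b"f1"), ("B", 2, b"f2"), ("A", 2, b"f2")]:
    out = receiver.accept(seq, payload)
    if out is not None:
        print(f"emit seq {seq} (first arrival via path {path})")
```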
PRODUCTION AND POST
EXTREMELY INTELLIGENT Jenny Priestley talks to Norway’s TV 2 to find out how they’re using intelligent display systems at their Oslo and Bergen facilities
One of the most important parts of any broadcast facility is its control system. With studios and control rooms often being used by different broadcasters and their production teams, facility managers are looking for systems that can be operated by non-technical staff, expanded as requirements dictate, and reduce operational costs in the process. TV 2, Norway’s largest commercial broadcaster, recently installed IPE’s IDS production timing system at its sister broadcast and production facilities in Bergen and Oslo. The IDS system includes door displays, scheduling information, tallies, and video feeds, all derived from different systems within the existing workflow. Used in conjunction with TV 2’s booking system and a Lawo Virtual Studio Manager (VSM), IDS displays provide information – at a glance – as to when a studio is scheduled for rehearsal, recording, or on-air, as well as the type of production. A live IPTV video feed of the channel that is being worked on in a studio or commentary room can also be shown on the displays. Alf-Inge Tønder is technical operations manager at TV 2, and has been involved in the installation of IDS in the new facility at Bergen: “We began employing it as an information display system. That meant we were able to use it to display what’s scheduled to be on air in the gallery or in the studio,” he explains. “Then we realised we could start using it in our offscreen displays, so for example in our sports commentary units. We can use it to control a whole studio, so the lighting, background etc.” “IDS allows us to control the lights in the studio, it controls background screens on the set so we can change the background on the screen.
“At first glance it’s a bit confusing just because it can do so much. It becomes a lot easier when you actually start learning it.” ALF-INGE TØNDER
The whole studio is more or less controlled by the IDS system,” continues Tønder. “That means if we’re using an unmanned studio, so if the operator is not technical, IDS is perfect because it’s a simple layout for the user. They just have to press one button and the studio is more or less ready.” Ease of use has been a key factor in TV 2’s decision to employ IDS according to Tønder, who says the team is finding it much easier to use than what he describes as the high-end equipment: “Once you’ve got the hang of the system it’s easy to use. At first glance it’s a bit confusing just because it can do so much. It messes with your senses a bit I think,” he laughs. “It becomes a lot easier when you actually start learning it.” “The system is more or less automatic,” adds Tønder. “It’s getting information from different places like booking systems and scheduling - we hope anyway as we are currently changing our scheduling system as well!” Of particular interest to Tønder is IDS’ live IPTV video feed of the channel that is being worked on in a studio or commentary room which can also be shown on the displays. He says he already has an idea of how to use it to help bolster security: “I’m hoping the IDS system will allow us to access a video of the studios when they are off air. So you can actually be outside the studio and see what’s happening inside. That means you can keep a check on a studio even when you’re off site. That’s my idea anyway, it just needs a bit more work. “The main reason I want to do that is because our maintenance people normally don’t know when a studio is on air or not. Hopefully this would stop them ending up in shot during a live broadcast!” n
CLOUD CHASERS Sony Professional Europe’s Virtual Production service recently helped Red Bull stream the annual Alpenbrevet motorcycle race live from the Swiss mountains. Colby Ramsey headed to Sarnen to find out how the new cloud-based solution can take content creators beyond the restrictions of physical production infrastructure
Up to now, high-quality and reliable live production has notoriously required significant infrastructure, major financial investment and a plethora of trained technical staff. Enter Virtual Production. The Red Bull Alpenbrevet motorcycle race in Sarnen, Switzerland recently provided the perfect opportunity for Sony to showcase this new on-demand cloud production service, which aims to provide a complete production toolset for multi-platform content creation and delivery. At the one-day event, Red Bull Switzerland used Virtual Production’s cloud-based services and 4G connectivity to capture and broadcast the entire race to its social media platforms using a limited number of camcorders and a single laptop to operate the Virtual Switcher GUI. “I heard about Virtual Production last year at the Locarno Festival and straight away we started thinking about opportunities where we can use this product, so we got in contact,” says Red Bull Switzerland production
manager Hubert Zaech. The Alpenbrevet has grown in terms of the number of participants every year since its inception in 2010: “Some are hardcore and some are first-timers but the goal is simply to have fun and enjoy the weekend,” Zaech adds. “It’s important to note however that it is not a race, because motor racing itself has traditionally been banned in Switzerland. Instead it is a tour, with no real winner, while prizes are awarded to the teams with the fastest times and wackiest appearances.” The Alpenbrevet event has become an important date in the calendars of casual motorcycle racers across Switzerland and Europe. While the scenic route of the race is a big draw for participants, it poses significant challenges for production teams, and so Sony’s Virtual Production proved to be the perfect solution for the event. It provided the Red Bull team with the freedom, mobility and agility to capture and distribute content quickly, without having to deal with the logistical nightmare of installing physical infrastructure in a
remote, mountainous location such as this. “We had six Sony cameras: PXW-Z90s, the X200 which was out on the course, and the X3000 small action cam, which was rigged to a bike for a POV viewpoint during the race,” Zaech explains. “These captured live action from four different locations around the course, including from high-altitude summits, while just one laptop managed all the production from a small production office in Sarnen.” Virtual Production aims to create a frictionless, subscription-based production workflow with access to a cloud-based professional vision mixer, so that content can be delivered quickly to social media, websites, apps or CDN platforms. “It’s a complete production toolset that’s optimised for covering music or sports events like this,” says Sony Professional Europe’s marketing manager for Live Production Solutions, Nicolas Moreau. “We responded to various requests from the market to be able to produce content cheaper but still with professional quality video.” It remains clear that there are opportunities where live streaming is a very good tool for companies to drive and build audiences, and increase awareness of their brand. With media going mobile on a global scale, geographic location is always an important factor to consider, so Sony developed the solution to be able to deliver from anywhere, to anywhere: “Overall mobile connectivity is getting better and this is opening up a whole host of new possibilities,” Moreau adds.
LIVE CONTENT IS KING Sony’s Virtual Production proposition is decidedly simple, and it took just a two-hour training session for the Red Bull production teams to get to grips with the solution and its tools. The service has been designed to take the cost out of live production whilst empowering content creators – including the likes of vloggers, Instagrammers and Twitch streamers – to grow their reach and revenues. With just a camera acquisition kit, production teams can log into and synchronise with their personalised Virtual Production portal, and stream content on a pay-as-you-go basis. “What is interesting is that with this post-live workflow, you are recording into the cloud, so after this you can do some non-linear edits and then publish straight to a website,” says Moreau. “It is important to be able to do this because you can multiply the number of total viewers by an average of five to seven times this way.” Camera crews on location use wireless transmitters to feed into a virtual production switcher that is hosted in the Amazon Web Services cloud. At the same time a vision mixer based anywhere in the world logs into the Virtual Production service using an ordinary web browser. From here, users can be assigned specific roles and can switch camera feeds, add graphics, logos and captions and stream
the output to a range of different platforms at speed, all without the need to run or install any software. “It is pretty simple to operate, which is why we can target new customers like digital media groups who are not necessarily AV professionals,” Moreau adds. “This application for Virtual Production at the Red Bull Alpenbrevet was a typical and rather perfect use case. It is a medium-sized event that requires a professional-quality stream for a major brand like Red Bull, so the mobility of the solution here was of utmost importance.” Virtual Production aims to fill in the gaps between traditional, heavyweight live production solutions and consumer live streaming applications. With its core production hosted in the cloud, the flexibility of the service enables it to cover a wider range of events while still offering professional programme outputs. “Normally for this event we have a film crew who create a highlight clip for social media,” Zaech remarks. “Our goal for this year was to improve on this and live stream the entire event. We tested the network coverage of the entire track and were able to achieve 4G coverage throughout. We had more riders and spectators than ever before, so this could have been a potential problem for bandwidth, but this was not the case.” What’s for certain is that the Red Bull Alpenbrevet was the perfect event to showcase a solution of this calibre, with a large area to cover but a small enough event to stream exclusively and successfully to social media. “I think the other side of the production workflow, with an OB van and satellite transmission, will still be part of the process going forward and Virtual Production will work in tandem with it, but for smaller events like this a cloud-based solution is fantastic,” says Zaech. “We’re forever trying to push the boundaries of innovation when it comes to covering our events.” n
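The browser-driven workflow Moreau describes boils down to small control messages sent to the cloud switcher. Sony has not published the Virtual Production control API, so the message shapes in the sketch below are entirely hypothetical; it simply illustrates the kind of lightweight commands (cut, overlay, start stream) that a web front end might issue.

```python
# Illustrative sketch only: the endpoint and message shapes are hypothetical, not
# Sony's API. The point is the workflow described above, where a browser or script
# drives a cloud-hosted switcher with small commands rather than moving video.
import json

def control_message(action: str, **params) -> str:
    """Build the kind of small JSON command a web UI would POST to the switcher."""
    return json.dumps({"action": action, **params})

commands = [
    control_message("cut", source="camera-2"),                            # switch programme feed
    control_message("overlay", url="https://example.invalid/logo.png"),   # key a logo or caption
    control_message("stream_start", rtmp="rtmp://ingest.example.invalid/app",
                    key="<stream-key>"),                                  # push to a platform or CDN
]
for cmd in commands:
    print("POST /api/control ->", cmd)   # in production these would go to the cloud switcher
```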
PICTURED ABOVE: Virtual Production in action
PRODUCTION AND POST
TALKING HEADS:
CHALLENGE, OPPORTUNITY, & THE DRIVERS OF CHANGE Technology is driving evolutionary change across the traditional broadcast industry, and its impact on the production and post-production landscape poses a big challenge for operational and technical professionals looking to introduce efficiencies into their workflow. Joining our roundtable discussion forum to discuss the areas of challenge and opportunity within our marketplace, are: Bea Alonso, director of global product marketing, Ooyala; Ian McDonough, CEO, Blackbird; and Stuart Almond, head of marketing and communications, Sony Pro Europe
When you think about technology in production and post, what are the biggest areas of challenge/concern?

Ian McDonough: Our customers tell us that scalability is a huge challenge for their organisations. Increased content throughput raises the importance of speed to visibility across the supply chain, over multiple geographies and on lower bandwidths, and of access for instant review workflows without traditional video rendering and export delays. The ability to view content instantly anywhere enables focus on fulfilling all outputs in a multi-platform workflow. Scale increases cost, and our customers are looking for opex financial structures to limit capex spikes. Blackbird is focused on scalability with cost-efficiency.

Stuart Almond: Access to content. We know that the volume of content created is growing at an astronomical rate – and as more and more content creators are enabled to create, stream, produce and deliver content to an unprecedented number of platforms, I’d argue that storage is only the tip of the iceberg when we look at the problem to come. Finding, processing and sweating what is in effect a media company’s most valuable asset, its content, is a real need for many of our customers right now. But the need for this to be more consolidated across the business, deliver more collaboration amongst users, and ultimately increase speed of delivery becomes the real focus.

What are the biggest areas of opportunity?

Bea Alonso: Without a doubt, the ability to increase revenue opportunities by introducing efficiencies in the way content is produced and post-produced. Integrated MAM-based content management systems, which can easily scale up and down, work in the cloud, on premise or as a hybrid, and which can integrate multiple best-in-breed solutions, are revolutionising content production. Media companies gain control and visibility when they connect their content supply chain: traditional production bottlenecks can be identified and addressed, and time savings can be realised, allowing creative teams to produce more content, in less time, at a fraction of the cost. Turner Asia deployed the Ooyala Flex Media Platform to automate and orchestrate workflows, reducing content processing time by 85 per cent. This is presenting them with new revenue opportunities, not only to be able to deliver content quicker to their existing syndication partners, but also to explore new markets and licensees.

What will be the next big driver of change?

BA: Artificial Intelligence is taking the media production industry by storm and introducing opportunities for efficiently indexing content and automatically adding metadata that facilitates content search. Ooyala leverages cognitive services, such as Microsoft Video Indexer, to collect key metadata such as speech-to-text transcriptions and facial, sentiment and object recognition that allows editors and producers to quickly locate and edit content. We are seeing companies already benefiting from implementing this solution, automatically logging hours of live content with meaningful metadata that can, in turn, trigger workflows such as creating highlights and managing content lifecycle rules. It may be possible in the near future to leverage AI to automatically suggest relevant rushes and b-roll to video editors during the post-production process, releasing them from tedious, time-consuming manual work.

IM: We are seeing a tipping point in the mass acceptance and adoption of cloud technology. Customers do not want heavyweight on-premise solutions to be moved to the cloud; it simply isn’t efficient or sensible. Having lightweight applications that run as cloud-native and browser-based is far more flexible than moving a traditional hi-res workflow to the cloud and providing a virtualisation layer for user access. That is just moving the problem somewhere else. Adding AI and ML services to accelerate previously manual tasks like captioning is a huge advance for enabling consumption on other platforms, including social and mobile, where audio is often muted.

SA: As the migration to cloud-based production continues and the rapid growth of live streaming increases, the combined benefits of 5G along with Artificial Intelligence (AI) will help address both the desire for tailored content and the need for richer audience engagement. Disruptive micro-services, provided by a range of different organisations (including Sony), connected through open APIs, will make it easier to benefit from cloud-based MAM services. And as cloud production tools and streaming mature, audiences will have more choice, the ability to experience more live events than ever before, and engage with content in the way that they choose to. n
GUARDING AGAINST DISASTER How VFX house Jellyfish Pictures is safeguarding its content against a major fail
Animation and visual effects (VFX) are key creative elements of film and television post-production that rely heavily on processing and distributing data. Any loss of material can set a project back severely in terms of both schedules and costs. To guard against this, leading London animation and VFX house Jellyfish Pictures has contracted broadcast IT provider ERA to host an extensive disaster recovery (DR) infrastructure for its high-profile operations. Jellyfish Pictures was founded in 2001; since then it has grown from the original “two-man band” into an internationally recognised operation with over 100 employees working across three sites in central and south London. The company’s Margaret Street offices in Soho concentrate on VFX for feature films, which have included work on Solo: A Star Wars Story, The Last Jedi and Rogue One: A Star Wars Story; and broadcast programmes, with clients including the BBC and Netflix. Premises in Brixton and the Oval are primarily dedicated to animation for the children’s TV market, with shows such as Dennis and Gnasher: Unleashed, Floogals and Bitz & Bob. The three facilities are connected over a 10Gb fibre network with Soho as its hub. Storage across the company
is on Pixit Media PixStor software-defined systems. Jellyfish installed its first PixStor in 2014 at the Margaret Street site, where it runs with 260TB of capacity, followed by both Brixton and the Oval, which share 200TB. Animation and VFX projects in progress were backed up to LTO data tape for DR purposes. Although LTO had long been used in post-production for near and deep storage, Jellyfish’s chief technology officer, Jeremy Smith, says it was beginning to fall behind the demands of today’s workflows: “The tapes could not keep up with the amount of data involved now. If you’ve got LTO libraries falling over then that’s not a reliable backup. We need speed of access as well as reliability and had to have something where we could do incremental, Delta backups every evening without problems.” Smith explains that because Jellyfish was already an existing customer of ERA they approached them to provide DR facilities. “It was the natural, easy proposition,” he says. “It’s handy to have the backup under one support contract, partly because it means there are fewer people involved.” ERA has been providing IT workflows for broadcasters, VFX and post-production and media facilities since 1998.
Its services include managed contracts, remote editing, cloud platforms, IaaS (infrastructure as a service), storage as a service, system maintenance and consultancy. ERA’s DR operation for Jellyfish is based at the VIRTUS data centre in Hayes, west London. It is linked to the Margaret Street premises over another 10Gb fibre circuit, with backup from all three sites carried out automatically. “The projects are immediately replicated to the data centre,” comments Sean Baker, commercial director of ERA. “We don’t replicate everything; for example, transient data isn’t included, but all the changes and updates to a project are stored. It’s fully automated from the backup perspective but ERA personnel do monitor the process to make sure it is doing what it needs to do and that the links between offices are live.” Baker adds that while DR is about ensuring work is fully backed up and available, it also has to be readily accessible. “Jellyfish can have between 150 and 200 animators working on projects and if there is a loss of data you can’t have those artists and operators sitting idle,” he says. “You need to know you can recover that data quickly.” Smith agrees: “It’s about keeping the production going rather than wasting time and money waiting to access the backup, which is what would happen if we were still
having to recover data from LTO.” Smith says that the DR system is tested every day at the Jellyfish facilities to check it is running and available. While there have been no major emergencies, there have been a few occasions when the system has been called into action: “We have had some accidental deletions. But it was very easy for the artist to pull back and recall the material that had been lost.” ERA’s data centre also provides recovery capacity for other leading post houses and programme providers including STV, Turner and Absolute. Jellyfish’s DR service is hosted on a 250TB PixStor and so, says Baker, can be classed as a private cloud platform. “Anything counts as cloud when it’s not on a company’s own premises,” he says. Smith observes that file/data-based backup and recovery is now vital as the production and post production of both films and television programmes moves more towards Ultra High Definition/4K with HDR (high dynamic range) processing. “The amount of data growth is staggering,” he says. “Tape libraries just can’t keep up with the file sizes and capacities involved. But the DR offering from ERA has the capacity and reliability we need. In some ways we don’t even have to think about backing up. And because it’s not on our premises, it’s out of sight, out of mind. But we know it’s there when we need it.” n
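The nightly delta replication Smith describes can be illustrated with a simple sketch. The paths and transient-file rules below are assumptions for the example (in practice the replication is handled by the storage platform and monitored by ERA rather than by a script), but the logic of copying only what has changed since the last run is the same.

```python
# Minimal sketch of nightly incremental ("delta") replication: copy only files
# changed since the last run and skip transient data. Paths and transient-file
# rules are illustrative assumptions, not Jellyfish's or ERA's actual setup.
import shutil
import time
from pathlib import Path

SOURCE = Path("/mnt/pixstor/projects")    # hypothetical on-premises storage
TARGET = Path("/mnt/dr-site/projects")    # hypothetical data-centre replica
STATE = Path("/var/lib/dr/last_run")      # records when we last replicated
TRANSIENT = {".tmp", ".cache", ".lock"}   # transient data that is not replicated

def replicate():
    last_run = STATE.stat().st_mtime if STATE.exists() else 0.0
    copied = 0
    for src in SOURCE.rglob("*"):
        if not src.is_file() or src.suffix in TRANSIENT:
            continue
        if src.stat().st_mtime <= last_run:
            continue                       # unchanged since the last delta
        dst = TARGET / src.relative_to(SOURCE)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)             # preserves timestamps for later deltas
        copied += 1
    STATE.parent.mkdir(parents=True, exist_ok=True)
    STATE.write_text(str(time.time()))     # mark this run as the new baseline
    print(f"replicated {copied} changed files")

if __name__ == "__main__":
    replicate()
```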
PICTURED ABOVE: Jeremy Smith, CTO, Jellyfish Pictures
PRODUCTION AND POST
RENTAL VS BUYING: A VENDOR’S PERSPECTIVE The initial outlay required to obtain a significant item of new equipment, such as an ingest server or replay system, can mean that some broadcasters, OB providers and venues rely extensively on rental systems. But with more affordable, easy-to-use solutions now reaching the market, aQ Broadcast CEO Neil Hutchins says it’s time for a return to the ‘rental vs buying’ debate
PICTURED ABOVE: Neil Hutchins, CEO, aQ Broadcast

It can be stated without any fear of contradiction that the issue of whether to buy or rent a major broadcast system constitutes one of the eternal questions of our business. And when it comes to a major item of equipment – for example, a new ingest server or replay solution – where the precise level and extent of usage is uncertain, it can be quite the dilemma. Is it better to invest in a permanent system in the hope that the investment will ultimately be justified, or simply rent in a system to pick up the slack as and when required? The simple answer is that there isn’t always a simple answer! But in light of a new generation of broadcast solutions that are not only more cost-effective but also capable of acting in a truly multi-functional way, there are definitely more instances now where it can make sense for a broadcast customer to take the plunge and invest in more permanent systems. A notable example from my own recent experience underlines the point. We were recently contacted by a major UK sports venue that was in the process of re-evaluating its broadcast systems, which are used to support both the production of in-house content and the requirements of visiting broadcasters. Prior to the review the customer had been renting two high-end server products for each major event. These systems were not especially easy to use for the uninitiated, and therefore entailed not only system rental costs for each project, but also the hiring of specialist operators. As our dialogue progressed we were able to show the customer that there was another way to approach the problem. Integral to this was our ability to offer a range of servers that was more competitively priced than their existing brand of choice, and which could also be readily configured and reconfigured
to meet changing needs. But we also highlighted to the customer that of the two units being rented on each occasion, one was only needed for basic operations and was therefore over-specified. Duly convinced, the customer ultimately went ahead and purchased one of our servers, primarily to fulfil this secondary role but also with the knowledge that its functionality meant it could take on other duties if required. It hardly needs explaining that a straightforward financial component formed an important part of the case. By forecasting its pattern of regular rental charges (Opex) over an extended period against the up-front expenditure (Capex) of purchasing a permanent system, the customer was able to see that there was no great logic to habitually renting an expensive secondary server – or indeed the operator required to use it. Several months on from the supply and installation of our server, the customer is delighted with its performance and is also continuing to make the most of functionality that can further enhance its workflows. At aQ Broadcast we are certain that many others could benefit from taking a similar approach, although it’s true to say that there are certain criteria that need to be in place for the greatest rewards to be reaped. CRUCIAL CRITERIA The purchase price of the permanent system needs to be sufficiently low to justify the Capex expenditure. When compared to projected rental costs, there needs to be a reasonably quick return on investment – ideally, of just a year or two – followed by what is evidently a very substantial saving once the outlay costs have been accommodated.
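That Opex-versus-Capex comparison is easy to put into numbers. The figures in the sketch below are invented purely for illustration; substitute real rental quotes, operator day rates and purchase prices to see where the break-even point falls.

```python
# Back-of-the-envelope version of the Capex vs Opex comparison described above.
# All figures are invented for illustration, not taken from any real quote.
def breakeven_events(purchase_price: float, rental_per_event: float,
                     operator_per_event: float = 0.0) -> float:
    """Number of events after which buying is cheaper than renting."""
    per_event_saving = rental_per_event + operator_per_event
    return purchase_price / per_event_saving

purchase = 30_000.0          # hypothetical server purchase price
rental = 1_500.0             # hypothetical rental charge per event
operator = 500.0             # hypothetical specialist operator hire per event

events = breakeven_events(purchase, rental, operator)
print(f"Break-even after {events:.0f} events")
print(f"At 12 events a year, that is roughly {events / 12:.1f} years, "
      f"then ~£{12 * (rental + operator):,.0f} avoided in rental and operator fees per year")
```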
The new system must also be fully featured and easy to use. In terms of features and functionality, the more that can be done with the system, both in its existing state and through updates and relatively straightforward reconfiguration, the better. With such a system the owner will be well-placed to cope not just with current requirements, but also those that may emerge in the future. Meanwhile, the easier the system is to use, the more accessible it will be to the widest possible selection of operating personnel. Indeed, with some of the latest systems, such as those offered by aQ Broadcast, there is often no need to hire in specialist outside operators at all. This, of course, can pave the way to very substantial reductions in labour costs over the long term. Also, part and parcel of systems that are designed with an emphasis on ease of use is the ability to customise them to suit the preferences of individual operators – a quality that is also widespread in the latest generation of broadcast systems. Once the decision has been made to purchase, the customer may soon realise there are even more benefits than they had envisaged. For example, in the case of a permanently installed server, the system can be left in place, with no need to rig and de-rig for each event. As well as the major ‘headline’ commitments, the system will also be available for smaller events which otherwise would not have justified a rental. There are advantages, too, in terms of maintenance and support. With the system being on-site at all times there is no limitation on carrying out tests or trialling new features and workflows, not least during quieter operational periods. Then there is the scope that a permanent system affords for operator training, enabling all team members to feel confident with the equipment and further reducing the need to engage outside support. Over the long term, forging a direct relationship with the manufacturer can also prove to be a distinct advantage, making it easier for changes to be made to the individual system as well as establishing a pathway for guidance that could impact positively on the entire production workflow. Also, manufacturers will sometimes be able to offer flexible payment plans to help reduce the immediate financial outlay. In our case, we remain happy to work on direct lease, lease-to-purchase and staged purchase plans as alternatives to direct payment-in-advance options. Investing in new equipment is rarely a matter to be undertaken lightly, but as system costs continue to become more competitive the argument for renting on a habitual basis appears increasingly uncertain. So, the next time you undertake an equipment review, it could well be worth consulting some of the leading system providers and finding out quite how much you could save long-term by buying rather than renting. n
‘There are definitely more instances where it can make sense for a broadcast customer to take the plunge and invest in permanent systems.’
PRODUCTION AND POST
TECHNICAL EXCELLENCE
Colby Ramsey finds out how VFX specialist Gradient Effects adapted Stranger Things’ Upside Down tech for its new 8K short Megan
4K has been a talking point for many over the last couple of years, especially for video streaming giants looking to push the technology forward in combination with HDR, which is in turn paving the way for 8K capabilities. Designed to test LA-based Gradient Effects’ new 8K infrastructure, a new five-minute short, Megan, takes viewers deep into the heart of an alien-infested zone, a world reminiscent of the spore-heavy Upside Down the studio created for Stranger Things. “We sat down with our engineers about a year ago because we’d just finished Stranger Things, and we came up with an engineering task for ourselves to be ready for what lies beyond 4K,” says Olcun Tan, VFX supervisor and owner of Gradient Effects and sister studio Secret Lab. In fact, learning the ropes of 8K also meant a complete upgrade of Gradient’s on-site infrastructure, including the addition of a high-speed network, clustered storage and a core engine: “We evaluated our infrastructure, networking, storage etc. and came up with a new approach with regards to how everything is arranged and stored,” adds Tan, who co-produced this latest project. “We had two goals: expanding our capability to do far more complex effects work while at the same time covering the 8K field,” Tan continues. “That’s how we went about Megan. We surmised that the world doesn’t fully understand what we do with simulations, so we should put that into a context that our clients can understand better, and what better example to use than 8K.” Megan is centred around the fallout of an
alien event. One of the focal points is a group of soldiers who start the film rushing to the scene in their Black Hawk helicopters. For these assets, Gradient re-purposed several CG helicopters originally designed for TNT’s The Last Ship, giving them 8K textures and adding tiny intricate details to make the visuals pop. Viewers enter the hot zone after a sensational helicopter crash, set in motion thanks to a lightning system developed for Thor: Ragnarok. On the ground, the area is hazy, blue and raining spores. Before being used on Stranger Things, these spores were a key part of the invisible effects Secret Lab produced for The Revenant. Tapping into its huge asset library, Gradient Effects was able to re-apply the spores and tweak them to fit the new shots, before making sure that nothing drifted in the wrong direction or obscured the actors’ faces.
A TASTE OF THE FUTURE “I would say that 8K is only saturating our pipeline by 20 per cent, because you have to utilise that bandwidth to do far more complicated simulations,” says Tan. “8K is therefore living in our environment very nicely because we are doing simulations which require far more bandwidth than 8K would ever require. The main thing we wanted to showcase with Megan was how well we can do 8K with our pipeline.” Megan arguably demonstrates not only the technical aspect but also the creative aspect of the post production process. Gradient Effects brought its colour grading department closer
together with the VFX hub to streamline and dilute the workflow effectively: “There’s a lot of computation power that we’ve integrated into the small footprint of our facility,” Tan reveals. “You can do some amazing things with a much smaller crew.” The five-minute short includes over 100 VFX shots created by Gradient and Secret Lab. The relationship mirrors a common workflow throughout the company which often sees effects and tools created for feature films by Secret Lab passed down and used by Gradient to increase the quality of their TV productions. “In Secret Lab we work on feature films and special projects internally, and we really try and experiment with the engineering aspect of the project,” explains Tan. “This is then inherited by our mass product outlet – Gradient Effects – which is really focussed on TV and episodic content. Megan is really something we would usually do through Secret Lab because we believe it’s quite unlike anything else in the context of what the industry is doing today.” Gradient will be submitting the 4K mastered version of Megan to several film festivals. The next phase of the project will be the conversion of Gradient’s posted footage into the Dolby high dynamic range (HDR) platform, to be screened at a later date. “HDR is a big deal and this will pull up resolutions into 8K,” Tan concludes. “I don’t think VR will replace these mediums as such - if technology is intrusive and uncomfortable it won’t take off. The combination of HDR and 8K with the simplicity of traditional TV will take it even further.” n
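The bandwidth argument is easy to quantify. The quick calculation below uses standard resolutions with an assumed 10-bit 4:2:2 sampling, chosen for illustration rather than taken from Gradient's actual pipeline, to show why an 8K frame carries four times the data of a UHD 4K frame.

```python
# Quick arithmetic on why an 8K pipeline is demanding: an 8K frame carries four
# times the pixels of UHD 4K. Bit depth and chroma sampling below are assumptions
# chosen for illustration (10-bit 4:2:2), not Gradient Effects' actual settings.
def frame_gigabits(width, height, bits_per_sample=10, samples_per_pixel=2):
    # 4:2:2 averages two samples per pixel (one luma plus half a chroma pair).
    return width * height * bits_per_sample * samples_per_pixel / 1e9

uhd_4k = frame_gigabits(3840, 2160)
full_8k = frame_gigabits(7680, 4320)
print(f"4K frame: {uhd_4k:.2f} Gb   ({uhd_4k * 24:.1f} Gb/s at 24 fps, uncompressed)")
print(f"8K frame: {full_8k:.2f} Gb   ({full_8k * 24:.1f} Gb/s at 24 fps, uncompressed)")
print(f"8K carries {full_8k / uhd_4k:.0f}x the data of 4K per frame")
```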
FIRST OUT THE GATE Dejero has partnered with Team Sky to provide them with an in-vehicle mobile connectivity system for the 2018 UCI World Tour. Colby Ramsey speaks to head of Sky Performance Hub Scott Drawer about how this technical collaboration gave the British professional cycling team the edge during this year’s Tour de France.
Throughout this year’s UCI World Tour, the Dejero GateWay mobile connectivity solution has been enabling Team Sky to monitor live TV feeds, track live environmental data and social media activity within its ‘follow’ vehicles in order to anticipate the route ahead for its cyclists, buying the team precious time to prepare support in the instance of a crash, injury, puncture or other unforeseen event. As part of Team Sky’s research and development programme, the team has been using the Dejero GateWay – an established connectivity solution usually used by broadcasters – in the racing environment for the last eight months, and will continue throughout the duration of the UCI World Tour season, which consists of 37 road cycling events in Australia, China, Europe, the Middle East and North America. The Dejero GateWay solution is now permanently installed in one of the team’s key cars, and was most recently used during the team’s victory at the Tour de France. The world’s most prestigious bike race saw GateWay help Team Sky overcome the connectivity challenges posed by terrain, crowds and weather. So how does it work? The solution provides mobile connectivity by blending together cellular connections from multiple mobile network carriers with the aim of creating more bandwidth, speed and reliability over a secure connection.
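The blending idea can be sketched in outline. The code below is a conceptual illustration only (Dejero's actual scheduling, encryption and packet re-ordering are proprietary and far more sophisticated), showing how packets might be spread across several cellular links in proportion to the throughput each one is currently delivering.

```python
# Conceptual sketch of connection blending: spread packets across several cellular
# links weighted by the bandwidth each is currently delivering. The carrier names
# and starting throughputs are hypothetical values for illustration.
import random

class BondedLink:
    def __init__(self, carriers):
        # carrier name -> measured throughput in Mbit/s
        self.throughput = dict(carriers)

    def pick_link(self):
        """Choose a link for the next packet, weighted by current throughput."""
        names = list(self.throughput)
        return random.choices(names, weights=[self.throughput[n] for n in names])[0]

    def update(self, carrier, measured_mbps):
        """Feed back per-link measurements so the weighting tracks real conditions."""
        self.throughput[carrier] = max(measured_mbps, 0.1)   # never drop to zero weight

bond = BondedLink({"carrier_a": 12.0, "carrier_b": 6.0, "carrier_c": 2.0})
for seq in range(5):
    print(f"packet {seq} -> {bond.pick_link()}")
bond.update("carrier_c", 0.0)    # a dead spot: that SIM effectively drops out of the blend
```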
“During the race the cars and the bikes are moving every day, creating a lot of technical challenges for what is and isn’t possible,” says head of Sky Performance Hub Scott Drawer. “What GateWay does is create a mobile internet platform in some respects, which enables you to stay connected to the world wherever you are.” At the time of interview just after the Tour de France, Team Sky were in month seven of a comprehensive R&D journey with Dejero, a journey that has involved a number of elements. With space constraints due to a packed out vehicle and carbon fibre bikes causing a reduction in signal quality, Team Sky were not short of challenges during this particularly illustrious race. “We had the classics earlier in the year in Belgium and Holland, and then we have other week-long races as we head into the grand tours. In all of these races you’re presented with some quite challenging environments,” Drawer adds. “What we’ve been trying to learn and understand with the use of the technology is where we get really good signal, which enables us to get data into the car – that may be video/weather data etc. – but also where you get black spots.” The device gave the team the ability to hone in to different types of networks, cumulatively blending them from multiple SIM cards to produce a more powerful signal. The reality when dealing with numerous mobile environments however is that this was never going
to be perfect while the team were relying on the local communications environment, as Drawer explains: “When up in mountains with cloud cover, with thousands of spectators on the side the road, a strong signal may be tough to achieve. In some countries, particularly France, the mobile quality is not as well connected as other countries and therefore a lot more dead spots are encountered.” Along with existing connected systems like an in-car radio and TV system, the GateWay solution provided a third network of communication to get information to sport directors. “From January to now, it’s been an opportunity for us to road-test connectivity in the real world, giving us a really good feeling of what’s possible with the technology today and what direction we need to go in with Dejero to constantly improve our operations. To provide an uninterrupted connection wherever you are in the world is the ideal position. There’s no doubt on a number of occasions though that having the Dejero GateWay and the connection is much better than we’ve ever had before.” To get this type of device to work during the Tour de France is arguably the ultimate challenge, and perhaps the harshest test environment it has faced up to now. As a result, Team Sky have learnt a huge amount about its capabilities. “We’ve had lots of snagging that you’re constantly
trying to resolve,” says Drawer. The long term aim for Team Sky is to equip its whole fleet with in-car connectivity, with aspirations to achieve high, reliable levels of connection through which it can receive video and data that’s pretty much real-time. As 5G mobile networks come in and connectivity improves, the capability of Dejero’s solution will surely only get better. “The way we see it is that we’re ahead of the curve,” Drawer observes. “The other thing that we haven’t looked at yet is the concept of satellite broadband. Dejero has got some brilliant partnerships where perhaps in the long-term we may be able to have a satellite dish built in to feed into the GateWay. Our aim is for uninterrupted coverage when we’re moving in extreme environments.” According to Drawer, the beauty of the device as it is, is its simplicity: “In the way we’ve got it set up now, a sport director doesn’t need any specialist technical skills; we jump in the car, turn the ignition on, switch the radios on, and within two minutes you’re live and connected,” he explains. Drawer cites the openness of Dejero and their willingness to engage on a research development angle as the most important aspect for Team Sky. “We know there’s possibility but we need people to come on the journey with us and continue to develop. We’ve been part of the process so we believe that gives us a competitive advantage,” Drawer concludes. n
LOOKING FORWARD IN POST Colorfront Transkoder proves a winner at Roundtable Post in London
If you’re involved in post production, especially episodic TV, documentaries and feature films, then it’s highly probable that High Frame Rate (HFR), Ultra High Definition (UHD) and High Dynamic Range (HDR) will have come your way. “On any single project, the combination of HFR, UHD and HDR image-processing can be a pretty demanding, cutting-edge technical challenge, but it’s even more exacting when particular specs and tight turnarounds are involved,” says Jack Jones, digital colourist and CTO of Roundtable Post Production. Amongst the central London facility’s noteworthy credits are broadcast TV series and feature documentaries
for ITV, BBC, Sky, Netflix, Amazon, Discovery, BFI, Channel 4, Showtime and film festivals worldwide. “Yes, you can render-out HFR/UHD/HDR deliverables from a variety of editing and grading systems, but there are not many that can handle the simultaneous combination of these, never mind the detailed delivery stipulations and crunching deadlines that often accompany such projects,” Jones remarks. Rewinding to the start of 2017, Jones says that, “Looking forwards, to the future landscape of post, the proliferation of formats, resolutions, frame rates and colour spaces involved in modern screened entertainment seemed an inevitability for our business. We realised that we were
going to need to tackle the impending scenario head-on. Having assessed the alternatives, we took the plunge and gambled on Colorfront Transkoder.” Transkoder is a standalone, automated system for fast, high-quality digital file conversion which, since launching in 2014, has become the ultimate post workflow tool for handling the vast range of HFR, UHD, HDR camera, colour, editorial and deliverables formats. Customers include Sony, HBO, Warner Bros., Disney, Fox, Dolby, Samsung, Netflix, Amazon, Deluxe, Technicolor, BBC and NHK. Roundtable Post’s initial use of Colorfront Transkoder turned out to be the creation of encrypted DCP masters and worldwide deliverables for a variety of long-form projects, such as Nick Broomfield’s acclaimed Whitney: Can I Be Me, Noah Media Group’s Bobby Robson: More Than A Manager, Peter Medak’s forthcoming feature The Ghost Of Peter Sellers, and the Colombian feature documentary To End A War, directed by Marc Silver. “We discovered from these experiences that, along with incredible quality in terms of image science, colour transforms and codecs, Transkoder is also incredibly fast,” says Jones. “For example, the deliverables for To End A War involved ten different language versions, plus subtitles. It would have taken several days to complete these straight out of Avid, but rendering in Transkoder took just four hours.” More recently, Roundtable Post was faced with the task of delivering country-specific graphics packages, designed and created by production agency Noah Media Group, for use by FIFA rights holders and broadcasters during the 2018 World Cup. The project involved delivering a mix of HFR, UHD, HDR and HD SDR formats, resulting in 240 bespoke animations, and the production of a mammoth 1,422 different deliverables. These included: 59.94p UHD HDR, 50p UHD HDR, 59.94p HD SDR, 50p HD SDR, 59.94i HD SDR and 50i HD SDR, with a variety of clock, timecode, pre-roll, soundtrack, burn-in and metadata requirements
as part of the overall specification. Furthermore, the job encompassed the final QC of all deliverables, and it had to be completed within a five-day working week. “For a facility of our size, this was a significant job in terms of its scale and deadline,” says Jones. “Traditionally, projects like these would involve throwing a lot of people and time at them, and there’s always the chance of human error creeping in. Thankfully, we already had positive experiences with Transkoder, and were eager to see how we could harness its power.” Using technical data from FIFA, Jones built an XML file containing timelines with all of the relevant timecode, clock, image metadata, WAV audio and file-naming information for the required deliverables. He also liaised with Colorfront’s R&D team, and was quickly provided with an initial set of Python script templates that would help to automate the various requirements of the job in Transkoder. “Colorfront could not have been more responsive,” says Jones. “We were able to take these early scripts and adapt them ourselves precisely to meet the needs of the job.” As it turned out, by deploying Transkoder, Roundtable Post was able to complete the FIFA 2018 World Cup job, including the client-attended QC of the 1,422 different UHD HDR and HD SDR assets, in under three days.
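The kind of automation Jones describes, expanding a specification into an explicit list of renders, can be sketched as follows. The spec format and naming rule are illustrative assumptions rather than Colorfront's actual XML schema or template scripts, but they show how a handful of formats multiplied across hundreds of animations quickly produces a four-figure deliverables count.

```python
# Sketch of expanding a delivery spec into an explicit render list that a
# Transkoder-style template script could then execute. The spec format and
# file-naming rule are illustrative assumptions, not Colorfront's schema.
from itertools import product

FORMATS = [
    ("UHD", "HDR", "59.94p"), ("UHD", "HDR", "50p"),
    ("HD",  "SDR", "59.94p"), ("HD",  "SDR", "50p"),
    ("HD",  "SDR", "59.94i"), ("HD",  "SDR", "50i"),
]

def build_job_list(animations, formats=FORMATS):
    """One render job per animation per delivery format, with a predictable name."""
    jobs = []
    for anim, (res, dyn, rate) in product(animations, formats):
        jobs.append({
            "source": f"{anim}.mov",
            "output": f"{anim}_{res}_{dyn}_{rate.replace('.', '')}.mxf",
            "resolution": res,
            "dynamic_range": dyn,
            "frame_rate": rate,
            "preroll": "10s", "clock": True,   # per-version extras would come from the spec
        })
    return jobs

animations = [f"team_{n:03d}" for n in range(1, 4)]   # stand-in for the 240 animations
jobs = build_job_list(animations)
print(len(jobs), "deliverables, e.g.", jobs[0]["output"])
```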
“You can render-out HFR/UHD/HDR deliverables from a variety of editing and grading systems, but there are not many that can handle the simultaneous combination.” JACK JONES, CTO, ROUNDTABLE POST PRODUCTION
PICTURED ABOVE: Colorfront’s work on this summer’s FIFA World Cup
“In operation on our various projects, one of the beauties of Transkoder is that you can stack up the different versions of a particular asset, each with their specific image, editorial and audio processing needs, and hit render,” says Jones. “The ability to pick your distributor, such as a Netflix IMF or Disney or HBO, on the mastering page makes it super easy to know you’re delivering accurately. Transkoder processes them all simultaneously, and deposits the result in a bespoke folder.” Jones adds: “It’s astonishing to see Transkoder’s level of performance on HFR/UHD/HDR material, as there are few alternative systems that can do that. The fact that Transkoder also enables you to do HDR analysis, with a full QC compliance report, is just brilliant.” Whilst Jones is full of praise for the technical support provided by Colorfront – particularly from scripting expert Szlot Molnar, and CTO Bill Feightner – he also appreciates the strategic, forward-looking development that underpins Transkoder. “Several years ago, Colorfront correctly identified the industry movement towards IMF/DCP authoring and
packaging, HDR mastering, the translation between various HDR flavours and multiple light-level versions,” Jones explains. “They kept on top of the latest developments and the result is that Transkoder is an amazing piece of software. Running on commodity hardware, it’s lightning fast, with a broad range of creative/editorial tools, unmatched colour and image science, and a neat visual interface that allows you to line up all of your deliverables to happen simultaneously.

“By contrast, other current alternative solutions are built on old, often inflexible code, developed five or ten years ago, and some lock you into expensive proprietary hardware and software upgrades.”

Jones says that while Roundtable Post originally took something of a punt on Transkoder, it has actually become an indispensable central hub, used for a variety of ingest, mastering and versioning tasks. Jones concludes: “There are absolutely no negatives to Transkoder, and our experience with it has proven to be nothing but positive. There is no way we could have done the FIFA project without Transkoder.” n
TECHNOLOGY
PREPARING FOR THE FUTURE OF UHDTV AND OBJECT BASED SOUND
Jenny Priestley hears from the EBU about their UHD tests with HFR, HDR and NGA during this summer’s European Championships
At this summer’s European Championships the European Broadcasting Union (EBU) worked with its members and partners from across the media technology industry to carry out tests using UHD with High Frame Rates. The aim of the trial was to shoot, process, record, and distribute live Ultra High Definition (UHD) content, with High Frame Rates (HFR), High Dynamic Range (HDR), and Next Generation Audio (NGA).

In Berlin, a 2160p100 HLG/BT.2020 production workflow was set up in collaboration with EBU members BBC, France Télévisions, IRT, RAI and ZDF, and a range of technology partners.

Dr Hans Hoffmann, senior manager of the Technology and Innovation department at the EBU, says planning for the project began long before the first event took place. “My colleagues Paola Sunna and Frans De Jong were leading the trials for the UHD production proof of concept. They approached us with the idea about six to nine months ago,” he explains. “At that point Eurovision Media Services was already heavily engaged in the broadcast of the European Athletics Championships. Our department and partners carried out several other innovation trials at the event too – such as a live, data-driven tool for journalists and commentators, and some 5G tests.”

“For UHDTV, we had done a lot of tests on the individual parameters
of UHD and the individual parameters of object-based sound production,” Hoffmann continues, “so we already understood the value of resolution, HDR and NGA.”

The EBU viewed the project as a way of taking their tests out of the lab and into a real live production: “It gave us the opportunity to conduct a proof of concept where we could really try to identify the value and the challenges brought by all of the different parameters that UHDTV offers, as well as object-based sound,” says Hoffmann. “It also gave us the opportunity to work together with our Members and a large number of industry partners to realise such a UHD trial. What was really remarkable is that we have industry partners who of course compete on the free market, but this was a trial where everybody worked together to realise something which has never been done before.”

Once the EBU had approved Sunna and De Jong’s project, they began discussions with Members, the industry, and Eurovision Media Services, the event’s host broadcaster, about the realities of the project, including whether there would even be enough space to include extra cameras. “We had four cameras that we used for the project and a couple of microphones, which all needed to be placed on the field in the best positions available, close to operations, but we couldn’t interfere with the real operations of the event,” explains Hoffmann.

All of the footage the tests generated was seen live at the event by those in the control room container, as well as in Glasgow and, by EBU member RAI, in the Aosta Valley. Guests were also invited to come along and observe the innovation trial. The equipment used (see box out) is close to being released to the market, but some is still in the prototype phase.

“We wanted to realise an end-to-end chain and we also used a display which can show Higher Frame Rate and Higher Dynamic Range and can show the different formats that we were generating,” says Hoffmann. “We had displays located in Glasgow and Berlin, and some of the content was picked up by RAI.

“This wasn’t a typical broadcast that went around the world, because a lot of the devices we employed are not yet available to the market. In fact, it was the first time UHD, HDR, HFR and object-based sound had been distributed in such a way, anywhere. It really was very forward-thinking.
It was not a research project, it was a real proof of concept and a success. The EBU members could see it and all the partners on site could see it.”

Although the team at the EBU have a huge amount of data and information to work through, there are already results they can shout about. “First of all, we can say the multi-format trial that we conducted, taking the feeds from the cameras and microphones to different locations, has been a success and has worked,” Hoffmann says. “Number two, we have also learned a lot about the operational consequences of doing Higher Dynamic Range and object-based sound. For example, we decided on day one to change our shading according to Standard Dynamic Range and on day two, we shaded for High Dynamic Range.

“So we experimented with how everyone from camera operators to directors would deal with this new medium of HDR, in order to understand the operational consequences, what training is required, and whether broadcasters will easily be able to adopt this type of new format.”

The next step will be the more scientific analysis. Hoffmann says the EBU will employ a “very formal test methodology where we need more detailed analysis. That will be the next part of the project, looking at the particular implications of High Frame Rate etc.”

The plan is to reveal the operational implications “very soon” so that EBU Members benefit directly in their strategy development from the trials. “I hope we’ll be able to release our first findings in the next couple of months. In terms of the formal tests of video quality, audio quality and so forth, that may take a bit longer,” says Hoffmann. “It will be up to the different groups within the EBU technical community to analyse the results, because we want to be absolutely correct when we release our findings, ensuring they are based on solid scientific analysis.”

The big takeaway from the project so far seems to have been the work the EBU has done with its partners, particularly their willingness to be involved and the feedback they have already given as to the impact of using these technologies on real operations. “All of the partners were willing to collaborate with us,” says Hoffmann. “When we explained what we were planning and the value of what the tests would generate, all the partners immediately got on board. This is really shaping the future of what we do here and ultimately the direction of the market and value for our Members, so they were all very keen to work together with us.”

It all sounds incredibly positive so far, but testing new workflows and technologies can’t have been all plain sailing. Did the team encounter any technical difficulties? “Absolutely, there were a number of challenges. If it was easy then we wouldn’t have to do it!” laughs Hoffmann. “We realised that the whole infrastructure we have traditionally used has been SDI-based, and when you talk about Higher Frame Rate, Higher Dynamic Range UHDTV, you have a lot more cables which you have to wire, and that is a huge challenge. That’s also why the industry is moving towards IP. For this test IP was out of scope, but hopefully we’ll be able to use it in the future.”

“We have found the project really exciting,” Hoffmann continues. “Some of the production team have told us they don’t want to see
any pictures without HDR anymore, because they got so used to seeing content in that format during the project. But of course we were working in ideal conditions.

“Also, the bitrates we used in the distribution were really impressive: 28 megabits per second for the UHD 2160p100 signal (HDR HLG), 15 megabits per second for the 1080p100 signal (HDR HLG), and then the standard 1080p50 high dynamic range at eight megabits per second. And it provided really good images. It’s now up to us and our partners to analyse all these different recordings and understand the value that the different formats generate.”

To conclude, Hoffmann stresses that at the end of the day, the project was all about the user, or viewer, experience and the cost implications. “Yes, we are all engineers and we like to play with parameters, but ultimately it is about creating an immersive user experience for the consumer as a whole and to manage costs for broadcasters through efficient workflows.” n
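As a rough sanity check on the bitrates Hoffmann quotes, a few lines of arithmetic (assuming constant bitrate and ignoring container and transport overhead) translate each rate into an approximate data volume per hour of programme:

```python
# Back-of-envelope data volumes for the distribution bitrates quoted above.
# Assumes constant bitrate; container/transport overhead is ignored.
BITRATES_MBPS = {
    "2160p100 HLG (UHD HDR)": 28,
    "1080p100 HLG (HD HDR)": 15,
    "1080p50 HDR": 8,
}

for label, mbps in BITRATES_MBPS.items():
    gb_per_hour = mbps * 3600 / 8 / 1000  # Mbit/s -> decimal GB per hour
    print(f"{label}: ~{gb_per_hour:.1f} GB per hour of programme")
```

At those rates, an hour of the 2160p100 HLG feed comes to roughly 12.6 GB, against about 3.6 GB for the standard 1080p50 HDR service.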
THE TECHNOLOGY PROVIDERS WHO WORKED WITH THE EBU AND BROADCASTERS ARE:
n ATEME encoding and remixing technologies for the 1080p100 feed with all NGA flavours
n Dolby immersive and object-based audio technology (AC-4)
n Ericsson contribution technology (HEVC, MPEG-H Audio) for 2160p50
n EVS server for near real-time 2160p100 editing
n Fraunhofer IIS immersive and object-based audio technology (MPEG-H Audio)
n Jünger Audio 3D audio monitoring and authoring units (MPEG-H Audio)
n KaiMedia encoding technology (HEVC, MPEG-H Audio) for 2160p50
n Klang and Areitec 3D audio monitoring over headphones
n LG prototype OLED TV sets capable of decoding the live UHD streams with HFR (2160p100), HDR (HLG & PQ), and NGA (Dolby AC-4)
n NTT encoding technology for the 2160p100 signal
n b<>com scene-based production tools and Qualcomm compression technology (MPEG-H Audio)
n Rohde & Schwarz servers for uncompressed recording
n Schoeps and Areitec ORTF-3D microphone outdoor set
n Sony XVS video switcher, PWS Live production server and storage solutions configured for a 2160p100 video production workflow and HDR acquisitions, plus one complete Sony UHD camera and 2160p50 picture monitoring
n Solid State Logic System T S300-32 mixing console with 3D panning and Network I/O
TECHNOLOGY
THE INTERSECTION OF TODAY’S IP WITH TOMORROW’S PRODUCTION
by Jason Pruett, product marketing manager, NewTek
Two letters and a world of implications for the video industry at large: IP. As manufacturers and developers in the video space continue to push the limits of the emerging technology, end users are innovating themselves, advancing their respective fields and achieving successes with business models that their predecessors would not even dare dream about. Indeed, the video production environment and the bigger universe that surrounds it are rapidly changing face.

With NDI, SMPTE ST 2110 and other standards gaining not just buy-in but widespread traction, one could be forgiven for believing that the IP revolution is nearing completion – that it is now simply a matter of time, market maturation and fiscal readiness before the worlds of broadcast, sport and beyond replace their baseband investments and never look back. But where viewing through this lens would present the enablement of interoperability, and the subsequent proliferation of IT-based infrastructure and operations, as the end destination, it is merely the first milestone of the journey into a new frontier.

For as finished as interconnectivity within the context of a control room, OB vehicle, or even an entire facility or campus using networking as the backbone feels, we would be remiss to simply replicate point-to-point digital workflows, tracing over the sprawling map of the SDI production framework, while the unique advantages of IP go untapped. Fortunately, the IP-focused vendors and producers at the forefront of the revolution – innovators alike – are not settling for just reconnecting the dots, but pressing forward to reshape the industry as we know it.

LONG-DISTANCE WORKFLOWS

The clichéd visual of the entire production crew, along with all of their equipment, huddled together in a dark room and bathed in the
glow of illuminated instruments will live on for some time. But, just as video calling and screen sharing have become commonplace in the day-to-day conduct of business, soon too will sending, receiving and processing actionable real-time content in multiple locations become a mundane activity in the context of a live production workflow. Solutions providers simply need to adjust their mindset from clutching tightly to what they’ve always known to trusting what IP can do.

Think about it – is a server rack not a condensed version of a control room, which itself is a microcosm of a facility full of technology? So, if deploying IP for interconnectivity and interoperability is the end game, what is holding the concept back from being extrapolated from a few feet to a few miles and much further beyond, but the limitations of equipment that remains in the proverbial box?

PRODUCTION VIRTUALISATION

Speaking of said box, releasing the essence of a live production solution from its proprietary cage is the next consideration – and a distinct advantage of software-driven production. Beyond scalability and flexibility through code, the benefit of what amounts to a highly specialised computer program is that it isn’t beholden to meticulously engineered hardware unless it is designed to be. This means it can be moved about the ether – called upon as needed, where needed, and within whatever data centre has vacancy – as well as combined with other platforms to create a virtual ecosystem.

While a known and proven concept in the world of IT, it is worth pointing out that virtualisation for live production currently requires a unique constellation of leading-edge technology. But the point is, with IP, that door is now open to broadcasters and producers – and not merely for ancillary operations supporting the core process of program creation.
‘Continuing down the road being paved by software-driven production, as powered by computers and networks, is where the real opportunities lie.’
TECHNOLOGY
PICTURED ABOVE: Jason Pruett
CLOUD-BASED IMPLEMENTATIONS

Ultimately, exploring the paradigm of decentralised video production takes the mind beyond the control room and the facility, to beyond the site entirely. Indeed, legitimate cloud-based production will soon be on the horizon, with software-driven, IP-native video solutions free of their black boxes and living in external data centres, readily and remotely accessible by entitled users anywhere in the world, and with sources and destinations similarly distributed.

In fact, it’s hard to argue that the cloud business model that has taken the world by storm, allowing other media and technologies to flourish, isn’t inevitably where this IP movement is headed – perhaps, by design. And by all accounts, the groundwork is being laid right now. It was, after all, not that long ago that the consensus next big thing and the elements comprising it were subject to smirks and scoffs from the field.

COMPUTERS, SOFTWARE, AND NETWORKS

It’s no secret that computer-based production was once considered a novelty, software a mere accessory to the big iron doing the lion’s share of the job, and IP networks the domain of another department within
the organisation. But, as the modern world evolved, and the amazing things that could be done through these three technical components working together were seen and experienced in other industries, it was inevitable that those in the video space would find a way to follow suit.

In the last decade-plus, production with computers, software and networks has advanced by leaps and bounds – now manifesting itself as more than a force to be reckoned with: the engine driving the transformation of an entire industry. While it is true that the established practices, principles and long-standing ways of working that the industry has relied upon for decades remain valid, and stand to benefit immensely from IP interoperability, continuing down the road being paved by software-driven production, as powered by computers and networks, is where the real opportunities lie. n
‘Releasing the essence of a live production solution from its proprietary cage is the next consideration.’
DATA CENTRE
SEIZING THE ‘IP-PORTUNITY’
CC Group’s Duncan McKean considers the challenges ahead in the transition to IP
As the dominant communications protocol globally, IP represents a common delivery mechanism that is having a dramatic effect in levelling the playing field for content service providers. Its standardisation, reliability and ubiquity make it a safe, inexpensive and obvious platform over which to deliver media. As long as a subscriber has a decent internet (IP) connection, reaching them with a wealth of IP-centric services has become considerably easier.

According to the ITU, in 2016 there were 1.9 billion fixed broadband subscriptions globally (53 per cent of homes have internet access), and while not all of them will have sufficient bandwidth for media services, it’s a heck of a base to build on. And let’s not forget that IP also extends beyond the home to IP-addressable mobile devices, too.

IP’S OPPORTUNITY KNOCKS?

The gradual slide to IP-based networks has two major effects. Firstly, IP heralds an almost once-in-a-lifetime opportunity for traditional service providers to reinvent themselves, as its relatively ‘flat’ structure and standardisation lend themselves well to customisation of the services that run on it. But secondly, its ubiquity also allows many more “new” service providers to encroach on traditional providers’ established market bases.

IP’s biggest single influence is – or should be – that it allows service providers to substantially differentiate their offering and capture significant latent opportunity. The next question, then, is how?

This time last year, we surveyed a few dozen broadcast service providers to dig deeper into how they were planning for their transition to IP. In our survey, 48 per cent of respondents believed that IP is already dominant, or will be by 2020.

IP might be a great leveller, but it will not suddenly make everything OK. There are some big challenges to overcome, which fall into two main categories: making networks work better so that they cost less to run; and improving the customer (viewer) experience so that they want to spend more time and money with that network. This was reflected in the survey, where the three biggest challenges cited in transitioning to IP were an ability to offer differentiated TV services (62
per cent of respondents), monetise them (52 per cent), and maintain loyalty (48 per cent).

WHERE, WHO, WHAT AND HOW?

The forces that affect how service providers evolve their offers are many and varied, but they fall into several main areas.

First, devices – the “where”. Users are no longer
restricted to a monolithic TV set in the corner of the living room and will consume content on a range of devices. These devices include smart TVs, mobile devices like smartphones and tablets, IP-connected games consoles, and laptop (and even desktop) computers. It’s not too far-fetched, either, to suggest that more, as-yet unimagined devices will appear as form factors change – like entertainment systems in cars, and even internet-enabled smart watches.

The problem here is that broadcasters are losing track of viewers as they switch between devices – and, worse still, use two or more devices simultaneously. The challenge is knowing how and where to serve content, and in what format. When we asked broadcasters about their biggest content consumption challenges, the number one answer was “an ability to deliver content to multiple devices in the right formats” (43 per cent).

Second, and directly related to devices, is user behaviour – the “who”. Because of the plethora of devices – or, basically, options – now available, viewers will consume content as it suits them, wherever they are, and whenever that is. And it’s no longer a case of consuming whatever content is available, but of demanding precisely what content they wish to be served. The effect on linear TV has been obvious, and the on-demand nature of consumption means that service providers need a strong arsenal of content to satisfy expectations, or else risk increased churn. Here, personalisation and recommendation come in, allowing viewers to specify what kind of content they want to see and offering them something they will like (and pay for). In short, “know your customer.” Unsurprisingly, the second most common response around content consumption challenges was “an ability to extract data to enhance TV services” (38 per cent).

Third, the type of content – the “what”. Having a huge content library is one thing, but knowing how to handle it is another. Much of the content that has sustained linear and traditional broadcasting is “live”, such as news and sports. In these instances, the ability to deliver content coherently and as close to real time as possible is paramount, so that viewers aren’t left behind. Getting live content into a network in the first place is also a notable challenge, as there are layers of production facilities and third-party broadcast systems to integrate. When we asked about content distribution, 45 per cent of respondents cited “an ability to ingest and distribute content on demand” as the biggest challenge in this category.

On the other side of the coin, the non-linear/on-demand paradigm has been driven in part by content with big production values, so broadcasters might need
to consider the ability to deliver UHD streams and multichannel sound to satisfy the big-screen viewer experience. When discussing content acquisition, 32 per cent of respondents – still a ‘significant minority’ – said that “an ability to process multiple types or formats of data from content sources” is a challenge.

Fourth, service integration – the “how”. No single service provider has all the jigsaw pieces, and dealing with other service providers to pool content and resources is a necessary prerequisite. IP allows the integration of multiple service streams, which in the case of some pay-TV providers also includes “on-deck” content apps (YouTube, Netflix et al) alongside their core TV services. It’s no surprise, then, that 36 per cent of respondents see “an ability to deliver new services over existing bandwidth/access services” as a significant content distribution challenge. n