Ground-Cloud-Cloud-Ground: Creating common practices for cloud production
Recently there has been a lot of talk about producing television in the cloud, but in truth most productions are still done on-premises in physical facilities. While this is a time-tested approach, a lot of equipment sits idle much of the time. In contrast, the cloud allows broadcasters to scale up and down as they need, without having to purchase and maintain hardware sized for the worst-case peak.
The key question that broadcasters face in their cloud production transition is: How can I do a complex multi-vendor production completely in the cloud, with comparable latency to on-premises, and get it to a viewer? There are many technical challenges that the Ground-Cloud-Cloud-Ground (GCCG) group in the Video Services Forum (VSF) is looking to address.
“I’m already doing live production in the cloud”
Many people think they are already doing live cloud production, but this is not entirely true. For one, many of these offerings are single-vendor monolithic applications, which run into the ‘jack of all trades, master of none’ problem: the encoding in these products, for example, is not as good as that from dedicated encoder manufacturers. While some multi-vendor solutions exist, they mostly use proprietary compressed transports such as NDI. What we need instead is a vendor-neutral exchange mechanism for video and audio.
There are often claims of ST 2110 being done in the public cloud. That is not yet possible, as no public cloud offers the full Precision Time Protocol (PTP) that it requires. The public cloud is also, by its very nature, a shared resource, which limits the control users have over it. There will be packet loss, sometimes quite substantial, so it is important to have a protocol that is tolerant of packet loss and other issues. In many cases this protocol could be baked into the cloud provider, as with the AWS Scalable Reliable Datagram (SRD) protocol, whose underlying workings are not exposed to the user.
What do we actually want?
Fundamentally, for live production to work in the cloud, we must get data from the cloud back to the ground and have it feed into traditional broadcast workflows, whether that is SDI, ST 2110, satellite, cable or over-the-air terrestrial networks (usually based around MPEG-TS). For this, we need to establish an open standard for multi-vendor cloud production. We also require an uncompressed exchange format and an agreed mechanism for going from ground to cloud and back. In any case, ST 2110 in the public cloud is not the right approach for this environment: it is linear and lockstep, which does not make sense in a software world. In some cases cloud instances can process data faster than real time, in other cases marginally slower; this means that with the right buffering and timing, a real-time output can be generated.
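To make that buffering idea concrete, here is a minimal, hypothetical sketch (not part of any GCCG or VSF specification): cloud processing delivers frames at a variable rate, and a small buffer plus a paced release loop turns that back into a fixed real-time output cadence. The frame rate, buffer depth and function names are illustrative assumptions.

```python
# Hypothetical sketch: frames may arrive from cloud processing faster or
# slower than real time; a small buffer plus a paced release loop turns
# that variable arrival rate back into a fixed real-time output cadence.
import queue
import time

FRAME_INTERVAL = 1 / 50          # assumed 50 fps output cadence
BUFFER_DEPTH = 10                # frames of headroom to absorb jitter

frame_buffer = queue.Queue(maxsize=BUFFER_DEPTH)

def receive_frames(source):
    """Push frames into the buffer as the cloud delivers them (bursty)."""
    for frame in source:
        frame_buffer.put(frame)  # blocks if processing runs ahead of real time

def play_out(sink):
    """Release one frame per FRAME_INTERVAL, regardless of arrival jitter."""
    next_deadline = time.monotonic()
    while True:
        frame = frame_buffer.get()          # waits if processing falls behind
        sink(frame)
        next_deadline += FRAME_INTERVAL
        time.sleep(max(0.0, next_deadline - time.monotonic()))
```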
Transport in the cloud is hard
While producing and transporting video content in the cloud brings a number of efficiencies, it is still extremely difficult. The cloud is lossy, so if we want to do uncompressed transport we have to run it over some sort of protocol that handles that loss. Such a protocol provides the guarantees broadcasters rely on, covering throughput, reliability and latency.
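As a rough illustration of why a protocol layer is needed, the hypothetical sketch below splits a frame into sequence-numbered datagrams so the receiver can detect gaps and recover them, whether by retransmission or forward error correction. The header layout and sizes are assumptions for illustration only; they do not describe any real protocol such as SRD or CDI.

```python
# Hypothetical sketch: uncompressed essence split into sequence-numbered
# datagrams so a receiver can detect gaps and request recovery.
import struct

HEADER = struct.Struct("!IHH")   # sequence number, fragment index, fragment count

def packetise(frame_bytes, seq, mtu=1400):
    """Split one frame into MTU-sized datagrams, each with a small header."""
    payload_size = mtu - HEADER.size
    chunks = [frame_bytes[i:i + payload_size]
              for i in range(0, len(frame_bytes), payload_size)]
    for idx, chunk in enumerate(chunks):
        yield HEADER.pack(seq, idx, len(chunks)) + chunk

def detect_loss(received_headers):
    """Return missing (sequence, fragment) pairs so they can be re-requested."""
    by_seq = {}
    for seq, idx, count in received_headers:
        by_seq.setdefault(seq, (set(), count))[0].add(idx)
    missing = []
    for seq, (got, count) in by_seq.items():
        missing.extend((seq, i) for i in range(count) if i not in got)
    return missing
```

A real transport would also need pacing, congestion handling and a bounded recovery window to stay within a latency budget, which is exactly the kind of behaviour that has to be agreed for multi-vendor use.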
How do we solve it?
We need to make use of the efficiencies the cloud already offers for moving large amounts of data. AWS Cloud Digital Interface (AWS CDI) is an agreed way to exchange data between Amazon cloud instances, and it handles many of these challenges. However, some parts need GCCG input, and one area that needs to be addressed is the delivery of a more detailed timing model: something that sounds simple but is actually a very challenging technical problem. The VSF GCCG working group is trying to solve these challenges. Ultimately, I see that work combining with solutions such as AWS CDI to make fully cloud-based live production possible.
(Note: These are the author's personal views and not the views of VSF.) Kieran Kunhya is CEO of Open Broadcast Systems