This Program Guide takes you on a tour of new products introduced for TV, film, video, streaming, radio, audio, pro AV and IT professionals. The Best of Show Awards are for products introduced at IBC2024.
This digital guide features all the nominees and winners that participated in the Future awards program. It offers an excellent sample of new technology on the market today and allows companies to tell you in their own words why they believe a certain product is noteworthy.
Five Future publications participated in the awards program: TVB Europe, TV Tech, Radio World, Installation and ITPro. Manufacturers paid a fee for each entry and could enter multiple products. Winners were selected by panels of professional users and editors based on descriptions provided via the nomination form as well as on judges’ inspection at the convention.
Turn the page to learn more and thanks for reading!
Absen
AbsenLive Saturn (SA) Series
The AbsenLive Saturn (SA) Series is the ultimate rental LED display, delivering stunning HDR visuals to create immersive and captivating experiences for any broadcast and indoor application. Available in two models, SA2.6 and SA3.9, the LED panels offer exceptional performance, versatility and easy maintenance to fulfil all requirements, including the ability to cater to flat, curved, ceiling, hanging or stacking display configurations. Its durable and lightweight composite structure, fast-locking system with magnets, interchangeable modules and power supply box make it easy to install and operate, ensuring the SA Series provides an all-round reliable and flexible solution for all LED eventualities.
LIGHTWEIGHT DESIGN
The SA Series boasts a lightweight 5.5 kg per panel and is complemented by a durable, high-strength composite structure with fully automatic corner protection for added cabinet strength.
HIGH PERFORMANCE
Both SA Series panels offer premium performance qualities.
The SA2.6 panel features 1,500 nits of high indoor brightness, alongside 16-bit high grayscale, a 7,680 Hz high refresh rate and a 1/12 scan ratio, creating a one-of-a-kind stage presence.
The SA3.9 differs slightly, featuring 1,200 nits of high indoor brightness and a 1/8 scan ratio; however, it still boasts an incredible 16-bit high grayscale and 7,680 Hz high refresh rate.
The SA Series supports High Dynamic Range (HDR) display, allowing it to produce more realistic and colourful images. The wider brightness range, richer colours and fine image details work hand in hand to deliver first-class visual content.
Visual clarity is supported further through the SA Series' compatibility with Brompton and NovaStar control systems, which extends AbsenLive's partnership capabilities and delivers captivating graphics both in person and through the camera lens.
MULTIPLE APPLICATIONS
A wide range of creative capabilities can be explored with the SA Series through its range of horizontal installation options, including flat display, curved (–2.5, +2.5) or ceiling display. In addition to the display type offerings, the SA Series presents multiple installation methods, either hanging (ring or hook) or stacking, to provide clients with the flexibility to tailor setup according to their specific requirements.
EASY MAINTENANCE
Swapping modules has never been easier than with the AbsenLive SA Series. The LED panels use identical, interchangeable power supply boxes and modules, allowing easy access for front and rear maintenance, module swaps and reaching the power box. The small module brings mighty benefits: lower repair and maintenance costs, a longer life span, better heat dissipation and a higher module turnaround and availability ratio.
QUICK INSTALLATION
The connecting magnets in the SA Series' fast-locking system, with its instant feedback mechanism, are vital to enabling quick setup and cabinet connections. In addition to fast setup, the SA Series also offers an exclusive compact design that enables stacking of up to three levels, available in both 5-in-1 and 10-in-1 flight cases.
The SA Series is the ultimate solution for any broadcast or corporate event, combining its high-quality LED display features with its flexible configuration, easy installation and quick maintenance.
Accedo
Accedo Insights
To be successful in today's highly competitive video market, OTT services need to achieve lower churn rates and higher engagement. At the same time, sustainability is becoming increasingly important across the industry, but it can be challenging to assess the impact. All of this requires the right technology and an optimized user experience, which is only possible with full observability throughout the video service.
Launched in February 2024, Accedo's observability service, Insights, provides video service providers with performance and operational insights through real-time monitoring and advanced analytics.
This includes the ability to better understand their audience: valuable insights into the content that resonates most enable informed decisions about content acquisition, production and promotion, ensuring that a catalog aligns closely with the tastes and preferences of its subscribers.
Insights also enables OTT providers to fine-tune engagement strategies by understanding the viewing habits and engagement levels of their audience. Insights can identify patterns in peak viewing times, binge-watching behaviors, and user retention rates, enabling video providers to optimize release schedules, promotional campaigns, and recommend personalized content to individual users.
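As a rough illustration of the kind of pattern detection described above, peak viewing hours can be derived from session logs. The session data model and field names here are assumptions for the sketch, not Accedo's actual schema:

```python
from collections import Counter
from datetime import datetime

def peak_viewing_hours(sessions, top_n=3):
    """Rank the hours of day with the most session starts.

    `sessions` is a list of dicts with an ISO-8601 'start' timestamp,
    a hypothetical schema standing in for a real analytics feed.
    """
    counts = Counter(
        datetime.fromisoformat(s["start"]).hour for s in sessions
    )
    return [hour for hour, _ in counts.most_common(top_n)]

sessions = [
    {"start": "2024-09-13T20:15:00"},
    {"start": "2024-09-13T20:45:00"},
    {"start": "2024-09-13T21:05:00"},
    {"start": "2024-09-14T20:30:00"},
]
print(peak_viewing_hours(sessions, top_n=2))  # [20, 21]
```

The same aggregation generalises to binge detection or retention cohorts by grouping on other session fields.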
At the same time, Insights helps identify growth opportunities by surfacing emerging genres, niche markets or untapped demographics that exhibit significant potential for expansion. OTT providers can then direct their content acquisition strategies, invest in original productions or explore new partnerships that cater to their specific audiences, expanding their subscriber base and driving long-term growth.
The latest version, which launched ahead of IBC, sees the addition of data points relating to consumer engagement and experience, as well as a series of sustainability metrics. Leveraging Accedo's observability software, the additional capabilities are all AI-based, enabling deeper, faster and more effective data analysis. The new capabilities will empower video service managers to take data-driven actions to reduce churn and increase engagement, while also improving the sustainability of their services.
The addition of advanced customer engagement metrics to the observability service, via the integration with discoverability specialists Jump, provides deep insights into campaign impacts, customer acquisition and churn prediction, pricing trends and content engagement patterns. Additionally, it offers the flexibility to define custom metrics tailored to customers' unique business needs. This enhancement empowers businesses to make data-driven decisions, improve customer satisfaction, and achieve their strategic goals more effectively.
Furthermore, the addition of sustainability metrics to the solution has been achieved through integration with Humans Not Robots' HNR to Zero, an analytics platform that enables data-heavy companies to optimize their technology operations for increased efficiency and sustainability. Social listening capabilities have been enabled through integration with Accedo's in-house sentiment analysis technology. This functionality reviews user feedback and comments across public and shared sites such as app stores and social platforms. It provides analysis of content, whether positive or negative, and assigns a risk score, to help video providers address service and process issues.
It is anticipated that Accedo's award-winning observability service will be further expanded to cover additional video service functions in due course.
Agile Content
Agile CDN Director
Agile CDN Director is a new solution by Agile Content, first deployed by Telenor Sweden, one of the country's largest mobile, TV and broadband service operators. Despite being launched just over a year ago, it has already proven highly effective at blocking pirates and other illegitimate access. Illegal streaming services are a growing problem not just for telco/ISP operators but for society as a whole, creating a significant economic impact on all types of content providers.
By deploying Agile CDN Director, operators such as Telenor can fight illegal streaming services by implementing security barriers and increasing transparency and authentication checks in their systems. According to Telenor's team, Agile CDN Director provides a completely new level of security, blocking up to 150,000–200,000 invalid requests each day trying to access the CDN.
This is done via its capability to provide enhanced visibility of every request hitting the CDN at any given time, making it easier to identify invalid requests and deny them access. This includes the use of common access tokens, which show that customer requests are coming from Telenor's authorised backend layer and not from outside agents.
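The token check can be pictured with a simple HMAC-signed token. This is a generic sketch: the actual Common Access Token format and claims used by Agile CDN Director are not documented here, and the path/expiry scheme below is an assumption.

```python
import hashlib
import hmac
import time

SECRET = b"shared-backend-secret"  # known only to the operator's backend

def sign_request(path: str, expires: int) -> str:
    """Backend issues a token binding a content path to an expiry time."""
    msg = f"{path}|{expires}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def is_valid(path: str, expires: int, token: str) -> bool:
    """Director-side check: reject expired or forged tokens before the
    request ever reaches a CDN cache."""
    if expires < time.time():
        return False
    expected = sign_request(path, expires)
    return hmac.compare_digest(expected, token)

exp = int(time.time()) + 300
tok = sign_request("/live/ch1/master.m3u8", exp)
print(is_valid("/live/ch1/master.m3u8", exp, tok))    # True
print(is_valid("/live/ch1/master.m3u8", exp, "bad"))  # False
```

Requests lacking a valid token, the bulk of the 150,000–200,000 daily invalid requests cited above, would be rejected at this layer.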
This additional security layer in turn frees up capacity and provides better network availability and thus bandwidth for audiences consuming video services. So, not only does the Agile CDN Director act as a shield for operators, it also ensures that all users receive the best QoE. From this point on, the solution acts as the entry point for every OTT video delivery request and assigns the session to the optimal cache based on factors like network topology, content popularity, user location, current load and capacity in the CDN, among others.
Agile CDN Director supports offloading to third-party CDNs to handle peak loads, for example during popular live sports events. It can also manage and control delivery from multiple CDN vendors, with the capability to select a CDN, a PoP or a streaming server based on current capacity and QoE, to mention a few examples. With in-stream switching it is also possible to switch seamlessly between these multi-CDN endpoints in real time during an ongoing session, which means that even if a failure occurs, viewers are instantly redirected to another server, PoP or CDN without noticing any drop in quality.
Some other Agile CDN Director benefits are:
• Cache-agnostic: Managing private, cloud and hybrid CDN and optimising them for high-quality video delivery. Optimisation of prior investments by re-using existing CDN caches.
• Adaptable routing logic and user-friendly APIs, which enable seamless integration with external systems. This supports redirection via HTTP or DNS and can function as a content steering router.
• Ease of operation and configuration of routing rules, plus comprehensive monitoring dashboards to enable data-driven operations and troubleshooting.
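The content-steering role mentioned above can be sketched as a service that returns a DASH-IF-style steering manifest ranking CDN pathways by current load. The scoring logic and pathway names are illustrative, not Agile's actual algorithm:

```python
def steering_manifest(pathways, reload_uri, ttl=300):
    """Build a DASH-IF content-steering style response that orders
    CDN pathways by remaining capacity (lowest load first).

    `pathways` maps a pathway ID to its current load ratio (0.0-1.0).
    """
    ranked = sorted(pathways, key=lambda p: pathways[p])
    return {
        "VERSION": 1,
        "TTL": ttl,                  # client re-fetches after TTL seconds
        "RELOAD-URI": reload_uri,
        "PATHWAY-PRIORITY": ranked,  # preferred pathway first
    }

loads = {"cdn-private": 0.92, "cdn-cloud-a": 0.41, "cdn-cloud-b": 0.63}
m = steering_manifest(loads, "https://steering.example.com/manifest")
print(m["PATHWAY-PRIORITY"])  # ['cdn-cloud-a', 'cdn-cloud-b', 'cdn-private']
```

A real router would fold in network topology, content popularity and user location alongside load, as the product description notes.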
AJA Video Systems
KONA IP25
KONA IP25 is AJA's next-gen SMPTE ST 2110 IP video and audio I/O card, designed to address the needs of modern broadcast, TV/film production and other professional environments requiring reliable, uncompressed video and audio transmission over IP networks. With 10/25GbE SFP connectivity and multichannel UltraHD support, KONA IP25 is tailored to support an extensive range of applications across broadcast, film, truck, venue, studio, school, house of worship, greenfield station builds and post production, as well as AJA Developer Partner solutions. The 8-lane PCIe Gen 4.0 card seamlessly integrates into SMPTE ST 2110 ecosystems, ensuring full support for evolving IP standards, with true cross-platform compatibility on macOS, Windows and Linux. At launch, it is the only 25GbE ST 2110 card on the market with macOS support.
As IP video continues to gain traction, more live broadcast and production environments are transitioning to SMPTE ST 2110 for uncompressed SDI replication and the separate transmission of video, audio and ancillary data. KONA IP25 is engineered to meet these demands with a robust feature set, maintaining the reliability and longevity that AJA's KONA products are renowned for. The card is compatible with a wide range of third-party creative and streaming applications, making it a versatile tool for professionals across various sectors.
KONA IP25's configuration and control are user-friendly, accessed via the host computer's UI, with both in-band and out-of-band capabilities. Once its IP parameters are configured, the card operates similarly to other AJA KONA cards. It is supported by AJA's industry-leading OS drivers and application plug-ins or can be managed through the AJA NTV2 Software Development Kit (SDK). All SMPTE ST 2110 control functions and processing are handled onboard, including Networked Media Open Specifications (NMOS), PTPv2 (Precision Time Protocol) and the AJA SDK, ensuring no additional software is required and that the load on the host computer is minimized.
The card supports a wide range of video formats, including UltraHD, 2K and HD, with built-in HDR passthrough support. It includes a 1GbE RJ45 connector for expanded out-of-band control options. KONA IP25 can be paired with AJA IPT-10G2 and IPR-10G2 Mini-Converters for SDI/HDMI to/from SMPTE ST 2110 conversion or used alongside other AJA PCIe cards in the same server for baseband I/O access. Additionally, KONA IP25 is fully compliant with PTPv2/IEEE 1588-2008/ST2059-2 standards, ensuring precise synchronization across media networks.
KONA IP25 exemplifies AJA's commitment to innovation and quality, delivering a powerful and reliable solution for organizations embracing IP workflows. With its comprehensive feature set and compatibility with a broad range of applications, KONA IP25 is poised to be an essential tool for professionals transitioning to IP-based environments, making it an exceptional candidate for a TVBE Best of Show Award.
Alpha Networks
DeltaUX
DELTAUX: REDEFINING CONTENT DISCOVERY FOR THE BROADCAST INDUSTRY
In today's dynamic media landscape, connecting viewers with the right content at the right time is crucial. Alpha Networks introduces DeltaUX, a revolutionary solution designed to transform content discovery across streaming platforms. As leaders in innovative content management, we developed DeltaUX to meet the rising demand for personalized, dynamic user experiences tailored to diverse audiences.
A NEW STANDARD IN CONTENT DISCOVERY
DeltaUX represents a significant shift in how content is curated and presented. The name "Delta" reflects the platform's versatility and our commitment to continuous innovation. Building on the success of our OneUX app, part of the Gecko E2E video platform, DeltaUX expands its capabilities to address the complex needs of today's broadcast and streaming industries.
DeltaUX offers a highly customizable, no-code interface, enabling content providers to create dynamic applications effortlessly. Unlike traditional systems that require coding expertise, DeltaUX allows users to fine-tune every aspect of their platform's design and functionality without writing a single line of code. With over 154 adjustable parameters, this unprecedented level of customization ensures alignment with brand identity and audience preferences.
KEY FEATURES AND INNOVATIONS
DeltaUX's metadata-driven interface is a standout feature, leveraging user-based and content-based metadata for highly personalized content recommendations. This dynamic approach supports semantic queries, populating content rails based on various parameters such as top-ranked content or specific genres. The result is a user interface that adapts in real time, delivering content that resonates with each viewer.
The DeltaUX detail page configurator further enhances customization, allowing content providers to choose which metadata to display on individual asset pages. This is particularly valuable for market verticals like sports, broadcasters and SVOD platforms, where tailored content presentation is key to user engagement.
DeltaUX also integrates new monetization strategies, such as freemium models with targeted ads and FAST channels. The platform's customizable menu structure allows users to adapt the app's layout to their needs, boosting engagement and satisfaction. Additionally, DeltaUX is designed with SEO performance in mind, ensuring content remains discoverable beyond the app.
TECHNICAL EXCELLENCE
DeltaUX is both user-friendly and technically robust. Its dynamic API creator extends application functionality without additional coding, allowing for continuous innovation and adaptation to market demands.
The platform's automatic and live adaptation capabilities ensure a seamless user experience across different content types, whether VOD, live streaming or EPG. This adaptability is crucial for maintaining a consistent, high-quality user experience across various formats.
CONCLUSION
DeltaUX by Alpha Networks is more than a content discovery tool; it's a comprehensive solution for broadcasters, content providers and professional AV users in today's digital landscape. With advanced features, no-code customization and a focus on personalization, DeltaUX sets a new standard, ensuring users connect with content in meaningful and engaging ways. As we deploy DeltaUX across our customer base and introduce it to new markets, we are confident it will become indispensable for content providers aiming to enhance their platforms, drive engagement and stay competitive in the evolving broadcast and media technology landscape.
Amazon Web Services
AWS Deadline Cloud
AWS Deadline Cloud is a fully managed service that simplifies render management for teams that produce computer-generated 2D/3D graphics and visual effects for films, episodic series, commercials, video games and design.
Render farms are traditionally costly and can take months to procure. On-premises render farms have fixed capacity, which can limit the assets, shots or scenes a team can process during crunch time and lead to project delays or missed deadlines. Companies often provision their on-premises systems for peak demand, which results in oversized render farms that sit idle between delivery periods.
With AWS Deadline Cloud, it's easy to set up, deploy and scale a rendering pipeline without managing backend infrastructure; a simplified setup reduces deployment from months to minutes. Customers can scale thousands of compute instances up and down, minute to minute, so that they can render complex assets, accelerate production timelines, take on new projects and meet challenging turnaround times.
Companies can use their preferred digital content creation tools for design, modeling, animation, visual effects, compositing, finishing and rendering with the service. It provides built-in installers — or support through customer packages — for software such as Adobe After Effects, Autodesk Arnold, Autodesk Maya, Blender, Chaos V-Ray, Foundry Nuke, KeyShot, SideFX Houdini and Unreal Engine.
AWS Deadline Cloud also includes customization tools such as the Open Job Description (OpenJD) specification for customers to develop their own DCC integrations. OpenJD is an extendable framework for studios, technical directors (TDs) or developers to build DCC job submitters that describe all the requirements and options for a specified render job — compatible with multiple job schedulers. It enables users to freely integrate AWS Deadline Cloud with current tools, and custom submitters and schedulers.
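A submitter built against OpenJD ultimately produces a job template describing steps and their parameter spaces. The sketch below builds a minimal template as a plain Python dict; the field layout follows the published Open Job Description schema as we understand it, but the scene path, command name and frame range are purely illustrative:

```python
def frame_render_job(scene: str, start: int, end: int) -> dict:
    """Build a minimal OpenJD-style job template: one step whose task
    parameter space fans out over a frame range.

    The `render-cli` command and its flags are hypothetical stand-ins
    for a real DCC's headless renderer.
    """
    frame_range = {"name": "Frame", "type": "INT",
                   "range": f"{start}-{end}"}
    return {
        "specificationVersion": "jobtemplate-2023-09",
        "name": f"Render {scene}",
        "steps": [
            {
                "name": "RenderFrame",
                "parameterSpace": {
                    "taskParameterDefinitions": [frame_range],
                },
                "script": {
                    "actions": {
                        "onRun": {
                            "command": "render-cli",
                            "args": ["--scene", scene,
                                     "--frame", "{{Task.Param.Frame}}"],
                        }
                    }
                },
            }
        ],
    }

job = frame_render_job("shots/sq010_sh020.blend", 1, 240)
print(job["steps"][0]["name"])  # RenderFrame
```

Because the template is scheduler-agnostic, the same description can be handed to Deadline Cloud or to a custom scheduler, which is the portability point made above.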
AWS Deadline Cloud enables direct integrations with other AWS services, using AWS accounts, so that users retain full control over their data and usage. Storage, compute and logs are all in the customer's account. The AWS Deadline Cloud workstation application associates IAM IDC profiles with local credentials, so users can leverage on-premises workstations as well as cloud-based resources in the same pool, allowing studios to capitalize on the value of existing hardware while scaling with the cloud in a cohesive, unified way.
AWS Deadline Cloud's unique cost-tracking capabilities offer producers both visibility into and control over render spend. While usage reports provide deeper insights into spend, a budget tool template allows users to control that spend with actions such as notifications, stalling new work or stopping everything altogether, depending on the thresholds set.
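In outline, the escalating budget actions just described might look like the following. The action names and threshold percentages are hypothetical, not the service's actual API:

```python
def budget_action(spend: float, budget: float) -> str:
    """Map render spend against a budget to an escalating action,
    mirroring the notify / stall / stop behaviour described above."""
    ratio = spend / budget
    if ratio >= 1.0:
        return "stop-all-work"   # hard stop once the budget is exhausted
    if ratio >= 0.9:
        return "stall-new-jobs"  # running tasks finish; nothing new queues
    if ratio >= 0.75:
        return "notify"          # warn producers early
    return "ok"

print(budget_action(800.0, 1000.0))   # notify
print(budget_action(950.0, 1000.0))   # stall-new-jobs
print(budget_action(1000.0, 1000.0))  # stop-all-work
```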
AWS Deadline Cloud marks a significant step forward in cloud-based rendering, enabling holistic cloud-based workflows that realize the greatest benefits in cloud-based production. It provides both simplified access to powerful compute resources and the ability for deep customization, making it a prime candidate for a 2024 TVBE Best of Show Award.
AP Workflow Solutions
AP Storytelling
Every day our team supports and works hand in hand with journalists and newsrooms around the world, and these are the common challenges they are facing:
Editorial managers are under relentless pressure to cover the right stories that will drive engagement on social media, clicks on their websites, and viewers during their broadcast. This coverage must align with limited resources while also selecting stories that will inform their communities across all platforms.
Journalists and reporters must do more with less, but no journalist wants to compromise the quality of their craft while trying to avoid tool overload when attempting to create content for each platform.
Here's how AP Storytelling responds to these challenges.
AP Storytelling is developed by AP Workflow Solutions to address the complex needs of today's media professionals. This platform doesn't just adapt to the evolution of media — it drives it, transforming traditional workflows into streamlined multiplatform operations. Designed for newsrooms that demand precision and efficiency, AP Storytelling integrates cutting-edge technology with user-friendly interfaces, giving you the power to focus on what matters most: delivering compelling content.
Storytelling boosts efficiency with real-time collaboration tools that enable simultaneous work on multiplatform coverage planning, story creation and rundowns, improving communication and breaking down silos. Story variations for different channels and audiences are all visible within a single dashboard.
AP Storytelling's AI support can summarise and reversion stories based on original journalistic content. It also maintains the unique tone of a brand across different outputs, enhancing narrative consistency and engagement. The journalist focuses on their craft while the AI does the repetitive and mundane work.
AP Storytelling is architected as a cloud-optimized solution, ensuring that journalists and media teams can access the system from anywhere, on any device. It can be used independently or with AP's ENPS. This flexibility is crucial for modern newsrooms that operate on a global scale and require robust, scalable solutions that support both digital and broadcast workflows.
Ateliere Creative Technologies
Ateliere Live
Ateliere Live is a cloud-native live production and editing platform that is significantly more cost-efficient and environmentally friendly than traditional systems. Production editing, mixing, graphics and effects are integrated on a single software platform that enables REMI workflows with advanced proxy editing. The innovative design simplifies the creation of multiple production versions, enabling customers to reach more viewing platforms with relevant content.
Video stays in the GPU (Graphics Processing Unit) until just prior to distribution, instead of being repeatedly encoded and decoded through a chain of video processing steps. This unique production pipeline eliminates unnecessary processing, significantly lowering the overall power requirements per production. This results in a substantially lower environmental footprint than traditional solutions. Current customer assessments show at least a 30% reduction in power requirements.
Ateliere Live supports proxy contribution feeds and synchronized proxy-master feed timing for remote production (REMI) workflows, allowing geographically dispersed production teams to maintain frame-accurate camera selections.
All editing, events, camera selections and graphics overlays are available via a public API, allowing third-party applications to operate or interact with custom behavior. Customers can easily tailor live production parameters to fit different workflows and platforms, including Twitch, TikTok and YouTube.
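A third-party integration along the lines described above might drive camera selection and graphics over such an API. The endpoint paths, payload fields and class below are hypothetical stand-ins, since the actual API surface isn't documented here; the sketch captures requests locally instead of sending them:

```python
import json

class LiveProductionClient:
    """Sketch of a client for a live-production REST API.

    Endpoints and payloads are invented for illustration; consult the
    vendor's API reference for the real surface.
    """

    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")
        self.sent = []  # captured instead of transmitted, for the sketch

    def _post(self, path: str, payload: dict) -> dict:
        request = {"url": f"{self.base_url}{path}",
                   "body": json.dumps(payload)}
        self.sent.append(request)
        return request

    def select_camera(self, program: str, camera_id: int) -> dict:
        return self._post(f"/programs/{program}/camera",
                          {"camera": camera_id})

    def show_overlay(self, program: str, graphic: str) -> dict:
        return self._post(f"/programs/{program}/graphics",
                          {"overlay": graphic, "action": "show"})

client = LiveProductionClient("https://live.example.com/api/v1")
client.select_camera("moose-main", 7)
client.show_overlay("moose-main", "lower-third-se")
print(len(client.sent))  # 2
```

Swapping the capture list for real HTTP calls is all that separates this from a working integration against whatever the published API actually exposes.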
REAL WORLD CASE STUDY
Take the example of Swedish public service broadcaster Sveriges Television (SVT). SVT's surprisingly popular "The Great Moose Migration" uses a slow TV format, streaming nonstop for 500+ hours. Multiple versions are distributed, including a multicamera and a main camera, plus versions with unique graphics for Sweden, Finland and Germany.
With 30 robotic cameras, the program previously required deploying equipment to the woods, with five-to-six staff remaining in a cabin for the length of the production. This year Ateliere Live was used to produce everything remotely using the internet as transport. (See illustrations.)
Ateliere Creative Technologies
Connect AI
Connect AI is the industry's first end-to-end media supply chain that is driven by sophisticated, continuously learning GenAI engines that leverage machine-learning models. Connect AI automates the packaging, localization and distribution of content with real-time, AI-driven insights that enable stakeholders to optimize efficiency across the business and operation.
Connect AI provides insights to clearly understand your content inventory, identify distribution requirements and bridge any gaps. Through partner integrations, Connect AI can analyze global viewing trends and make recommendations for licensing opportunities based on demand and asset availability.
Connect AI transforms unstructured assets into organized, easily accessible data, identifying titles, metadata, languages and more, so your content is efficiently ingested and metadata is normalized for streamlined access and management. It reduces errors and significantly cuts down the time needed for content management.
Companies can significantly improve operations efficiencies through intuitive "smart agent" interactions and continuous learning algorithms. The added functionality transforms Ateliere's flagship solution, Ateliere Connect, into a pioneering AI-first media supply chain platform.
Ateliere used a customer-centric approach to build out the GenAI roadmap. The company established an AI Advisory Council of leading entities from the global media, telecommunications and entertainment sectors. Council members provide input on how GenAI could fix the broken media supply chain. This deep collaboration with the community underscores Ateliere's commitment to innovation, offering comprehensive solutions that go beyond conventional predictive AI applications.
Connect AI is designed to provide actionable insights into the media supply chain through seamless workflow integration and intuitive interactions. With Connect AI, businesses can optimize their operations, reduce costs and improve speed to market by leveraging advanced AI technologies.
The solution features AI-driven automation and optimization for inventory organization, content discovery, metadata enrichment, content delivery and monetization.
The Connect AI engine utilizes off-the-shelf large language models (LLMs) along with Connect AI's custom models (the proprietary "special sauce") to intelligently drive these media supply chain processes.
Maximizing efficiency and business opportunities requires that your "metadata house" is in order. The first Connect AI module made available in July 2024, "Connect AI Ingest," brings order to data chaos by automating the ingest of content, identification and metadata organization.
Connect AI Ingest module automatically enhances asset metadata with technical, descriptive and relational data, providing a clear view of your catalog so sales teams can quickly and effectively sell your content. It is the first in a series of AI-driven modules. Future Connect AI modules will focus on enhancing monetization and optimizing other elements of the content supply chain.
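The kind of normalization Connect AI Ingest performs can be illustrated with a simple filename parser. The naming pattern and output schema here are invented for the sketch; the real module applies LLM-driven extraction across far messier inputs:

```python
import re

# Hypothetical delivery naming convention: Title_SxxExx_lang.ext
PATTERN = re.compile(
    r"(?P<title>[A-Za-z0-9_]+)_S(?P<season>\d{2})E(?P<episode>\d{2})"
    r"_(?P<lang>[a-z]{2})\.(?P<ext>\w+)$"
)

def normalize_asset(filename: str) -> dict:
    """Turn a delivery filename into structured, normalized metadata,
    flagging anything unrecognized for human review."""
    m = PATTERN.search(filename)
    if not m:
        return {"filename": filename, "status": "needs-review"}
    return {
        "title": m["title"].replace("_", " "),
        "season": int(m["season"]),
        "episode": int(m["episode"]),
        "language": m["lang"],
        "container": m["ext"],
        "status": "ingested",
    }

print(normalize_asset("Great_Moose_S01E03_sv.mxf"))
```

The "needs-review" path matters as much as the happy path: automated ingest only cuts errors if ambiguous assets are routed to a person rather than guessed at.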
BCNEXXT
Vipe RT
Vipe is a cloud-native, software solution that simplifies and speeds up the deployment of fully automated linear playout, OTT streaming and VOD playout. The latest release introduces Vipe RT, a dedicated, real-time playout engine that supports the most complex live event and sports requirements.
VIPE FEATURES:
• Containerized microservices and third-party outbound services are carefully managed through a centralized orchestration layer. This levels the workload, minimizes resources and reduces processing power, creating significant efficiencies and therefore cost savings.
• Multichannel error indicators enable a monitor-by-exception workflow, so issues can be fixed as they arise and before delivery time, contributing to the 99.9999% uptime seen by Vipe's users.
• All deliverables go through an automated quality-control process to check for availability, compatibility, timing and corruption. An embedded player enables manual checking, including audio and subtitles.
• Vipe's cloud-based services can be accessed from any web browser through easy-to-use, logical interfaces.
• Live feeds are decoded; DVE, graphics, subtitles, watermarking and any audio manipulation are applied and then re-encoded to be spliced into the transport stream playout process.
• Switching between multiple live events and pre-rendered clips is seamless and resources are only deployed as required by the complexity of the content.
Vipe can manage extremely complex graphics and has a native Adobe After Effects plug-in providing creative freedom for designers.
BROADCAST-GRADE CAPABILITIES:
1. Subtitles INV/DVB/TT/SCTE-27
2. Audio mapping
3. Loudness correction
4. SCTE-35 CUE tones
5. Watermarking
6. Join in progress
7. V-chip
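Of the capabilities above, SCTE-35 cue tones signal ad-break boundaries in the transport stream. As a conceptual sketch only (real SCTE-35 messages are binary splice_info_sections, not dicts), pairing the cue-out and cue-in events for one break might look like:

```python
def splice_events(break_start_pts: int, duration_s: float, event_id: int):
    """Generate a cue-out/cue-in pair for one ad break.

    PTS runs at 90 kHz, as in MPEG transport streams. The dict layout
    is a simplification of a binary SCTE-35 splice_insert command.
    """
    ticks = int(duration_s * 90_000)
    cue_out = {"event_id": event_id, "type": "splice_insert",
               "out_of_network": True, "pts": break_start_pts,
               "duration": ticks}
    cue_in = {"event_id": event_id, "type": "splice_insert",
              "out_of_network": False, "pts": break_start_pts + ticks}
    return cue_out, cue_in

out_cue, in_cue = splice_events(break_start_pts=8_100_000,
                                duration_s=30.0, event_id=1)
print(in_cue["pts"] - out_cue["pts"])  # 2700000 (30 s at 90 kHz)
```

Vipe RT's advanced break management for sports without fixed break times amounts to generating such pairs on operator command rather than from a schedule.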
NEW CAPABILITIES WITHIN VIPE RT INCLUDE:
• Advanced break management for sports like cricket without fixed break times
• Live Master Control panel with real-time switching, graphics selection, emergency slates and audio mapping, with the security of a low-latency WebRTC confidence monitor
• Swimlane View for management of breaks in regional opt-out channels
• Low-latency, live subtitling workflow
• CPU-only processing provides even more deployment flexibility
VIPE IS UNIQUELY AT THE EPICENTER OF THE NEW AND OLD WORLDS
• Old: It has all the broadcast-grade features required for live linear playout, whilst simultaneously consolidating streaming, FAST and VoD onto a common platform with a single point of technical and financial management.
• New: A modern, fully abstracted, containerized cloud-native architecture utilizes the dynamic scaling ability of the cloud to minimize resource requirements based on the complexity of the content. Vipe reduces resources, and therefore costs, by up to 8x, whilst maintaining reliability of up to 99.9999%, better than hardware-based systems, as measured by users.
The BCNEXXT team of industry veterans has hundreds of combined years of playout automation knowledge, plus the understanding of modern cloud architectures and technology needed to produce a robust, hugely efficient cross-platform playout solution. Our goal is to help today's media companies meet their expense-reduction goals.
Bitmovin
Bitmovin's Live Encoder
Bitmovin's Live Encoder, built upon the Emmy Award-winning Bitmovin VOD Encoder, delivers high-quality video and audio outputs on a platform built with a focus on resilience and reliability. Currently, Bitmovin's Live Encoder is the only live encoding SaaS platform on the market that allows customers to launch thousands of live channels or events across multiple cloud vendors globally at any time, with content owners able to distribute across multiple CDNs. As a testament to its robustness, the Live Encoder has recently been deployed by one of the world's leading social media platforms, with over 1 billion users, to deliver over 1 million live events a year worldwide, serving 600,000 live viewers daily. Bitmovin developed pooling logic to allow hundreds of end users to go live instantly and simultaneously, using an integration with Azure Cognitive Services to provide AI-generated closed captions.
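Pooling logic of the kind described, pre-warmed encoders handed out the instant a user goes live, can be sketched in a few lines. The pool sizes, naming and class design are illustrative, not Bitmovin's implementation:

```python
from collections import deque

class EncoderPool:
    """Keep a buffer of pre-started encoder instances so a go-live
    request never waits on cold-start provisioning."""

    def __init__(self, warm_target: int):
        self.warm_target = warm_target
        self.idle = deque()
        self.active = {}
        self._next_id = 0
        self._top_up()

    def _top_up(self):
        # In production this would launch cloud instances asynchronously.
        while len(self.idle) < self.warm_target:
            self.idle.append(f"encoder-{self._next_id}")
            self._next_id += 1

    def go_live(self, event: str) -> str:
        encoder = self.idle.popleft()  # instant: already warm
        self.active[event] = encoder
        self._top_up()                 # replenish the warm buffer
        return encoder

    def end_event(self, event: str):
        self.active.pop(event)         # instance recycled or terminated

pool = EncoderPool(warm_target=3)
enc = pool.go_live("creator-123")
print(enc, len(pool.idle))  # encoder-0 3
```

Sizing the warm buffer trades idle-instance cost against go-live latency, which is the core tension when hundreds of users can start streams simultaneously.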
The SaaS platform has expanded support to include Akamai's Connected Cloud and Oracle Cloud Infrastructure. Customers using Bitmovin Live Encoding with Akamai have reported reductions in operational costs of as much as 90%. The joint Bitmovin and Akamai solution is currently used by OneFootball, one of the world's biggest football platforms, helping reduce costs, while media experts from both suppliers have optimized delivery and cut end-to-end latency by 60%.
KEY BENEFITS:
• Effortless management: The underlying infrastructure is managed by Bitmovin, offering the application as a true SaaS.
• Deep integrations: Bitmovin's Live Encoder seamlessly integrates with users' applications using Bitmovin's comprehensive API.
• Higher quality with less energy: Like its VOD counterpart, the Live Encoder can provide outputs with high VMAF scores at lower bitrates than its rivals. In some cases this can result in a 50% reduction in storage and distribution costs.
• Live monetization: The encoder's SCTE-35 support and integrated partner solutions ensure customers can maximize revenue potential. Additionally, the CPU-based nature of Bitmovin's Live Encoder means it is built to scale at speed in cloud environments to support high concurrency. Moreover, it is one of the most resilient live encoders on the market, delivering rapid startup times (perfect for ad-hoc live events or DR solutions), resilient recovery on input loss, dual inputs and a distributed architecture.
Bitmovin's Live Encoder has over 200 customers, from SMBs to national broadcasters and OTT platforms. One of its first customers, OKAST, leveraged the encoder on the Amazon Web Services (AWS) cloud to achieve high-quality live streams with seamless transition to on-demand playback, contributing significantly to OKAST's growth, with a 230% increase in customers, with 70% attracted by the enhanced live experience.
The Bitmovin Live Event Encoder is also integrated with Grass Valley's Agile Media Processing Platform (AMPP) and Vimond's VIA CMS. On inputs it works well with Videon's LiveEdge contribution encoders, Zixi Broadcasters, Osprey's Talon 4K and a host of SRT Alliance partner devices, and it can deliver DASH-IF to Unified Streaming's Unified Origin, amplifying its ability to deliver streams in the highest quality imaginable. Additionally, it has recently been tested for compatibility with the Google DFP, AWS Elemental MediaTailor, Yospace and Broadpeak.io SSAI platforms.
Bitmovin
Bitmovin's Multiview
Multiview is a new feature for the Bitmovin Player that provides a more compelling experience for audiences by enabling them to watch multiple streams simultaneously on one screen. Bitmovin's Multiview is designed for use cases such as sports, esports and live events, where it's advantageous to provide the audience with the option of staying immersed in the content they want by removing the need to flick between different channels.
Bitmovin's Multiview has several key benefits that differentiate it from other solutions on the market. The first is that it is built on Bitmovin's Playback, which is proven to deliver high-quality video streams at scale on all devices. It's accompanied by in-built features such as advertising support, low-latency streaming and Digital Rights Management (DRM). Multiview is also supported by Bitmovin's Encoding and Analytics. As a result, Bitmovin's Multiview users can deliver superior streaming experiences to audiences without sacrificing monetization, security or viewer experience.
Bitmovin's Multiview is also differentiated because it's an end-to-end solution that includes Bitmovin's award-
winning encoding technology and pre-integrated analytics. Bitmovin's Encoding ensures audiences enjoy high-quality streams regardless of bandwidth with its Per-Title encoding, multicodec support with 8K, and multi-HDR support. The pre-integration of Bitmovin Analytics ensures that audience adoption of multiview can be measured, together with pinpointing and resolving any playback issues in real time before they impact the viewer.
Bitmovin's Multiview is compatible with iOS, Android, Web, Smart TVs and Connected TV devices. It also has a fully customizable user interface so viewers can easily switch between multiview and single streams. In multiview, viewers will see tiled playback where the player window contains multiple tiles. These tiles can show completely separate content, enabling custom use cases. By playing multiview streams in tiles, Bitmovin overcomes the challenge of most TVs only having one video decoder and only allowing the user to view a single video stream at any given time.
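The tiled-playback approach can be illustrated with a small layout calculation: composing every stream into tiles of a single video surface means the device only ever decodes one stream, which is how multiview sidesteps the single-decoder limit. This sketch (not the Bitmovin Player API) computes tile rectangles for N streams:

```python
import math

def tile_layout(n_streams, width, height):
    """Compute (x, y, w, h) rectangles tiling one video surface.

    Rendering every stream as a tile of a single composed video
    means the device needs only one hardware decoder.
    """
    cols = math.ceil(math.sqrt(n_streams))
    rows = math.ceil(n_streams / cols)
    tile_w, tile_h = width // cols, height // rows
    return [
        ((i % cols) * tile_w, (i // cols) * tile_h, tile_w, tile_h)
        for i in range(n_streams)
    ]
```

For four streams on a 1080p surface this yields a 2x2 grid of 960x540 tiles; an odd count simply leaves some grid cells unused.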
You can see a visualiser here: www.linkedin.com/feed/update/urn:li:activity:7226222160732659712
Blue Lucy Lucia
Blue Lucy's new AI assistant, Lucia, makes media management and workflow orchestration in BLAM accessible to everyone.
Blue Lucy's BLAM is a powerful orchestration platform with highly configurable workflow design capabilities. This model pays real dividends for repeat functional requirements. And, when you just need to do something once, and don't want to spend time designing bespoke workflows — there's Lucia.
From simple searches to bulk actions and complex chained commands, Lucia automates tasks that would otherwise require a sequence of manual steps. For example: if you've got a big batch of new content coming in and you need to generate placeholders, remove the bars and line-up and then save the media in a new storage folder, Lucia's got
it covered. And she remembers what you've asked her to do before, so it's really easy to repeat processes or revise them without having to start from scratch.
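A chained request like the one above can be pictured as a plan of existing platform operations executed in sequence; everything here is a hypothetical sketch, not Blue Lucy's implementation:

```python
def make_placeholders(assets):
    # Create a placeholder record per incoming asset.
    return [{"id": a, "status": "placeholder"} for a in assets]

def trim_bars_and_lineup(items, head_seconds=10):
    # Stand-in for a media operation removing bars/tone and line-up.
    for item in items:
        item["trimmed_head_s"] = head_seconds
    return items

def move_to_folder(items, folder):
    for item in items:
        item["folder"] = folder
    return items

def run_chain(assets, steps):
    """Apply a remembered plan of (operation, kwargs) steps in order,
    the way an assistant would translate one natural-language request
    into a sequence of existing platform operations."""
    result = assets
    for step, kwargs in steps:
        result = step(result, **kwargs)
    return result

plan = [  # a stored plan can be re-run or revised without starting over
    (trim_bars_and_lineup, {"head_seconds": 10}),
    (move_to_folder, {"folder": "/media/new-batch"}),
]
batch = run_chain(make_placeholders(["clip-1", "clip-2"]), plan)
```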
In addition, because Lucia is based on a large language model (LLM), she can also answer general questions that users may have about content formats, metadata and almost anything else — and you don't have to leave the platform to find the information you're looking for. Lucia can also be used as a business intelligence and support tool. If the AI assistant is integrated with BLAM's data processing service, Lucia can use historic data to predict asset-based costs, identify workflow bottlenecks and create ad-hoc reports.
Similarly, Lucia could rewrite BLAM user documentation in everyday language, tailored to your operational context, and answer users' technical queries, cutting down on customers' first-line support costs. Lucia isn't only powerful; this AI assistant is also completely safe. The application's abilities are limited according to the user's access privileges, so existing workflows or work orders can't be modified or deleted by Lucia.
With Lucia, complex multi-step tasks that would normally take 30 minutes take 30 seconds, and you don't need to leave the platform or be a technical expert to make them happen. Even typing is optional because Lucia responds to both text and voice prompts in conversational language, making BLAM accessible to almost anyone. Lucia takes care of the mundane, automating manual processes and providing business intelligence so that your team can focus on tasks that deliver value.
Bolin Technology SD530NX
The SD530NX is a revolutionary PTZ camera that raises the bar for professional broadcast and live production environments. Engineered for exceptional performance, this camera has superior PTZ capabilities, stunning image quality, and rugged outdoor design.
Powered by an advanced Sony image block with a STARVIS II sensor and Super Image Stabilization+, the SD530NX delivers stunning image clarity, even in challenging lighting conditions. This sensor excels in low-light performance, providing vibrant and detailed images even in dim environments. Combined with a 30X high-resolution optical zoom lens, it offers flexibility and detailed image capture.
The continuous 360-degree integral PTZ functionality is designed for smooth, accurate and quick operation. With an industry-leading 255 VISCA steps, the camera allows fine control over pan, tilt and zoom movements, with speeds ranging from a slow 0.01 degrees per second to a rapid 120 degrees per second. The camera supports up to 255 preset positions with exceptional accuracy, always ensuring precise framing.
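One way to picture the 255-step speed control is a curve mapping each VISCA step onto the 0.01 to 120 degrees-per-second range. The logarithmic mapping below is an assumption for illustration; Bolin's actual response curve is not published here:

```python
MIN_SPEED = 0.01   # degrees per second at the slowest step
MAX_SPEED = 120.0  # degrees per second at the fastest step
STEPS = 255

def visca_step_to_speed(step):
    """Map a VISCA speed step (1..255) to degrees per second.

    A logarithmic curve concentrates resolution at the slow end,
    where operators need the finest control. Illustrative only.
    """
    if not 1 <= step <= STEPS:
        raise ValueError("step must be in 1..255")
    ratio = (step - 1) / (STEPS - 1)
    return MIN_SPEED * (MAX_SPEED / MIN_SPEED) ** ratio
```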
A groundbreaking feature of the SD530NX is its industry-first 15-degree mechanical roll axis, which allows for horizontal adjustments and ensures perfectly aligned images even when the camera is installed at an angle. When paired with Bolin's KBD-1020N keyboard PTZ controller, operators can manually adjust the roll axis to create unique, dynamic shots. With dual output capabilities, the SD530NX supports simultaneous 3G-SDI and IP streams (RTSP, RTMP and SRT) and NDI 6-ready HX3, including Embedded Bridge, with independent resolutions. It also accepts multiple control protocols, such as VISCA over IP, NDI, ONVIF, VISCA over Serial and PELCO P/D.
The SD530NX fully implements the FreeD protocol, with accurate data reporting enabling AR/VR production capability. Race tracks in Europe and professional sports organizations in the U.S. have chosen Bolin cameras because of the image quality and accuracy of the FreeD implementation. The video output, control options, and FreeD implementation provide a flexible and versatile PTZ camera platform for present and future professional broadcast applications. Adding to its broadcast capabilities, the SD530NX provides access to the color matrix so that the footage can be intercut with other
cameras in the workflow.
Built for the most demanding conditions, the SD530NX boasts true outdoor capabilities. It has an IP67 rating for protection against dust and water, a reinforced rain wiper, and a C5-level salt corrosion-resistant coating. It has an ambient temperature sensor and an IR illuminator for shooting up to 70m in total darkness. It can withstand sustained winds up to 60 m/s and operate in extreme temperatures ranging from –40 degrees C to 60 degrees C (–40 degrees F to 140 degrees F). These features distinguish the SD530NX and ensure reliable performance in nearly any weather condition.
The SD530NX offers broadcasters an outdoor IP67-rated PTZ camera with an industry-first Roll Axis adjustment. It is capable of the SDI workflow they know and trust and has dual IP output, including NDI HX3 AVoIP solutions, to bridge present and future needs seamlessly.
BT Media and Broadcast
In live broadcasts, speed is of the essence. Every moment must be delivered to viewers with minimal delay. But it's a long road from the camera to their screens. Getting content from "glass-to-glass" takes something special. It takes a complete, end-to-end portfolio and a rock-solid, high-speed network. This is where Vena comes in, our intelligent media platform.
At IBC, we recreated a live sports event using a Scalextric track setup on our stand, so everyone could be involved. We had cameras filming the action on the track, with the feeds sent live from Amsterdam via Vena to our London Switch before being sent straight back to the display screens on our stand, showcasing just how low-latency and reliable our network is.
Visitors to our stand saw an end-to-end workflow from camera to screen, using Vena as the backbone for both contribution and distribution. We also showcased how Vena integrates with the entire portfolio. Starting with our TV Outside Broadcast services, "Vena on the move" encoded the feed to JPEG-XS 1080p HDR at the stand before transporting the content to our London Switch. Here the feed was distributed back to Amsterdam.
Our Vena Cloud and internet gateways enabled transport via SRT back to Amsterdam and into AWS, via a direct connect.
Utilising our playout product, Channel Origin, this same Vena feed was generated into a channel, which was visible on the stand.
The Vena platform portal allows us to monitor the feeds throughout the show. And the sheer scale of Vena was also showcased by a brand-new interactive map, enabling visitors to visualise the network and witness the Path Computation Engine (PCE) live in action.
Why Vena over a standard network?
Significant investments in infrastructure underpin our commitment to delivering what broadcast customers need: high reliability, low latency and route diversity. Vena was built using 100 GbE core circuits and over 140 core nodes across the U.K., ensuring comprehensive coverage, high bandwidth and low latency of 10ms. Even with complex and large networks, Vena's in-house developed Path Computation Engine (PCE) can guarantee geographical diversity and rerouting in moments with hitless switchover.
The PCE calculates diverse paths, which the Orchestrator then uses as it automatically builds services, obeying the design rules even at a fibre and ducting level.
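Conceptually, computing two diverse paths can be sketched as a two-pass search: find one path, remove its edges, find another. A production PCE would use an optimal disjoint-path algorithm (such as Suurballe's) and model shared fibre and duct risk groups; the node names below are made up:

```python
from collections import deque

def shortest_path(edges, src, dst):
    """BFS shortest path (by hop count) over an undirected edge list."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    return None

def diverse_paths(edges, src, dst):
    """Two edge-disjoint paths: find one, strip its edges, find another."""
    first = shortest_path(edges, src, dst)
    if first is None:
        return None, None
    used = {frozenset(hop) for hop in zip(first, first[1:])}
    remaining = [e for e in edges if frozenset(e) not in used]
    return first, shortest_path(remaining, src, dst)

edges = [("LDN", "BHM"), ("BHM", "MAN"), ("LDN", "BRS"), ("BRS", "MAN")]
main, backup = diverse_paths(edges, "LDN", "MAN")
```

On this toy four-node ring, the main and backup routes between LDN and MAN share no links, which is the property that lets a real network reroute hitlessly.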
Vena is a robust and versatile platform. Dynamic provisioning enables quick and reliable service creation and removal without extensive infrastructure upgrades. Capabilities include but are not limited to H.264, H.265 and JPEG-XS connectivity options coupled with IP or BNC Handoff features.
Vena delivers millions of hours of content a year and has been deployed at 300 customer sites. There are nearly 3,000 services on Vena at present with 1.5 Tbps of media traffic. We have 24 live permanent contracts on Vena including ITV, Digital 3&4, BBC, Arqiva and Racecourse Media Group. And we brought it live to a trade show, in real time from our stand in a car park in Amsterdam!
Calrec Audio Argo M
Calrec celebrated 60 years of putting sound in the picture at IBC2024, with the launch of Argo M. Built on the same multi-award-winning technology that powers Calrec's established Argo platform, the new Argo M brings the exact same feature set and operational familiarity as the larger Argo Q and Argo S consoles in a compact 24- or 36-fader footprint. Designed for small-to-medium applications and adopting the same "everything anywhere" approach, Argo M is a plug-and-play SMPTE ST 2110 native console.
Argo M features integrated DSP processing, with no networking or PTP sync required for independent operation. It also has built-in analogue and digital audio I/O and GPIO, 3x modular I/O slots for further expansion, and a MADI I/O port via an SFP.
The introduction of Argo M not only delivers Calrec's renowned heritage and feature set at incredible value, but it enables more broadcasters to take advantage of Argo's award-winning flexible IP technology. It answers the growing needs of Calrec's broadcast customers who are looking for more flexible workflows while still having the right monitoring and metering tools in front of them without the need for ancillary equipment. Argo M helps them to redefine their production models and retain all that in a cost-effective way. It caters for a variety of workflows; cores and surfaces can be geographically diverse to enable
remote and distributed production, as well as supporting interoperable networking and stage-box interfacing via AES67/ ST 2110-30 with ST 2022-7 dual-network/packet merging redundancy with either 1G or 10G connectivity.
Argo M supports 5.1.2, 5.1.4, 7.1.2 and 7.1.4 immersive paths for input channels, busses, metering and monitoring as standard, and the same familiar Calrec Assist UI for remote control on a standard web browser. Meanwhile, the Calrec Connect AoIP system manager makes IP streams easy to manage and adds essential broadcast functionality. Argo M is available with 304 or 356 internal DSP processing paths, while external ImPulse and ImPulse1 processing cores can expand processing capacity to 432 DSP paths as well as provide additional redundancy. This enables customers to expand Argo M as their production demands grow. With up to four Argo surfaces able to share access to a pair of redundant ImPulse cores, Argo M can also connect to an existing ImPulse core alongside other Argo surfaces to create multi-console environments, while the introduction of ImPulseV gives Argo M access to cloud-based DSP packs, with scalable and secure cloud-native DSP processing core and control software on demand.
KEY BENEFITS INCLUDE:
• Operators immediately have the right tools and feature-set including monitoring and metering, without the need for ancillary equipment.
• Customers can expand Argo M from internal to external processing as production demands grow.
• Same first-class Calrec heritage and feature set at a new price point.
• Smaller and lighter console with lightweight aluminium metalwork instead of steel parts.
• Reduced carbon footprint: 85% PCR (high recycled content) studio trims; 100% of cardboard packaging and 70% of foam inserts are recycled.
Calrec Audio True Control 2.0
Continuing its quest to solve tomorrow's audio challenges, Calrec's new True Control 2.0, which was launched at IBC 2024, builds on Calrec's RP1 True Control implementation to provide expanded levels of control in two key areas.
First, it gives users far greater levels of remote control without the limitations of mirroring or parallel controlling, with control of an expanded feature set including EQ, dynamics, routing, direct outputs and delay.
More fundamentally, it gives broadcasters unparalleled flexibility to scale their remote productions as needed by expanding the number of products it works with. True Control 2.0 is available on Type R, ImPulseV, Argo M, Argo Q and Argo S, allowing any of these products to remotely control any other True Control 2.0-enabled product. Moreover, any one of these controller consoles can access up to five other consoles simultaneously.
True Control 2.0 delivers a new flexible approach to audio mixing. The ability to control anything from anywhere is an incredibly agile way to work because it provides access to more cores, more faders/surfaces, and more control from more locations. It gives our broadcast customers more options.
True Control 2.0 enables users to control multiple consoles across different venues, while extending control to Calrec's Assist headless utility also allows broadcasters to access core functionality even when no physical faders are available. At IBC2024, Calrec demonstrated True Control 2.0 across
multiple consoles on the Calrec stand, with a 60-fader Argo Q delivering control to a 48-fader Argo S, a 36-fader Argo M, a 24-fader Argo M on an ImPulseV core and a Type R.
THE POWER OF TRUE CONTROL 2.0
• A new flexible approach to audio mixing
• Control anything from anywhere, meeting the demand for national and localised content
• Ultimate flexibility in what you control: small and large consoles alike can control or be controlled
• More of everything from anywhere (More cores, more faders/surfaces, more control, more locations, more flexibility, more agility, more options)
BENEFITS
• RP1: Expanded levels of control
• Cost-efficient/effective: Reduce travel, accommodation and equipment costs
• Flexible and scalable: More flexibility to cover more events by scaling/expanding resources (people and size of software/ hardware) as well as repurposing for another event
• Enhanced collaboration: Centralised talent pool with remote access to experts
• Growth in sports: More national and localised content
• Sustainability: Reduced carbon footprint
• Health and safety: Pandemic protection
• Increased remote control: Across feature set (EQ, dynamics, routing, direct outputs, delay) and Calrec technology
• Distributed production: Controlling locally and remotely
Cinegy
Cinegy Subtitling Service
Cinegy is thrilled to announce the upcoming launch of a groundbreaking new version of Cinegy Subtitling Service. This AI-powered update will revolutionize broadcast accessibility. As this is a future release, please note that the current website and available screenshots still reflect the previous version.
Once released, this update will transform broadcast accessibility with AI-powered automatic subtitling. The new AI component will enable real-time speech-to-text conversion, dramatically improving accuracy across multiple languages, reducing manual intervention, and enhancing our ability to handle live broadcasts and last-minute changes.
Our AI-enhanced service addresses the challenges broadcasters face in providing accessible content quickly and efficiently. It offers real-time, accurate subtitles for both live and prerecorded content, transcribing spoken language into DVB or Teletext subtitles on the fly.
Features include real-time AI transcription, multiformat support (DVB, Teletext, SRT, EBU-STL) and extensive language support using ISO 639-2/T codes. The service offers seamless integration into Cinegy Air Ultimate, providing the most
efficient solution for customers. While standalone deployment is possible, the integrated solution delivers optimal performance and streamlined workflow. Its scalability suits productions of all sizes.
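As a rough illustration of the output side of such a pipeline, timed transcript segments can be rendered as SRT cues; this is a generic sketch of the SubRip format, not Cinegy's implementation:

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments):
    """Render (start, end, text) transcript segments as numbered SRT cues."""
    cues = []
    for i, (start, end, text) in enumerate(segments, 1):
        cues.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"
        )
    return "\n".join(cues)
```

In a live workflow the transcriber would emit segments continuously and the formatter would flush cues as each segment closes.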
This update represents a paradigm shift in broadcast accessibility, boosting efficiency and reducing costs. It helps broadcasters meet and exceed accessibility standards, reaching wider audiences with high-quality, accurately timed subtitles. In a competitive market valuing accessibility and innovation, our AI-enhanced service gives broadcasters a definitive edge.
Consider a breaking news scenario: As events unfold, our service works silently, transcribing every word in real time with unprecedented accuracy. Viewers with hearing impairments stay informed without delay, while non-native speakers benefit from the written text.
As a future-proof investment, our AI-enhanced Cinegy Subtitling Service will continue to evolve. Its software-based nature ensures easy updates, making it ideal for forwardthinking broadcasters. It offers an unparalleled combination of speed, accuracy and accessibility, setting a new standard for inclusive broadcasting in the AI era.
By choosing our upgraded Cinegy Subtitling Service, broadcasters invest in a vision of television that's accessible to all, efficient to produce, and prepared for tomorrow's media landscape. This service represents the subtitle revolution — broadcasting for everyone, in real time, powered by the latest AI technology.
Cobalt Digital UltraBlue IP-MV
Cobalt is introducing the UltraBlue IP Multiviewer, a software-based multiviewer for compressed and baseband streams over IP. This product is available as a turnkey package (server plus software) or software-only, and the customer supplies a dedicated server.
Features include support for receiving audio/video content over IP across a variety of protocols and formats with very flexible audio routing, bringing a multitude of options to suit every application.
The intuitive web interface incorporates support for compressed and baseband (ST 2110) IP/SDI inputs and outputs allowing the multiviewer to grow alongside customers' needs. Multiple user logins are supported, as well as per-user access privileges for maximum security.
Mosaic configurations are a snap with arbitrary sizes and orientations, graphic overlays, ancillary data, tallies, UMDs and IDs. PIP configurations can be easily copied, and setups can be saved and restored. UltraBlue IP-MV will drive multiple HDMI displays in any orientation (landscape or portrait, selectable per display). PIPs can also be arbitrarily placed and rotated.
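Copyable PIP configurations and save/restore of setups map naturally onto serializable configuration objects. The field names below are illustrative, not the product's actual schema:

```python
import copy
import json

# One picture-in-picture tile described as plain data.
pip = {
    "source": "CAM 1",
    "rect": {"x": 0, "y": 0, "w": 640, "h": 360},
    "rotation_deg": 0,
    "umd": "CAM 1",
    "audio_bars": True,
}

# Copying a PIP configuration to a second tile, then relabelling it:
pip2 = copy.deepcopy(pip)
pip2["source"] = "CAM 2"
pip2["umd"] = "CAM 2"
pip2["rect"]["x"] = 640

# Saving the whole mosaic setup, and restoring it later:
saved = json.dumps({"pips": [pip, pip2]})
restored = json.loads(saved)
```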
The Cobalt UltraBlue IP-MV incorporates support for the most common types of ancillary data, including various types of closed-captioning display. The multiviewer features full audio support, with flexible output audio routing and fully configurable audio bars. UltraBlue IP-MV can use GPU acceleration if available; the number of inputs, streams and heads is a function of the available CPU/GPU.
THE MAIN PRODUCT HIGHLIGHTS ARE LISTED BELOW.
Input protocols:
• Compressed streams: UDP, RTP (with support for ST 2022-1 FEC), SRT and RIST (Simple and Main Profiles).
• Baseband streams: SMPTE ST 2110-20 (video), -30 (audio), and -40 (ancillary data).
Outputs:
• The multiviewer supports as many outputs (heads) as the hardware provides.
The UltraBlue IP-MV offers the following features:
• Arbitrary PIP positioning and rotation
• Support for portrait or landscape orientation, on a per-screen basis
• Support for arbitrary image and text (UMD) overlays
• Support for tally, including suitable protocol support
• Support for on-screen clocks of various types
• Support for arbitrary routing of inputs to PIPs, including replication
• Support for audio bars, with arbitrary size and positioning
• Support for output audio routing — any audio service can be routed to any output.
• Intuitive web interface for configuration, with support for multiple user access levels
• Support for copying configuration between PIPs
• Support for saving and restoring configuration
The UltraBlue IP-MV is part of a family of multiviewers, which also includes two SDI-based multiviewers in openGear card format:
• The cascadable 5-input 9970-QS multiviewer, with support for signals up to 3G-SDI.
• The 18-input, two-head 9971 multiviewer, with support for signals up to 4K in both inputs and outputs.
The feature set of the SDI multiviewers is the same as that of the software-based UltraBlue IP multiviewer. By offering both software-based IP multiviewers and hardware-based SDI multiviewers, Cobalt Digital can address the needs of a wide range of applications with a consistent product.
Comprimato Twenty-One Encoder
The Twenty-One Encoder by Comprimato is a versatile ST 2110 gateway encoder, designed specifically for live broadcast production. It enables contribution encoding and decoding of live formats including H.264, HEVC and JPEG-XS. Additionally, it effectively bridges traditional broadcast with ProAV environments by connecting ST 2110 networks with other live IP networks such as NDI, TS/SRT or RTMP. This integration is facilitated by NMOS control, which allows for precise management and integration of networked media devices, streamlining operations across diverse broadcast settings.
Addressing the rapid pace of technological advancements, the encoder is both software-defined and COTS-based, which facilitates easy updates to accommodate new formats and broadcasting standards as they evolve. This adaptability not only extends its operational life but also enhances its value, positioning it as a strategic asset for broadcast facilities looking to minimize hardware
turnover while maximizing functionality.
The encoder's dual capability to perform both encoding and decoding eliminates the need for multiple units, thereby saving space and reducing operational costs. Its design optimizes workflow efficiency, enabling the management of more channels and formats with less equipment. Despite its compact 1RU form, the Twenty-One Encoder handles up to 16 channels, offering a high channel capacity that is ideal for high-demand scenarios and tight spaces alike. It includes ST 2022-7 redundancy, exemplifying its robust and efficient design suited for the rigorous demands of live broadcast production.
Overall, the Twenty-One Encoder meets the critical needs of modern broadcasting and ProAV, providing a forward-looking solution that ensures reliability and scalability in professional broadcast settings.
CuttingRoom CuttingRoom
CuttingRoom's cloud video editor revolutionizes live video editing by enabling real-time manipulation of live feeds through its pioneering Growing Live Timeline feature. It is designed for seamless content creation, editing and publishing, eliminating delays in editing live footage and enabling instant content delivery. CuttingRoom's approach significantly enhances the efficiency and flexibility of video production, making it a game-changer in the industry.
CuttingRoom addresses the critical need for speed and efficiency in live broadcast content production. Customers choose CuttingRoom to increase their output and reach a wider audience with engaging video content. With its super responsive user interface and fast ingest, upload, rendering and publishing of videos, it is the preferred cloud video platform for anyone looking for a scalable video editing platform that answers today's video editing, working from anywhere, and collaboration requirements.
The video platform has smart features for capturing content directly from live streams, the CuttingRoom Reporter iPhone app or connected cloud services. Users can collaborate in real time when editing, adjusting aspects and frames, creating or importing multilayer graphics, and creating video clips in any required formats. Videos can be shared directly to any connected social media channels, creating an extremely efficient workflow for distributing content to a broad audience.
At IBC 2024, CuttingRoom became the first video editor in the world to launch native support for the new CNAP interoperability standard.
Out-of-the-box integrations make it quick and easy to use with footage from external cloud servers and for publishing content directly to favorite media platforms. New video assets can be saved in the CuttingRoom cloud platform or any connected MAM system, such as Mimir, Iconik or VIA, to AWS' S3 cloud storage buckets, Wasabi or Backblaze, to Dropbox or other integrated platforms.
CuttingRoom is built for access from anywhere and requires only an internet connection. There are no limitations on the number of users, projects, incoming streams or simultaneous outputs. No installation, updates or maintenance is needed, as it is a true cloud-native Software-as-a-Service (SaaS) offering.
A typical use of CuttingRoom for a broadcaster is to cut directly from live video streams during premium events, like a world championship or e-games events. Users can efficiently and quickly create clips from connected live video sources, add branding and multilayer graphics, create the different video aspects required, and publish directly to MAMs, VOD platforms, websites and any social media platforms. Time to market is crucial, especially for ad-driven content.
The CuttingRoom team demonstrated the platform at IBC and showed a range of features, such as:
• Real-time collaborative editing from anywhere
• Editing with multiple video tracks
• Animated and keyframed pan and scan with portrait mode outputs
• Editing straight from live sources coming from MAM partners like Mimir
• A powerful graphics engine enabling full motion graphics support in the editor
CuttingRoom stands out from the crowd with its unique combination of getting started in a minute, finding the footage needed, editing with professional features, and publishing to any platform from the same easy-to-use interface.
Dalet
Dalet AmberFin
The Dalet AmberFin premium transcoding solution now gives customers even more deployment flexibility, updated best-in-market pay-per-use pricing, as well as additional codecs and tools to optimize media supply chain workflows from capture to distribution. Dalet AmberFin delivers high-quality media conversions for advanced production formats to complex delivery standards with flexible deployments and elastic processing power. Dalet AmberFin is widely used to preserve artistic intent when transcoding for high-quality broadcast and OTT/streaming distribution. For studios, broadcasters and OTT providers, the enterprise media processing platform with a scalable transcode and workflow engine is indispensable thanks to its unparalleled image and captioning accuracy.
Regarded by the media and entertainment (M&E) industry as the standards bearer for premium image quality, leading studios and streaming services rely on Dalet AmberFin. Its high-quality, enterprise media processing delivers the best possible viewing experience. In addition to the revised pricing, the latest updates continue to build on this reputation by bringing new codec support and elastic workflows to the platform.
Dalet AmberFin utilizes the same proprietary transcoding engine regardless of your deployment. Thus, Dalet AmberFin on-premise customers can augment existing deployments by utilizing the cloud for additional capacity and managing peak usage. The elastic processing power gives customers a new level of agility and limitless scalability while delivering consistent results.
Clients can choose from a pay-per-use model as either a Software-as-a-Service (SaaS) deployment managed and hosted by Dalet or one hosted on their own AWS infrastructure. Dalet also continues to offer on-premises nodes on a subscription-per-node basis.
The new self-hosted pay-by-the-hour pricing has zero subscription fees and provides the lowest upfront and operational costs for customers who self-manage their cloud services. Regardless of the deployment model, Dalet provides customers with the industry's most flexible pricing.
When paired with Dalet Flex media logistics and Dalet Pyramid unified news operations platforms, Dalet AmberFin
becomes a uniquely powerful transcoding and media orchestration platform that enables content transcoding for mass distribution. Robust APIs and workflow engines ensure seamless integration and power state-of-the-art automation for complex workflows, including global delivery of localized titles and content to multiple viewing platforms.
FEATURES AND CAPABILITIES INCLUDE:
• Elastic media processing — depending on the use case, cloud deployments can save up to 50% over fixed on-premises nodes.
• Supports most production formats such as RED RAW, ProRes RAW or ProRes 4444 XQ, Adobe DNG, Nikon NEF, DPX — with the ability to ingest camera cards and output proxy and mezzanine formats cost-effectively, with top-quality results.
• Handles all the complexity of frame rate conversion, upscaling, de-interlacing, HDR and color space conversion, with certified results from Dolby, Apple and others.
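At its simplest, the frame rate conversion mentioned above is a matter of timestamp resampling, repeating or dropping source frames to hit the target rate. The sketch below shows only that timestamp math; it is an illustration, not AmberFin's engine, which uses far more sophisticated (e.g. motion-compensated) techniques.

```python
def resample_frames(n_src, src_fps, dst_fps):
    """Map each output frame index to a source frame index by
    flooring the time-scaled index (naive repeat/drop conversion).

    Illustrative only; real converters interpolate new frames
    rather than repeating existing ones.
    """
    duration = n_src / src_fps
    n_dst = round(duration * dst_fps)
    return [min(n_src - 1, int(i * src_fps / dst_fps)) for i in range(n_dst)]

# 25 fps -> 50 fps: every source frame is simply shown twice.
mapping = resample_frames(4, 25, 50)
```

The visible judder this naive approach produces on non-integer rate ratios is exactly why certified, motion-aware conversion is a selling point for enterprise transcoders.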
Dalet AmberFin gives customers more choices on how to best manage their media operations. Since there is no need to port conversion profiles or workflows, customers can use existing workflows fully in the cloud or create a hybrid framework without ever compromising on quality. The cloud transcoder capability is included seamlessly within the Dalet AmberFin Workflow Designer, so that customers can be up and running in the cloud on day one.
Dalet
Dalet Flex
Dalet Flex is a cloud-native media workflow and supply chain solution that enables media-centric organizations to produce, manage, orchestrate, deliver and monetize content at speed and scale using AI-powered tools. Workflows can be configured in the cloud, on-premises or in hybrid environments, and the platform brings much-needed efficiencies to content production, packaging, repackaging, distribution and archiving. As market demands evolve, the platform provides increased extensibility thanks to its cloud-native, microservices-based architecture.
Dalet Flex offers collaborative, AI-enabled workflows that harness a single, cloud-native technology stack and workflow engine to facilitate production, distribution, archiving and monetization for news, sports and entertainment. Standardizing enterprise workflows and integrating key technologies enable customers to achieve greater efficiencies at scale. Reducing touch points means customers don't need to increase staffing significantly, even as they grow. The structured environment, automation and integrated technologies result in cost savings and improved profitability.
For example, Dalet Flex provides companies a much-needed migration path forward. Transitioning from entrenched manual workflows to an automated system requires a shift in processes and a mindset change. During busy periods, people are often tempted to take shortcuts, which can affect the quality of output. Dalet Flex workflows are designed to ensure structure and reduce the inclination to cut corners. More accurate results significantly boost overall productivity and success.
Due to its data-driven framework, Dalet Flex also leverages automation and data insights to achieve superior operational efficiency. Valuable data insights identify bottlenecks in processes, such as open review sessions or other human interaction workflows. Customers can easily and quickly course correct workflows via the flexible workflow engine, avoiding any slowdowns.
The latest version of Dalet Flex includes a range of operational efficiency capabilities, including enhanced growing-file management and editing capabilities that accelerate workflows. Improved UX functionality provides faster, easier metadata management. Plus, new
cost-monitoring capabilities track cloud storage and processing costs.
REAL WORLD CASE STUDY:
Recently, a high-profile creative facility incorporated AI into its ingest workflow to better manage specialized localization services such as transcription, speech-to-text and text-to-text at scale.
The company has offices in London and Los Angeles and services 200+ clients globally, each with specific workflows. Instead of on-site linguists, the facility collaborates with in-territory translators globally to localize content for nearly 90 territories. With an annual output of 40,000 pieces of content, manual transcription is impractical. While automatic transcription may seem ubiquitous, Dalet Flex provides a secure, scalable enterprise process that leverages human review and correction. Although the system is closed to protect clients' IP, human intervention ensures that the intricacies of idioms, colloquialisms and cultural references properly represent tone and intent.
The team now works from an enterprise-level AI-generated version validated within Dalet Flex. Optical Character Recognition (OCR) converts typed, handwritten or printed text in the media into machine-encoded text. This transforms staff from transcribers to editors, drastically reducing manual effort. Automating the process saves significant time, money and resources.
Dalet
Dalet Pyramid Planner
Dalet Pyramid Planner is a cloud-native, customizable news planning application that maximizes collaboration and synergies across the news organization. The solution employs a story-centric approach that enables digital-first workflows that are not tied to the rundown. This enables a much more dynamic workflow that is adaptive and responsive to audiences on every platform.
Traditionally, news stories revolved around the rundown. Under this workflow, resources, assets and story ideas are typically siloed and not shared across the wider news organization. Further, digital teams would have to wait until the broadcast team stories were assembled before preparing the stories for social media and digital platforms. This doesn't work for several reasons:
1. Audiences are going to digital platforms first.
2. News stories often have national and global appeal — audiences want constant updates.
3. Modern newsrooms need to support (sometimes globally) dispersed teams collaborating on a common pool of story assets.
Dalet Pyramid Planner provides a centralized dashboard to cover stories collaboratively. Instead of working on local, siloed standalone systems, all users — the desk, field correspondents, newsroom producers, editors and digital teams — have access to assets, assignments and more in the cloud. Alerts provide important notifications on story updates and developments. Federated tools, underpinned by the industry's most performant media asset management and orchestration platform, facilitate collaboration, story creation and distribution with the highest degree of transparency and speed.
By centralizing all news planning with Dalet Pyramid Planner, users can visualize, manage, assign and track stories quickly and easily across teams and platforms (digital, social, TV and radio broadcast, print), locations (stations, offices, field teams) and entities (partners and sister organizations). Highlights and capabilities include:
• Facilitate production workflows. The planning-centric workflow becomes the primary organizational tool for journalists and production teams. They can assign stories, set deadlines, add tasks and share editorial status.
• Go digital-first. No need to wait for the TV story to be broadcast to repurpose the content for digital platforms. The digital and social team can take the lead and cover the story before it is broadcast on TV.
• Address multiple platforms for any given story. The notion of the story is radically transformed in a planning-centric workflow. The same story now hosts all of its versions for different platforms, each with specific needs. For instance, the script and the VO differ between a TV story and versions for YouTube, TikTok and X.
• Centralized planning. The centralized news planning solution allows you to always keep the source media and related projects under the same umbrella. It is therefore easy to reuse existing media and projects to further develop a story in a shorter or longer format for digital media.
• Digital/broadcast teams can organize around and collaborate on important stories across a multi-site news operation.
• Gain visibility into who does what, deadlines, when and why delivery is due, what the next assignment is, and who you will collaborate with.
Densitron
ProDeck 24
Our ProDeck 24 is a state-of-the-art control surface designed for desktop control in the broadcast and AV industry. IBC 2024 was the world premiere for our ProDeck 24 variant. The ProDeck series to date also includes our ProDeck and ProDeck Touch. Together, the series addresses a multitude of user needs.
The ProDeck 24 boasts a robust embedded computer, ensuring smooth performance for tasks like content streaming and multimedia editing. The device also features a high-resolution 8-inch IPS display with 1280 x 800 resolution, offering a tactile touchscreen experience.
Each button on the ProDeck serves a dual purpose, functioning as a control and providing interactive menus or real-time video previews. This allows users to switch between applications while maintaining focus on their work. The buttons are precision-engineered to emulate real-life buttons, enhancing user interaction. The hybrid version, which combines a touchscreen with 24 buttons, provides additional flexibility and control, allowing users to benefit from both tactile and touch-based inputs.
ProDeck 24 is powered by a single Ethernet cable (PoE+), simplifying installation and reducing operational costs. The single Ethernet cable connection ensures a hassle-free setup, allowing users to get started quickly without the need for complex wiring. It supports both Linux and Windows applications, providing flexibility to customize the interface according to user preferences. ProDeck also allows the installation of third-party software and applications. This means users can integrate their preferred tools and applications, enhancing the device's functionality and adaptability to specific workflows.
ProDeck is a standalone device designed for desk mounting. It does not require any additional computer to be connected for its functionality, making it a compact and efficient solution for various control needs. Additionally, it features a solid industrial-grade finish, ensuring it remains stable and does not move when operated or when buttons are pushed.
ProDeck is designed with user convenience in mind. Its intuitive touchscreen interface allows for touch, swipe, pinch and scroll functions, making it easy to navigate through various applications. Additionally, the customizable buttons
and real-time video previews enhance operational efficiency, enabling users to manage workflows effortlessly.
Versatility is a big plus point for the ProDeck 24. This control surface can be used in various scenarios throughout the broadcast and AV industry, including:
• Studio Control: Ideal for managing cameras, lighting, audio settings and video displays, ProDeck provides a centralized control solution for studio environments.
• Live Production: Its real-time video preview feature is particularly useful for live production settings, allowing operators to switch between feeds seamlessly.
• Content Creation: With its powerful processor and customizable interface, ProDeck is perfect for multimedia editing and content streaming tasks.
• System Integration: ProDeck's modular design makes it suitable for integration into larger broadcast systems, enhancing overall workflow efficiency.
In summary, Densitron's ProDeck 24 stands out as a powerful, user-friendly, and versatile control surface. Its innovative features and ease of use make it an invaluable asset in the broadcast and AV industry. The availability of multiple variants ensures that there is a ProDeck model to meet the diverse needs of users.
DPA Microphones
2061 Miniature Omnidirectional Microphone
DPA Microphones' new 2061 Miniature Omnidirectional Microphone is intended for individuals working with professional-level sound capture of the voice, in all of its nuances. From broadcast studios and ENG applications to theatres, event venues and houses of worship, the 2061 lavalier is a high-quality, easy-to-use solution that is great for any live or recording application.
Offering simplicity and natural sound, the 2061 was designed to appeal to audio professionals who are not yet part of the DPA family and are looking to upgrade their audio quality with a crystal-clear, no-frills microphone. With its durable and rugged enclosure and premier DPA sound, the 2061 does not compromise on excellence. It is a great miniature microphone that is a worthy competitor to other well-known and widely used models.
Borrowing several design elements and principles from other DPA lavaliers, the 2061 features a completely new, simplified 5 mm capsule construction that targets superior performance in the 50 Hz–16 kHz frequency range, with 128 dB peak SPL. Like its predecessors, the 2061 has best-in-class speech intelligibility via the clear, natural DPA sound, as well as a robust mechanical construction for long-lasting performance.
This pre-polarized condenser microphone is designed to accurately capture the human voice from any direction, with minimal coloration and a high level of fidelity and intelligibility. It offers a flat frequency curve, with a soft 3 dB boost at 8 kHz–16 kHz. The mic is not pre-tailored to compensate for any specific position in which a lavalier is commonly placed, which allows a sound engineer to capture authentic sound from any arrangement.
Additionally, the 2061 features the same durability, reliability and repairability that users have come to expect from DPA microphones. This is best exemplified in the 1.5-meter-long (4.5-foot) Kevlar-reinforced cable, with a long and flexible strain relief for resilience against physical stress, as well as the microphone's advanced sweat-repelling capabilities and IP57 certification for exposure to water, dust and makeup, which affords ease of cleaning and a prolonged life.
The 2061 also features a fixed cap and permanent
connector options that are compatible with 3-pin LEMO, TA4F and Mini-Jack, which together provide an easier-to-use solution, especially in chaotic work environments requiring quick and smooth results.
Available in multiple colors (black, white, beige and brown), it can blend in with any actor, talking head, outfit or costume. The black and white clip options offer integrated cable management, 360-degree mic rotation for the desired angle and an easy-to-open clamp with a non-slip surface for secure mounting. Additionally, the recently launched AIR1 Universal Miniature Fur Windscreen is an ideal accessory for the 2061 in windy, outdoor applications.
As part of DPA's new environmental initiatives, the company is focusing on reducing the impact of its packaging and shipping, resulting in a reusable, resealable pouch for the 2061 that is designed for less weight and volume for shipment and provides a practical advantage to users. The 2061 comes with a two-year limited warranty.
Electric Friends
AxisCtrl
AxisCtrl represents a major leap forward in broadcast technology, delivering a seamless and scalable solution for managing all types of studio robotics, from PTZ cameras to advanced robotic systems, all while leveraging the power of AI-driven automation.
AxisCtrl is built with the philosophy of open robotic control, reflecting the shift from vertically integrated, proprietary systems to more flexible, horizontal frameworks that integrate best-in-class technologies. This evolution mirrors transitions in other industries, such as computing in the 1980s and mobile technology in the 2000s. By supporting all robotic and camera hardware without restrictions, AxisCtrl delivers a future-proof solution for studios seeking an open, scalable, cost-effective system.
The power of AxisCtrl lies in its ability to integrate various hardware components into one unified control system seamlessly, with advanced features for robotics control. Whether building a new studio or upgrading an existing one, AxisCtrl provides an affordable and scalable robotic control platform that evolves with your production needs.
KEY FEATURES AND BENEFITS:
AxisCtrl is designed to simplify and automate complex production workflows. Some of its standout features include:
• Multi-Shot Salvo With Timeline: This feature allows operators
to easily schedule and automate camera movements across multiple shots with precision timing, offering full control over shot sequences to ensure smooth transitions.
• AI-Driven Automated Production: Leveraging artificial intelligence, AxisCtrl automatically adapts to production needs in real time. Recognising and responding to talent movements, speech and scene changes enables dynamic, responsive camera control without requiring constant manual input.
• Face Tracking and Pose Estimation: Places talent properly in a user-defined frame and dynamically adjusts to the talent's pose.
• Open Robotic Control: AxisCtrl is compatible with all major robot and camera hardware systems, from PTZ to advanced robotic arms, allowing studios to integrate the best equipment. Its table-driven dynamic interface makes setup and management straightforward, enabling studios to scale and expand as needed.
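The multi-shot salvo with timeline described above amounts to firing camera presets at scheduled offsets in order. The sketch below shows that idea only; the class and method names are hypothetical and do not reflect AxisCtrl's actual interface.

```python
import heapq

class SalvoTimeline:
    """Toy scheduler for a multi-shot salvo: camera presets fire at
    timed offsets, ties resolved in the order they were added.
    Illustrative only, not AxisCtrl's real API.
    """
    def __init__(self):
        self._events = []   # min-heap of (offset_s, seq, camera, preset)
        self._seq = 0

    def add(self, offset_s, camera, preset):
        heapq.heappush(self._events, (offset_s, self._seq, camera, preset))
        self._seq += 1

    def run(self):
        """Return events in firing order as (offset, camera, preset)."""
        order = []
        while self._events:
            t, _, cam, preset = heapq.heappop(self._events)
            order.append((t, cam, preset))
        return order

tl = SalvoTimeline()
tl.add(2.0, "cam2", "two-shot")
tl.add(0.0, "cam1", "wide")
tl.add(2.0, "cam3", "close-up")
```

A real controller would interleave these fire times with the move durations of each robotic head to guarantee smooth on-air transitions.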
A NEW STANDARD IN ROBOTIC CONTROL:
AxisCtrl has been under development since 2020, and from the start, the goal has been to eliminate the restrictions imposed by vertically integrated robotic control systems. Creating a solution that integrates easily with any hardware enables studios to choose the best tools for the job without being locked into a single ecosystem. This flexibility simplifies the technical challenges of managing studio robotics, reduces costs, and expands possibilities for creative productions. Tracks can be hidden on the floor, and camera movements are virtually silent, ensuring that productions run smoothly and without distractions.
As the broadcast industry moves into an era of greater automation and more complex productions, AxisCtrl stands at the forefront of innovation. Electric Friends sets a new standard for managing robotic systems in modern broadcast environments by empowering studios with open, flexible control systems.
Eluvio
Eluvio Content Fabric and Application
Suite — Casablanca Release
The Casablanca release of the Eluvio Content Fabric and Application Suite is a next-generation content distribution and storage technology that provides ultra-fast, efficient, tamper-proof streaming, download and monetization of video and other digital media at scale.
In April 2024, Eluvio unveiled the Casablanca release of the Eluvio Content Fabric protocol to deliver premium live streaming, PVOD, FAST channels and video archive monetization at scale for all sports and entertainment companies.
The protocol significantly simplifies distribution by replacing file workflows with a hyper-efficient streaming and delivery pipeline built into the protocol itself. This dramatically reduces bandwidth and storage compared to clouds and CDNs, slashes cost and carbon, and enables unlimited re-use of the same content (no files/egress), personalization without compromising scale, and built-in security, authenticity and rights.
The Casablanca Release delivers mass-scale performance and deterministic low-latency live and VoD adaptive bitrate streaming (<1 second segment delivery times for 99% of clients and segments); end-to-end per-session encryption, DRM for all formats, forensic watermarking and visible watermarking; automatic and instant catch-up viewing and Live-to-VOD (DVR) with no file copies; automatic configuration for MPEG-TS/SRT/RTMP live stream sources; and many advanced features for premium, broadcast-grade audio/video streaming, including frame-accurate content composition, scalable server-side personalization and in-stream HTML5 graphic enrichment.
The AI Tagging and Search service includes generative and multi-modal AI tagging, Semantic Search and automatic Summarization of video and images in-Fabric. Unlike other AI content workflows, no content or metadata must move since AI inference happens in place and the resulting metadata is stored in the respective object. Content is automatically searchable and actionable in the Fabric's dynamic streaming pipeline to automatically create clips, insert content/ads, create highlights, etc., with no file or metadata transfers.
Building on the Fabric's novel tamper-proof and owner-controlled security model, Casablanca includes a new content
verification API and executes a Verification proof to display the authenticity and provenance of streaming video and images. Users can verify that content served by the Fabric is authentic (not fake) and can view its ownership, version hash and C2PA credentials.
Compared to legacy distribution, this saves substantial (10x+) resources by avoiding file copies and producing all media variants from the same source, yields ultra-low-latency streaming performance by design, and simplifies operation. Metadata in "parts" drives media output composition for server-side personalization and AI-driven content. Use cases include live recording, tagging and clipping with no copies or file movement/egress. Scalable streaming, archive monetization and title servicing offer 5–10x cost savings over legacy distribution, and all content is reusable without remaking or redistributing variants.
Companies and creators whose content experiences have been powered by Eluvio include Amazon Studios/MGM, Dolly Parton, European Professional Club Rugby, FOX, SONY Pictures, Telstra Broadcast Services, UEFA, Warner Bros., WWE and many others.
ENCO
enCaption Sierra
enCaption Sierra represents the next generation of ENCO's groundbreaking enCaption solution for fast and accurate automated conversion and delivery of captions in broadcast and AV environments. enCaption Sierra continues the story of innovation and success in applying AI and machine learning to live captioning, attaining new benchmarks in speed and accuracy, as each prior generation has. For the first time, enCaption Sierra brings ENCO's market-leading automated captioning technology together with an SDI captioning encoder to create an all-in-one solution for on-prem environments, along with containerized deployment options for the cloud.
ENCO continues to push the boundaries of live automated captioning in diverse applications and environments. enCaption Sierra's ability to produce and deliver faster and more accurate captions correlates directly with new parallel processing capabilities powered through a GPU and large language models. The GPU architecture delivers unprecedented
computational power to ENCO's AI-based speech-to-text engine.
The same improvements are present in Sierra's integrated enTranslate module, which uses machine translation and grammatical structure analysis to deliver captions for up to 37 languages simultaneously. ENCO also adds new languages to enTranslate's engine on a consistent basis, with three new languages added in time for Sierra's public debut at NAB Show 2024, including a bilingual language model for Spanish-English content.
ENCO has also sharpened the listening ability and responsiveness of the system with the Sierra release, improving performance in challenging audio environments. enCaption Sierra also leverages GPU processing for improved speaker change detection and the ability to recognize music, laughter, applause and crowd noise. Sierra is an excellent listener, reliably handling fast speech, strong accents and problematic audio quality to consistently deliver clean captions.
enCaption Sierra can be delivered on Windows or Linux operating systems and is managed and monitored from a web browser. enCaption Sierra's modern GUI features a simple calendar scheduler and various configuration settings, including custom dictionaries, word models, filtering and bilingual language options.
ENCO continues to lead the charge for captioning innovation as it pertains to helping broadcasters and MVPDs produce clean closed captions for their broadcast content with greater efficiency. The same philosophy applies to open captioning needs in AV environments to help audiences improve comprehension. Sierra also addresses the need for flexible deployment options, offering complete on-prem and cloud options that align with each broadcaster's operational model with significant performance and cost-reducing benefits.
Evergent
The Evergent Sports Accelerator
CHALLENGE
The sports industry is experiencing a shift towards direct-to-consumer (D2C) streaming as federations and rights-holders look to generate maximum revenue from content assets while building data-driven fan relationships. While a wide range of sports brands, leagues and streamers are launching their own D2C services, they need robust, fan-centric tools to gain, retain and monetize subscribers.
Seasonal churn remains a significant issue, with fans often canceling subscriptions once major events or sports seasons end. Acquiring new subscribers is costly, ranging from five to 25 times more expensive than retaining existing ones. Additionally, sports networks must navigate complex geofencing rules and blackout restrictions, which, if mismanaged, can lead to user dissatisfaction and further churn. With a busy summer of sport behind us and a new season for premier European sports leagues just kicking off, providers need to offer personalized D2C experiences to differentiate themselves from competitors.
SOLUTION
Evergent's Sports Accelerator addresses these challenges with a suite of features that optimize monetization, enhance subscriber retention, and ensure compliance with legal and broadcasting requirements.
• Seasonal Subscription Management: Aligns subscriptions with sports seasons to ensure continuity and minimize churn, providing consistent pricing and seamless transitions between seasons.
• Pause and Resume Functionality: Allows subscribers to pause their subscriptions during off-seasons rather than canceling, reducing churn and retaining fans year-round.
• Couch Rights for Geofencing Compliance: Ensures fans can access their favorite sports content regardless of their location, maintaining engagement while adhering to regional licensing agreements.
• Enhanced Sales and Distribution Channels: Supports diverse sales
channels and localized payment methods, enabling fans to subscribe directly through their favorite teams or platforms, maximizing reach and revenue.
• Personalization Through Data Utilization: Captures valuable subscriber data to deliver personalized content and offers, increasing engagement and creating upselling opportunities.
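The seasonal pause-and-resume logic described in the first two bullets above can be sketched as a tiny state machine. The class and field names here are hypothetical, not Evergent's schema; the point is only that pausing over the off-season keeps the subscriber on file instead of churning, with billing restarting at the new season.

```python
from datetime import date

class SeasonalSubscription:
    """Toy model of off-season pause-and-resume.

    Illustrative assumption: a subscription may pause once the season
    ends and reactivates automatically when the next season starts.
    """
    def __init__(self, season_end, next_season_start):
        self.status = "active"
        self.season_end = season_end
        self.next_season_start = next_season_start

    def pause(self, today):
        if today >= self.season_end:
            self.status = "paused"      # retained on file, not churned

    def resume_if_due(self, today):
        if self.status == "paused" and today >= self.next_season_start:
            self.status = "active"      # billing restarts with the season

sub = SeasonalSubscription(date(2024, 5, 19), date(2024, 8, 16))
sub.pause(date(2024, 6, 1))
sub.resume_if_due(date(2024, 7, 1))   # still off-season: stays paused
sub.resume_if_due(date(2024, 8, 16))  # new season: reactivated
```

In a real platform this state would also drive entitlement checks and pro-rated billing, but the retention mechanism is the same.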
OUTCOME
Evergent's Sports Accelerator has helped deliver significant benefits for sports streamers:
• Reduced Churn: Features like seasonal subscriptions and pause-and-resume options help alleviate subscriber churn, maintaining a stable customer base throughout the year, even during off-seasons, as seen with the NBA.
• Increased Revenue and Engagement: Platforms such as SonyLIV and YES Network have reported substantial gains, including a 90% increase in active subscribers and a 400% rise in subscriber acquisition during major events, demonstrating the effectiveness of Evergent's solution in maximizing subscriber value amidst rising sports rights costs.
• Compliance and User Satisfaction: By managing geofencing and blackout restrictions effectively, Evergent's solution ensures seamless viewing experiences for fans even when away from home, reducing churn and maintaining high viewer satisfaction, a strategy that has worked for regional sports networks such as Bally Sports.
• Enhanced Personalization and Loyalty: Using data insights for personalized experiences based on player and team preferences has driven higher engagement and long-term loyalty. This data-driven approach allows sports providers to deliver a fan-centric feel, from acquisition and onboarding through to ongoing management and even cancellation, improving retention and brand equity.
Evergent's Sports Accelerator provides a comprehensive solution for sports streaming providers, enabling them to maximize the value of their content rights, reduce churn and drive profitability through flexible and fan-centric experiences.
Evertz
NEXX Routers
NEXX by Evertz provides broadcast facilities with access to the latest UHD (4K and 8K) technology, supporting SD/HD/3G/6G/12G data rates. This fully passive solution offers a clear pathway for future IP expansion, enabling seamless integration of cloud services into workflows.
NEXX routers come in compact modular frames with main interface/backplane options, available in 5RU (384x384) and 3RU (96x96) configurations. FX-LINK, Evertz's next-generation X-LINK for Expansion and Router Distribution Architectures, facilitates interconnections between NEXX frames. By connecting three NEXX routers, users can create a single 960x960 non-blocking routing matrix, making it the largest 12G-SDI router in the industry.
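Conceptually, the non-blocking matrix described above means any source can reach any destination at any time, with each destination carrying exactly one source and one source free to fan out to many destinations. The sketch below shows just that data model; it is not Evertz' control protocol, and the names are hypothetical.

```python
class CrosspointRouter:
    """Toy non-blocking crosspoint router.

    Each destination takes exactly one source; a source may feed many
    destinations. A non-blocking matrix never refuses a legal route.
    Illustrative only, not MAGNUM-OS or any Evertz API.
    """
    def __init__(self, n_in, n_out):
        self.n_in, self.n_out = n_in, n_out
        self.xpt = {}            # destination -> source

    def route(self, src, dst):
        if not (0 <= src < self.n_in and 0 <= dst < self.n_out):
            raise ValueError("crosspoint out of range")
        self.xpt[dst] = src      # taking a new source replaces the old

    def source_of(self, dst):
        return self.xpt.get(dst)

r = CrosspointRouter(960, 960)
r.route(5, 100)
r.route(5, 101)    # one source fanned out to two destinations
```

A hardware router adds redundancy, audio shuffling and tally on top, but the routing state reduces to this destination-to-source map.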
All NEXX products feature fully redundant control and easy component swapping, including crosspoints, fans and I/O modules. They offer native full audio shuffling and an integrated, software-enabled multiviewer with over 40 pre-configured layouts. Internal Evertz X-Link signaling ensures NEXX remains penalty-free, avoiding unnecessary output usage. The platform's ability to tap into additional license-enabled features makes it highly customizable.
The NEXX-670 module expands processing options for NEXX, allowing users to choose functionality by loading
different FPGA-accelerated apps. Based on Evertz' popular ev670 platform, NEXX-670 is ideal for facilities transitioning to IP with gateway apps. It offers advanced bulk processing and a sophisticated, fully-featured multiviewer with high-end monitoring and multiviewing options.
Completing the next-generation processing module lineup is NEXX-SCORPION, which supports miniature input and output (MIO) modules that allow for high customization and flexibility. It natively supports non-SDI interfaces, including HDMI and Dante.
The NEXX platform provides a path to SMPTE ST 2110 with IP gateway apps and a bridge to remote and cloud production with the support of JPEG XS. Controlled by MAGNUM-OS, NEXX ensures future-proof broadcasting solutions for users of all sizes and applications.
Evertz
RF Over IP
In a move that is comparable to the transition from SDI to IP, Evertz' advances will help the satellite and ground systems industry develop the network transport of digitized IF signals. These groundbreaking advancements in revolutionizing satellite ground infrastructure are poised to redefine workflows and enhance efficiency for broadcast and media professionals worldwide.
As a global leader in media and broadcast solutions, Evertz is playing a key role in the Digital IF Interoperability Consortium (DIFI), an international nonprofit membership consortium that aims to advance interoperability in satellite and ground systems networks and set new standards for the industry.
These advances are comparable to the broadcast industry's transition from SDI to IP, an area where Evertz has demonstrated leadership and expertise. Once implemented, they will revolutionize current business structures and help the satellite and ground systems industry develop the network transport of digitized IF signals.
Underscoring its commitment to advancing DIFI and ensuring a smooth transition for those ready to embrace this new era of satellite signal processing, Evertz recently joined industry peers and stakeholders at the second DIFI PlugFest, which was held in the U.K. in June. Nine vendors tested the compatibility of version 1.2 of the IEEE-ISTO 4900-2021 Standard, which was introduced last September, while version 1.1 was also tested for functionality. In total, 178 test cases were executed, with 93% of tests at least partially compliant and 75% fully compliant. Once again, Evertz provided the RF and IP routing infrastructure necessary for seamless signal routing and distribution among various vendors.
Evertz's cutting-edge digital IF conversion and processing technology is setting new standards for the industry. Highlighting the modular and hot-swappable 7880RFIP platform, Evertz offers up to 28 bidirectional conversions in a 3RU frame, or up to eight bidirectional conversions in a compact 1RU frame. A groundbreaking debut is the capability to digitize up to 1 GHz of instantaneous IF bandwidth per channel, a significant advancement in spectrum management. The digitized IF is transported over WAN networks using IP, removing the distance limitations experienced by customers using dark fiber and opening new workflows and approaches for satellite dish farm placement in the RF industry.
Additionally, Evertz presents its innovative 670WSP channelizer, empowering operators with flexible spectrum management to significantly reduce IP bandwidth requirements, fostering dynamic workflows. Complementing these advances, Evertz highlighted integration with its MAGNUM-OS orchestration system, which streamlines complex digital IF or hybrid workflows and supports end-to-end control over IP networks, further solidifying the company's position as an industry leader in satellite ground infrastructure digitization. (Evertz Microsystems Ltd., 5292 John Lucas Dr., Burlington, Ontario, Canada L7L 5Z9, evertz.com)
FOR-A
FOR-A MixBoard
FOR-A introduces the groundbreaking FOR-A MixBoard, powered by ClassX, a revolutionary software-based production switcher set to transform broadcast and live production. Launched at IBC2024, the FOR-A MixBoard (formerly known as SOAR-A SWITCH) combines cutting-edge features with an intuitive interface, meeting the evolving needs of modern production environments.
The core innovation of FOR-A MixBoard is its ability to compose unlimited layers, providing unprecedented creative freedom. This allows for complex, multilayered compositions previously challenging or impossible with traditional hardware switchers.
Available in two variants, a basic model (eight inputs, two outputs) and an advanced model (16 inputs, four independent outputs), the MixBoard adapts to various production scales. Its versatility in input types, supporting SDI, NDI, media and live streams, enables seamless integration into hybrid production setups.
The software introduces innovative features enhancing production capabilities. The Media Engine enables seamless playout of video clips and graphics, while the Composer facilitates complex layouts and transitions. The Video Input Switcher provides enhanced source management, offering greater control during live productions.
FOR-A MixBoard's advanced DVE capabilities enable complex transitions and effects, elevating production quality. Customisable shaders offer unique visual effects and colour correction options. The built-in multiviewer composition tool allows operators to create custom layouts for monitoring multiple sources simultaneously.
The software-defined architecture offers unparalleled scalability and futureproofing. As production needs evolve, the system can be easily updated without extensive hardware replacements, providing a cost-effective solution for broadcasters.
With its user-friendly interface, operators can create intricate compositions and transitions efficiently, allowing production teams to focus more on creativity than technical operations. This aligns with industry efforts towards sustainable production practices by reducing reliance on hardware and potentially lowering energy consumption and electronic waste.
The collaboration between FOR-A and ClassX in developing the FOR-A MixBoard demonstrates a commitment to leveraging cross-industry expertise. This partnership combines FOR-A's renowned reliability with ClassX's innovative software technology.
FOR-A MixBoard, powered by ClassX, represents a significant advancement in production switcher technology. Its unlimited layering capabilities, versatile input support, advanced features and software-defined architecture position it as a transformative tool in the broadcast industry. As production workflows evolve, FOR-A MixBoard offers the flexibility, scalability and innovation needed to meet current demands while anticipating future needs. It empowers broadcasters and live production teams to create more dynamic, engaging content with greater efficiency and creative freedom.
Frequency
Frequency Studio
Frequency is revolutionizing the way streaming television channels are created, managed, and distributed, expanding its industry-leading Studio platform with the launch of new cloud-based innovations at IBC2024. These include the introduction of its Fusion and Program Templates automation workflows — all designed to increase flexibility, reduce cost and increase distribution opportunities for streaming television channels.
FUSION: REVOLUTIONIZING PROGRAMMING CONTROL
Frequency's Fusion service and new workflows offer unprecedented control over programming by seamlessly integrating traditional broadcast workflows with Frequency Studio's industry-leading channel creation capabilities. This allows content providers to blend externally-originated source streams with Frequency Studio's robust programming capabilities, creating a dynamic, hybrid programming schedule. Whether for regular programming or special events like breaking news, Fusion provides the flexibility to override and customize streams without disrupting upstream operations.
By utilizing Fusion, broadcasters can effortlessly incorporate externally scheduled channels into Studio, and alter the content in a stream with live, video on-demand or automated elements. This feature simplifies the transition for traditional broadcasters into the streaming world, and empowers them to create unique, engaging viewer experiences. The capability to integrate with external scheduling protocols, like the Broadcast Exchange Format (BXF), further enhances the workflow, making cloud-based playout more accessible than ever before.
PROGRAM TEMPLATES: AUTOMATED AND DYNAMICALLY OPTIMIZED SCHEDULE CREATION
Program Templates offer a powerful way to automate the creation of linear television schedules. By incorporating machine intelligence and reusable patterns, templates drastically reduce the manual effort required to generate programming. Users can configure parameters such as ad pods, interstitials and time fillers to tailor the viewing experience to their audience, and to modify the programming dynamically.
Metrics from initial customers show a decrease in scheduling time of up to 95%, slashing the effort required to create and manage schedules. This automation is particularly valuable when new videos are ingested, automatically generating corresponding programs based on predefined logic and cue points. This efficiency not only saves time but also ensures that programming is always current and relevant, aligning with audience expectations and business goals.
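As a rough illustration of the template-driven scheduling the paragraphs above describe, here is a minimal Python sketch. All class, field and function names are hypothetical stand-ins, not Frequency's actual API; it shows only the idea of deriving full program slots (content plus ad pods and filler) from a reusable template.

```python
from dataclasses import dataclass

@dataclass
class ProgramTemplate:
    """A reusable pattern for building a program block (hypothetical model)."""
    ad_pod_secs: int   # length of each ad pod
    ad_pods: int       # number of ad pods per program
    filler_secs: int   # time filler appended to round out the slot

    def build_program(self, title: str, content_secs: int) -> dict:
        # Total slot = content + ads + filler, derived from the template
        total = content_secs + self.ad_pods * self.ad_pod_secs + self.filler_secs
        return {"title": title, "duration_secs": total,
                "ad_pods": self.ad_pods, "content_secs": content_secs}

def generate_schedule(template: ProgramTemplate, videos) -> list:
    """Auto-generate one program per newly ingested video."""
    return [template.build_program(title, secs) for title, secs in videos]

tpl = ProgramTemplate(ad_pod_secs=120, ad_pods=2, filler_secs=60)
schedule = generate_schedule(tpl, [("Episode 1", 1320), ("Episode 2", 1260)])
# Each program's duration now includes ads and filler per the template
```

Changing a single template value (say, `ad_pods`) and regenerating would re-program every derived slot at once, which is the kind of dynamic re-programming the next section describes.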
DYNAMIC RE-PROGRAMMING: INSTANT ADAPTABILITY TO CHANGING NEEDS
The Re-Programming feature augments scheduling flexibility by enabling dynamic adjustments to existing channel lineups. Through Program Templates, broadcasters can make modifications to ad loads, interstitials and program durations within seconds. This is invaluable for maximizing viewer engagement and revenue by balancing advertising fill rates against programming.
KEY USE CASES INCLUDE:
• Ad Load Changes: Adjusting the number and duration of ads to optimize revenue without compromising viewer experience.
• Experience Changes: Modifying interstitials, such as promos or channel bumpers, to enhance viewer engagement.
• Duration Changes: Tweaking program lengths to align with clock-based or flexible scheduling needs.
DRIVING INNOVATION AT IBC2024
These new capabilities are transformative for the streaming television industry, offering unparalleled flexibility, automation, and control. They allow broadcasters to seamlessly blend traditional and modern workflows, improve viewer experiences and rapidly adapt to market demands. By significantly reducing the time and effort required to create, manage and modify streaming schedules, Frequency is setting a new standard in broadcast technology with these additions to Studio.
GatesAir
IMTX 2+0 Intra-Mast System
GatesAir's new IMTX 2+0 system adds a two-channel option to the popular Maxiva IMTX Intra-Mast series, offering the same benefits as its six- and eight-channel siblings with some clever twists. The IMTX 2+0 Intra-Mast solution houses two compact, low-power transmission modules in a shared protective infrastructure. The modules can be flexibly configured to serve varied low-power over-the-air TV services with support for most major DTV broadcast standards. Its chassis protects all technology built into the system to ensure consistent and reliable service both indoors and outdoors, where the system is built for clean integration within towers and other supportive structures.
All three IMTX systems include a stacked array of transmission modules securely built into a chassis and pre-configured to provide TV service or fill coverage gaps where signal penetration is a challenge. IMTX systems are especially useful for populated urban areas with tight housing densities, or to establish a localized transmission point to serve rural or underserved communities.
The IMTX 2+0 substantially reduces the form factor to a sleek 13.35 x 6.7 x 12.58 inches, simplifying its integration into almost any structure. Its compact design, solid structural integrity and flexible connectivity points translate to clean installations inside hollow masts or traditional tower structures. Its sturdiness and unobtrusive presence also work well inside an RF shelter in desktop form. From an efficiency standpoint, tower structures provide complete protection from the outside environment while allowing heat dissipation via convection and forced air cooling. As in previous models, the IMTX 2+0 offers vertical air convection flow within the chassis, complemented by a series of small fans. A DVB-S/S2 satellite receiver option is available, as well as GbE (TS over IP), ASI, T2MI and SMPTE 310M inputs.
Developed by GatesAir Europe, the IMTX 2+0 differs from its siblings only in capacity. All additional system design and technology elements instrumental to broadcast operations have been carried forward, providing the same performance, efficiency and maintenance benefits for broadcasters.
At the same time, GatesAir's engineering team has enhanced the technology infrastructure to support new applications. The IMTX 2+0 is the first IMTX system with a built-in N+1 architecture that solidifies on-air protection for broadcasters through a parallel redundancy layer. N+1 systems, especially popular in Europe, support automatic failover to a secondary system that provides the same functionality and performance. The IMTX 2+0 design is built for redundancy across channels, with automatic failover between a main and backup chassis through a single, simple connection between the corresponding modules.
The initial rollout will serve UHF channels, with VHF and DAB radio versions to follow as market demands dictate. All IMTX transmission modules can deliver up to 70W of average DTV power.
GB Labs
NebulaNAS
In today's production workflows, creative teams work from anywhere, producers and clients want to view content remotely, and media files must always be protected. Keeping track of the ever-growing amount of content is complicated.
Creative teams must be able to access content at any time and collaborate regardless of location. They need a secure, flexible, collaborative and global storage platform that excels in complex environments with distributed workers who demand speed, performance, and the "bendability" to accommodate differences from one production to another.
Simply put, most on-premise storage does not effectively support the creative team, meet IT requirements, or help the business meet its and its clients' needs.
NEBULANAS SUPPORTS THE CREATIVE TEAM, THE IT TEAM, THE BUSINESS AND THE CLIENT
NebulaNAS is a transformational cloud storage solution from GB Labs, developed from the ground up for media and production professionals. Regardless of location, users will appreciate its NAS-like ease of use. Editors and artists can see files immediately, access data quickly, and collaborate globally as if they were in the same location. Producers will appreciate the detailed analytics and audit reporting maintained by the system. Organizations can even add local storage to provide
media acceleration and cache-level support for clusters of users. IT managers will approve of its security and resiliency, including 256-bit AES file encryption, user access controls, file locking and distributed use of cloud servers, which spreads data across thousands of servers. Due to these features, NebulaNAS provides 11 nines of reliability. In short, NebulaNAS provides cloud flexibility, on-prem-like performance, global access and collaboration, and enterprise security.
The first cloud storage solution focused exclusively on media production under today's demanding requirements, NebulaNAS delivers four key benefits:
• Media Centricity: Designed by media professionals for the most demanding workflows, NebulaNAS supports all creative and management tools used today — at any frame rate, file size, format or bit rate.
• Cloud Flexibility: NebulaNAS operates across a distributed cloud architecture, providing added security and operational uptime and improving performance by better managing data across thousands of distributed servers. Users will benefit from the solution's scalability, the flexibility to add local and NVMe caching storage, and the ability to mount drives seamlessly, mirroring existing work styles.
• Local Performance: To end users, NebulaNAS operates as if it is on-prem storage, but with several advantages, including the ability to interact with the storage from anywhere in the world, collaboration in near real time with other users regardless of their location, and a minimal learning curve. Applications like online editing fly because users access only the bits of files they actually need instead of an entire file all at once.
• Enterprise Security: In addition to the file management system's distributed nature and 256-bit AES encryption, NebulaNAS maintains file access and credential control regardless of the user's location. The audit management system maintains historical data showing who interacted with which files, when and from what location.
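The "Local Performance" point above rests on byte-range access: fetching only the portion of a file an application actually touches, rather than the entire asset. The generic Python sketch below illustrates that access pattern only; it is not GB Labs code, and the file here is a local stand-in for remote storage.

```python
import os
import tempfile

def read_range(path: str, offset: int, length: int) -> bytes:
    """Read only the requested byte range, not the whole file --
    the access pattern that keeps remote online editing responsive."""
    with open(path, "rb") as f:
        f.seek(offset)          # jump straight to the needed region
        return f.read(length)   # transfer just `length` bytes

# Demo: a 1 MB 'media file' stand-in; only 4 bytes are read, not all 1 MB.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 1_000_000)
chunk = read_range(tmp.name, 500_000, 4)
os.unlink(tmp.name)  # clean up the demo file
```

Over a wide-area link, the difference between moving 4 bytes and 1 MB per scrub operation is what separates "fly" from "unusable," which is why ranged access matters for remote editing.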
Genelec
UNIO Personal Reference Monitoring Solution
The UNIO Personal Reference Monitoring Solution provides the most seamless bridge ever between professional loudspeaker and personal headphone monitoring. This allows the user to create accurate audio mixes that translate reliably between their studio monitors and headphones for a smooth, uninterrupted workflow — wherever they choose to work.
Originally introduced at IBC2023, the UNIO monitoring ecosystem brings together the power of Genelec's Smart Active Monitors, GLM calibration software, and Aural ID personal headphone technology, all under the tactile hardware control of the 9320A reference controller, and managed via the user's MyGenelec account.
At IBC2024, the UNIO ecosystem welcomed two new additions — the 8550A professional reference headphones and Aural ID V2.0 binaural headphone monitoring technology, which together strengthen UNIO by offering the most accurate and truthful headphone listening experience ever.
The new 8550A headphones, the 9320A controller and its reference measurement microphone together make up the new UNIO Personal Reference Monitoring Solution. Add the headphone calibration features available within Genelec GLM software and the optional Aural ID V2.0 binaural headphone technology, and users now enjoy the most trustworthy reference monitoring system ever, even in challenging acoustic environments.
The innovative active design of the new 8550A headphones allows them to be individually factory-calibrated to the headphone output of their matching 9320A controller. This active principle of accurately matching the 8550A to the headphone stage of the 9320A enables exceptional linearity, acoustic precision and neutral sound character, and mirrors the kind of tailored precision that Genelec Smart Active Monitors provide for in-room loudspeaker monitoring environments.
The closed, circumaural 8550A provides exceptional sound isolation and comfort for all sizes of head and ears with a durable, adjustable design and replaceable component parts for a sustainable, extended product lifetime. The 8550A's dynamic design features 40 mm transducers and neodymium
magnets to deliver a 15 Hz to 20 kHz frequency response, +/-0.3 dB level matching, and an impressive 119 dB of short-term SPL via the 9320A controller.
With the calibration file for the 8550A headphones stored within the matching 9320A controller, further personal optimisation is available via Genelec's GLM 5 calibration software, including slope adjustment, parametric EQ, and L/R level balancing. Then, for the ultimate personal headphone monitoring experience, the optional Aural ID V2.0 models the user's own head and upper torso features to calculate their unique personal HRTF, delivering headphone mixes with externalised presentation, including precise azimuth and elevation, and producing a sense of space and direction that closely resembles in-room loudspeaker monitoring.
So, UNIO now effectively offers three levels of reference monitoring. Those seeking the finest in-room loudspeaker monitoring can enjoy the combination of Smart Active Monitors, GLM and the 9320A, while users who also seek accurate headphone monitoring will embrace the arrival of the 8550As. But, for those seeking the ultimate combination of in-room and personal headphone monitoring, the UNIO Personal Reference Monitoring Solution and Aural ID V2.0 represent the most accurate and portable monitoring system ever produced.
Grass Valley
LDX 110 and LDX C110
The new LDX 110 and LDX C110 cameras bring the advanced technology of the LDX 100 Series to the entry-level market, setting a new standard for quality and affordability in the media and entertainment industry. Grass Valley introduces these cameras as game-changers for budget-conscious productions. The LDX 110 and its compact counterpart, the LDX C110, are native UHD cameras equipped with cutting-edge features designed for a wide range of applications, including sports, studio and entertainment productions.
Both models feature the Xenios UHD 2/3-inch CMOS image sensor, renowned for its global shutter technology, which ensures flawless image capture in any operating mode. This sensor, also used in other LDX 100 Series cameras, delivers exceptional image quality. Unlike previous versions of the LDX 100 series, the LDX 110 and C110 use a single imager instead of three, resulting in significant cost savings. This innovation allows Grass Valley to offer the same exceptional quality as the higher-end models at a more accessible price point.
The LDX 110 and LDX C110 support native HDR and WCG
(Wide Color Gamut) operation, including PQ, HLG and S-Log3, with custom LUT processing in the camera head for precise HDR-to-SDR conversion — features that are not available in any other camera system on the market, regardless of price point. A unique feature of the compact LDX C110 is its XCU connectivity, previously found only in the higher-end LDX C135/C150 models. This addition enhances the camera's versatility in a range of production environments.
The LDX 110 series also introduces NFC functionality and the Grass Valley Scanner app, which provide enhanced option management and diagnostics, simplifying setup and operation. These features, combined with the cameras' robust build and cutting-edge technology, ensure they deliver the best value in the market.
In today's fast-evolving media landscape, customers need high-performance HD solutions that not only meet current demands but are also future-proof. The LDX 110 and C110 excel in this regard, offering stunning HD/SDR performance now and a seamless upgrade path to native UHD/HDR when needed. These cameras are the ideal choice for productions seeking reliability, quality and cost-effectiveness.
The versatile LDX 110 Series supports video formats ranging from 1080i and 1080p to native UHD, making it a highly adaptable solution for various production needs. Both models are fully compatible with the current XCU UXF camera control stations and all C2IP-based camera control solutions, including Creative Grading and Creative Grading X. Additionally, the LDX 110 integrates smoothly with existing LDX 100 Series camera viewfinders and accessories, ensuring effortless incorporation into established workflows.
The LDX 110 and LDX C110 were showcased for the first time at IBC2024, highlighting their unparalleled blend of quality, reliability and cost-efficiency. With these new models, Grass Valley continues to lead the way in providing innovative solutions that meet the evolving needs of the media and entertainment industry.
Grass Valley
LDX 135 RF
The LDX 135 RF is the latest advancement in camera technology, delivering the same exceptional performance and reliability as its wired counterparts, now with integrated wireless transmission via RF or 5G bonded cellular. Designed to meet the demands of modern broadcast and production environments, this innovative camera offers unmatched flexibility without sacrificing quality.
A key feature of the LDX 135 RF is its ability to deliver high-quality video, audio and camera control wirelessly, thanks to seamless integration with RF and 5G technology. This wireless capability enhances mobility in dynamic settings like live events, sports broadcasts and remote productions. Developed in partnership with leading RF manufacturers, the integrated design streamlines the overall setup while avoiding the complexities typically associated with wireless integration.
The RF edition of the LDX 135 is engineered with a shorter housing, creating space for the transmitter kit while maintaining a balanced, stable operation, even when shoulder-mounted. Despite these design changes, the LDX 135 RF maintains its exceptional performance, supporting all single-speed video modes up to UHD with HDR. It also features a global shutter and three Xenios imagers, ensuring outstanding image quality in challenging lighting conditions.
With a high sensitivity of F11 at 2000 lux, the LDX 135 RF excels in low-light environments, capturing detailed, vivid images without the need for extensive lighting. This makes it particularly valuable in live productions where lighting conditions can vary widely.
More than just a camera, the LDX 135 RF is a comprehensive solution for modern broadcast and production needs. Its seamless integration with all third-party RF systems and 5G transceivers makes it a versatile choice for any production environment. The optimized rear mechanical interface, developed with top RF manufacturers, further simplifies the integration process.
In summary, the LDX 135 RF is a forward-thinking camera solution that combines cutting-edge wireless technology with the proven reliability of the LDX series. Ideal for live events, remote productions and any scenario where flexibility and mobility are essential, the LDX 135 RF is a powerful, future-proof solution ready to meet the challenges of today's broadcast industry.
Grass Valley
Maverik X
Maverik X, a native AMPP (Agile Media Processing Platform) production switcher, is meticulously engineered to meet diverse user requirements while maintaining strict control over production costs. This innovative solution has been conceived from the ground up as a no-compromise, microservices-based software platform. Designed to operate seamlessly on commercial off-the-shelf (COTS) servers, whether deployed on-premises or in the cloud, Maverik X is an ideal choice for modern production environments seeking flexibility and efficiency.
At its core, Maverik X is a highly flexible and scalable production switcher capable of expanding from very small to extremely large setups, depending on the needs of the production. Its scalability is underpinned by the robust AMPP platform, which further enhances Maverik X's versatility. This platform allows various other Software as a Service (SaaS) applications to run on the same COTS servers, whether on-premises or in the cloud. These applications can be activated on a short-term or subscription basis, providing unprecedented flexibility in configuring complete production solutions. This adaptability empowers users to configure and activate production setups as needed, reconfigure them for different requirements, or deactivate them when no longer necessary.
This dynamic approach not only reduces capital expenditures (capex) but also enables production costs to be allocated directly to specific productions as operational expenditures (opex). This shift from capex to opex offers a more efficient and cost-effective financial model, allowing organizations to better manage their budgets and scale their operations according to demand.
Maverik X supports a wide range of control options, including software-based GUI controls, third-party hardware control panels, and the familiar tactile modular hardware control panels of the Maverik series. This multi-interface approach caters to users' desire for familiar operating sequences and user interfaces while also providing the flexibility to use additional operating options for ad hoc applications without requiring dedicated hardware. This ensures that Maverik X can seamlessly integrate into existing workflows while offering the potential for innovation and customization.
By accommodating various control interfaces and deployment configurations, Maverik X can adapt to the specific needs of different production environments, making it a versatile and powerful tool for content creation. Its ability to integrate seamlessly with other SaaS applications and COTS platforms further enhances its value, providing users with a comprehensive production solution that can be scaled and customized as needed.
In summary, Maverik X is more than just a production switcher; it is a central element of a complete, scalable and cost-effective production solution. With its flexible deployment options, support for multiple interfaces, and integration with the AMPP platform, Maverik X empowers users to create high-quality content with greater efficiency and control. Whether used on-premises, in the cloud or in a hybrid configuration, Maverik X is poised to revolutionize the way productions are managed and executed, offering unprecedented flexibility, scalability and cost savings.
Haivision
Haivision StreamHub and MoJoPro on AWS
Haivision has recently announced the availability of Haivision StreamHub and Haivision MoJoPro on the AWS Marketplace, bringing an easy and flexible pay-as-you-go option for mobile journalists, video professionals and broadcasters to cover breaking news, sports and live events with HD video from smartphones.
Whether a user needs a single camera for a few hours, or multiple cameras spanning multiple days, Haivision's PAYG solution on AWS includes both the StreamHub and MoJoPro licenses and helps broadcasts go live more quickly and easily than before.
Haivision StreamHub is a versatile HEVC and H.264 video receiver, decoder, transcoder and IP gateway bridging 5G and bonded cellular to SDI, NDI, ST 2110, SRT, RTMP and other IP protocols for on-premise and cloud-based live broadcast workflows.
Designed to meet the requirements of live sports and news broadcasters, StreamHub receives and decodes multicamera video streams transported over mobile networks, the internet and the cloud at very low latency. Each StreamHub can receive up to 16 concurrent SST and IP streams from Haivision mobile transmitters, encoders and third-party sources.
Optimized for remote production workflows, StreamHub can establish bidirectional audio communications with production, send IFB audio to field talent, and transmit video returns. StreamHub's intuitive UI makes it easy to manage live audio, video and configure different types of multiviewers. The Data Bridge feature can remotely control PTZ cameras and other equipment connected to a field unit via IP.
With advanced transcoding and IP gateway features, StreamHub can convert between codecs, resolutions and frame rates. It supports multiple protocols, enabling broadcasters to receive live video from the field and transcode and distribute it over IP networks and the cloud.
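Conceptually, the gateway role described above is a fan-out: one contribution feed in, several protocol endpoints out. The short Python sketch below illustrates only that routing idea; the endpoint names and URLs are hypothetical examples, and this is in no way Haivision's actual API.

```python
# Hypothetical output endpoints, one per supported protocol family.
OUTPUT_PROTOCOLS = {
    "srt":  "srt://cloud-switcher.example.com:9000",
    "ndi":  "ndi://studio-network/CAM1",
    "rtmp": "rtmp://cdn.example.com/live/stream1",
}

def fan_out(input_stream: str, targets: list) -> list:
    """Route one incoming contribution feed to the selected output endpoints."""
    routes = []
    for proto in targets:
        if proto not in OUTPUT_PROTOCOLS:
            raise ValueError(f"unsupported output protocol: {proto}")
        routes.append((input_stream, OUTPUT_PROTOCOLS[proto]))
    return routes

# One smartphone feed, delivered simultaneously to a cloud switcher
# over SRT and to the studio network over NDI.
routes = fan_out("mojo-cam-01", ["srt", "ndi"])
```

In the real product this fan-out happens on decoded, optionally transcoded media rather than on URL strings, but the one-to-many mapping is the essence of the gateway function.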
As part of a StreamHub workflow, Haivision MoJoPro is a free professional camera app that allows anyone with a smartphone to contribute high-quality content to live broadcast production workflows via StreamHub. Leveraged extensively for breaking news and for some of the biggest sporting events, MoJoPro was recently used for the live production of the Opening Ceremony and sailing competitions
at the Paris Games.
MoJoPro can bond a phone's 4G or 5G modem with Wi-Fi or Mi-Fi for more bandwidth and reliability using Haivision's Emmy Award-winning SST technology. Designed to be integrated with live broadcast production workflows, it can also receive a return feed for real-time communication with the production studio.
FEATURES OF THE STREAMHUB AND MOJOPRO SOLUTION INCLUDE:
• Cloud Production: StreamHub receives video from MoJoPro and outputs to SRT, NDI, RTMP and other IP protocols used by cloud-based platforms.
• On-prem Workflows: Cloud instances of StreamHub can act as a gateway and relay live video to on-prem appliances leveraging SDI, ST 2110 and NDI.
• Unrivaled Flexibility: Users can choose between four, eight or 16 inputs and outputs and each can receive smartphone feeds from MoJoPro. After a user spins up StreamHub in the cloud on AWS, all they need to do is determine the IP address and enter it into the MoJoPro app to begin streaming.
Hammerspace
Hammerspace
Hammerspace is a software solution designed to accelerate production pipelines by bridging incompatible storage silos, to provide extreme high-performance global file access and automated data orchestration across on-premises and/or cloud storage and compute resources from any vendor.
With Hammerspace, companies can automate their production workflows across existing on-premises storage/compute and cloud providers and regions; moreover, Hammerspace's new Hyperscale NAS architecture lets them dramatically increase the performance of their existing storage to support rendering and other high-performance use cases.
From a user's perspective, all files for which they have permissions are accessible via standard protocols on their computers. No client software is needed, and the file/folder structure on the desktop is the same as before, whether accessed via SMB, NFS or S3. The difference is that with Hammerspace, users see a cross-platform global namespace that seamlessly spans all storage silos, data center locations and clouds, rather than having to navigate multiple mount points, gateways and separate access protocols. In addition, the days of shuffling file copies between sites are gone; everyone, everywhere is working from the same files.
Customers such as Animal Logic, Mathematic Studios, Jellyfish Pictures, and others have dramatically accelerated
production output by enabling real-time collaboration globally across multiple silos, sites and cloud resources. At Animal Logic, the ability to share content across sites and clouds globally without proliferation of file copies is a game-changer.
Animal Logic's initial use case for Hammerspace was to create a centralized repository, accessible by all sites in AsiaPac, Europe and the Americas, for the many software tools their teams need. This reduced headaches for admins, who previously had to keep copies of the same files in sync between locations.
For users it means that everyone in all Animal Logic offices globally has direct access to the latest tools via the shared Global File System. This also proved much better than using AWS as the shared repository, since Hammerspace eliminates the cost and complexity of wrangling copies between Amazon regions by unifying access globally across all geographies.
Additionally, the storage acceleration capabilities of Hammerspace's Hyperscale NAS architecture proved to be an unforeseen benefit, improving the performance of an existing scale-out NAS platform. Although this was not the original problem Animal Logic purchased Hammerspace to solve, testing proved that for single-threaded, latency-sensitive workloads, using Hammerspace with its native standards-based pNFSv4.2 integration massively accelerated performance over the native capabilities of the existing storage.
This echoes results seen in other M&E environments, as well as in AI workloads at massive scales. This includes Meta, who are using Hammerspace and the pNFSv4.2 Hyperscale NAS architecture to power its massive AI Research SuperCluster, moving data at 12 TBps between 1,000 commodity storage nodes and a cluster of 24,000 GPUs.
With Hammerspace, broadcast and film customers can now unify workflows globally. Native integration with tools such as Autodesk ShotGrid and Flame enables artists to continue using tools they are accustomed to, with Hammerspace transparently stitching together workflows behind the scenes across multiple silos, sites and clouds.
Harmonic
XOS Advanced Media Processor
Harmonic's next-generation software-based XOS
Advanced Media Processor features a unique cloud-native software foundation, a complete playout feature set and advanced media processing technology, enabling operators to efficiently deliver video streaming and broadcast services. Through AI-powered video compression and quality optimization technology, the XOS media processor allows service providers to reliably deliver crystal-clear video quality at the lowest possible bitrates.
BOOSTING EFFICIENCY FOR VIDEO STREAMING AND BROADCAST
Efficiency is paramount in today's cost-conscious video market. The new XOS media processor addresses this critical requirement by enabling 50% more channel encoding and transcoding than the previous generation. It also features a new encoder stress performance gauge that measures, in real time, the CPU resource consumption and video quality of channels encoded by XOS. This innovative performance gauge allows broadcasters and video service providers to maximize the number of channels encoded per XOS processor while ensuring exceptional video quality.
In addition, the new XOS media processor provides 33% reduced power consumption per channel (8 watts per HD channel), significantly lowering operational costs.
By offering a substantial reduction in total cost of ownership — including lower costs per channel, reduced capex investment and minimized power consumption — the XOS media processor empowers broadcasters and video service providers to achieve greater operational efficiency with less equipment, cables, rack space and cooling requirements.
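To put the per-channel figure in context, a back-of-envelope calculation shows the scale of the saving. The channel count and electricity tariff below are assumptions for illustration, not Harmonic's numbers; only the 8 W per HD channel and 33% reduction come from the text above.

```python
# Illustrative arithmetic only: tariff and channel count are placeholders.
WATTS_PER_HD_CHANNEL = 8                             # stated per-channel draw
PREVIOUS_WATTS = WATTS_PER_HD_CHANNEL / (1 - 0.33)   # ~33% higher previously
CHANNELS = 100                                       # assumed head-end size
PRICE_PER_KWH_EUR = 0.25                             # assumed tariff

def annual_energy_kwh(watts, channels, hours=24 * 365):
    """Annual energy use in kWh for a fleet of channels running 24/7."""
    return watts * channels * hours / 1000

saved_kwh = annual_energy_kwh(PREVIOUS_WATTS - WATTS_PER_HD_CHANNEL, CHANNELS)
saved_eur = saved_kwh * PRICE_PER_KWH_EUR
# Roughly 3,450 kWh (about EUR 860/year) saved for 100 HD channels
# under these assumptions, before counting cooling and rack space.
```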
SIMPLIFYING THE TRANSITION TO IP
The broadcast industry's shift to IP-based video delivery is eased through DMS X, a new SaaS-based version of the company's Distribution Management System for the XOS media processor. The DMS X features industry-first control over playout workflows, addressing evolving primary distribution
requirements. Running on the public cloud, DMS X elevates primary distribution, enabling broadcasters and content providers to securely distribute video content over satellite, managed IP and open internet delivery networks. With DMS X, content providers can control 10,000-plus XOS media processors from a centralized user interface.
DMS X SaaS for XOS features centralized playout management for ad insertion and program localization, HTML graphic insertion, edge playout management, the ability to schedule blackout events, and remote device and service configuration and monitoring, all with unparalleled scalability. DMS X SaaS provides these capabilities for any network and any content format up to 4K HDR.
LEADING INNOVATION IN MEDIA PROCESSING
Harmonic's XOS media processor deserves to win the IBC Best of Show Award for bringing significant advantages to video streaming and broadcast delivery, including 50% more channel encoding and transcoding, 33% reduced power consumption per channel, industry-first control over playout and groundbreaking distribution management capabilities to streamline the transition to IP.
Imaginario AI
Imaginario AI
Imaginario AI is a multimodal video curation platform and API that has been trained to interpret visual video, dialogue and audio close to human understanding.
Customers such as Warner Bros. Discovery and Universal Pictures are finding 70%+ time-efficiency gains using Imaginario AI to search and discover scenes, chapter content, clip with a single click, and export entire rough cuts and compilations to NLE systems, all without rewatching long-form content. This is significantly boosting efficiency in content repurposing for new projects, social cuts, on-set dailies curation, ideation and compliance editing.
IT'S MORE EFFICIENT AND COST-EFFECTIVE
Imaginario AI efficiently analyzes and integrates three modalities — visuals, audio and speech — into a single system, making it more precise and cost-effective than traditional enterprise MAM and AI labeling platforms, which typically require separate models for each indexing type. This architecture also makes Imaginario AI more efficient, flexible and accurate when it comes to understanding and managing different genres and use cases.
HIGHLIGHT FEATURES INCLUDE:
• Optimize media assets with advanced search capabilities for vision, speech, and sounds, no metadata or labels required.
• Effortlessly discover common topics across multiple videos
and create custom clips and rough cuts with a highlight-to-edit feature. Fine-tune your clips with easy-to-use adjustment handles.
• The one-click clipping tool simplifies editing long interviews and podcasts when repurposing content for TikTok, YouTube Shorts and IG Reels.
• Organize your assets into easily navigable chapters with the Collections feature.
• Plus, a user-friendly social media packaging and branding tool streamlines the process in just a few clicks.
Imaginario AI vastly simplifies and enhances the content creation process, empowering postproduction and marketing teams of all sizes to effortlessly distribute more content to a wider audience and better monetize their assets.
Unlike many AI companies still in development, Imaginario AI is actively helping Hollywood studios and production companies streamline their workflows, making this technology accessible to media companies of all sizes, not just enterprises.
Imaginario AI's video indexing model is notably more GPU-efficient, accurate, and cost-effective than traditional Media Asset Management (MAM) systems and AI labeling platforms. By consolidating multiple functions into a single model, it reduces both resource usage and costs, making it a practical choice for studios aiming to improve their content management processes.
The platform's intuitive interface enhances user experience with features like multimodal library search, chapterization, rough-cut automation and easy export of timeline sequences. These tools simplify and accelerate content preparation and repurposing tasks, such as creating on-set dailies, compliance editing and generating clips for social media.
IMAX
IMAX StreamSmart On-Air
The number one video technology challenge for streaming businesses is cost control, and the biggest cost for delivering video is Content Delivery Network (CDN) distribution charges. Designed to significantly reduce live streaming distribution costs, StreamSmart On-Air delivers a 15–25% bandwidth savings, translating into millions of Euros in reduced distribution costs and a better end-user experience.
The IMAX approach to reducing live streaming distribution costs is unique. StreamSmart On-Air employs IMAX ViewerScore (XVS), a patented perceptual quality metric based on IMAX VisionScience that measures video quality in real time, based on human vision, to ensure bitrate reductions occur only when they are visually imperceptible to viewers. StreamSmart On-Air, an API-based optimization software, leverages existing encoding and packaging workflows and uses an AI-driven approach to dynamically select optimal segments from ABR/HLS/standard streams and optimize bitrates in real time.
IMAX XVS is the only video quality metric that maps to the human visual system, making it the most accurate and complete measure of how humans perceive video quality. The Emmy Award-winning technology has a >90% correlation to Mean Opinion Score (MOS), verified across various video datasets. It sees quality differences that other metrics can't and works for all types of content, including live, VOD, HDR and 4K.
StreamSmart On-Air takes the source content, which is encoded to the top profile using the provided encoder configuration. In parallel, StreamSmart On-Air generates a set of alternate, lower-bitrate profiles using advanced machine learning. The software then analyzes and quantifies quality for every video segment from all the profiles and selects the lowest-bitrate segments that maintain the same video quality, guaranteeing an optimal viewer experience while maximizing bitrate savings. It then sends an optimized manifest file to the origin. Because the implementation is based on manifest manipulation, it is codec-, encoder- and configuration-agnostic.
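The per-segment selection step described above can be sketched in a few lines. The profile names, bitrates, quality scores and the two-point quality floor below are invented for illustration; they do not reflect IMAX's actual XVS scale or selection algorithm.

```python
def select_segments(profiles, quality_floor):
    """For each segment index, pick the lowest-bitrate rendition whose
    perceptual quality score stays within `quality_floor` points of the
    top profile's score for that segment.

    `profiles` maps a profile name to a list of (bitrate_kbps, quality)
    tuples, one per segment. All values here are illustrative.
    """
    top = profiles["top"]  # the full-bitrate encode
    chosen = []
    for i in range(len(top)):
        floor = top[i][1] - quality_floor
        candidates = [(name, br, q)
                      for name, segs in profiles.items()
                      for br, q in [segs[i]]
                      if q >= floor]
        # Among acceptable renditions, take the cheapest in bits.
        chosen.append(min(candidates, key=lambda c: c[1]))
    return chosen

profiles = {
    "top":  [(8000, 95), (8000, 96)],
    "alt1": [(6000, 94), (6000, 90)],
    "alt2": [(4500, 93), (4500, 85)],
}
# With a 2-point floor, segment 0 can drop to 4500 kbps;
# segment 1 must stay on the top profile to hold quality.
picks = select_segments(profiles, quality_floor=2)
```

The optimized manifest would then reference each chosen rendition's segment, which is why the approach stays codec- and encoder-agnostic.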
Importantly, StreamSmart has a single-ended (no-reference) component to its quality measurement, which is AI-based
and designed using perceptually motivated Deep Neural Networks. The goal of the no-reference metric is to predict a full-reference quality score without relying on a source file for comparison. IMAX built a database of tens of millions of sources and designed a machine-learning approach that passes pixels directly to a deep-learning model. This, together with the fact that it runs in real time at the speed of encoding and is computationally efficient, makes it possible to use on live workflows, where latency is a consideration and there is no source file against which to quantify quality.
Streaming businesses are facing enormous challenges this year. Subscriber growth is slowing, and churn is increasing, both impacting bottom lines. The focus for streaming businesses right now is capturing new revenue opportunities while significantly cutting costs. IMAX StreamSmart On-Air, with IMAX ViewerScore, offers seamless integration with existing workflows and saves 15–25% on distribution costs without compromising video quality. StreamSmart On-Air's pricing is tied to a fraction of the cost savings, ensuring a cost-effective, risk-free solution that guarantees positive ROI.
INFiLED
STUDIO Series
No two modern broadcast sets are alike, and neither should their technology be. This is why INFiLED created the STUDIO Series: a versatile, high-performance xR and VP LED display system that enables creative freedom and provides high contrast, brightness, colour accuracy and an ultra-black anti-reflective surface.
The series boasts a modular system allowing compatibility between various product series to match specific production requirements: Ceiling (STUDIO AR Series), Background (STUDIO DBmk2 and STUDIO Xmk2 Series) and Floor (STUDIO DFII).
As the media landscape evolves, content creators are increasingly relying on advanced LED display solutions to craft captivating experiences. Recognising this need, INFiLED has developed the STUDIO Series to serve as the ultimate LED solution for broadcast studios, xR stages and virtual production sets. Its modularity allows for customisable setups, including curved and flat configurations, and its high refresh rate ensures high-quality visuals, allowing creators to build tailor-made stages that adapt to the unique needs of any production environment, from newsrooms to film sets.
For the STUDIO Series, INFiLED has developed its unique CBSF (Color and Brightness Shift Free) technology, a 2-in-1 LED package for STUDIO DB and STUDIO XII. The new M2 technology enhances background performance by improving colour accuracy, eliminating colour shifts, ensuring robustness and optimising cost efficiency.
One of the most significant achievements within the STUDIO Series is the introduction of INFINITE COLORS technology. Historically, LED displays have relied on a combination of RGB (red, green and blue) emitters to reproduce colours. While this has been effective, it has its limitations, particularly when it comes to rendering white light and achieving a full light spectrum.
Recognising these challenges, INFiLED developed INFINITE COLORS technology, which integrates an additional colour emitter into the LED package. The result is a significant advancement in colour and texture reproduction and a higher Colour Rendering Index (CRI). Going beyond traditional display solutions, this allows the STUDIO Series to deliver a richer and broader colour spectrum, enabling full variations in tone, saturation and colour appearance.
STUDIO Series also tackles another challenge in LED display technology: the creation of seamless curves. Curved screens created with flat panels often result in visible seams and colour inconsistencies on camera, requiring extensive postproduction adjustments. STUDIO Series solves this issue with its seamless curved solution, which combines curved and flat cabinets to create a continuous, smooth display, reducing post-production time and cost.
The STUDIO Series is designed with an ultra-black masking material and exclusive black LEDs to produce a high level of contrast. The high-brightness panels, using customised black-frame LEDs, are the ideal solution for dynamic lighting and video. The STUDIO Series also offers outstanding colour reproduction and excellent visual performance with a high refresh rate. The panels are of a modular design built around a robust die-cast aluminium frame, with built-in feet and edge protectors to prevent physical damage during assembly.
With its pioneering features, INFiLED's STUDIO Series delivers the solution for transformative visual experiences, setting a new benchmark for quality and versatility in professional AV production.
InSync
MCC-HD2 ST.2110 option
InSync has developed the world's first IP, dual-channel motion-compensated frame rate converter with ST 2110, wrapped up in a compact 1RU form. The MCC-HD range is already recognised for its AI (supervised machine learning) algorithm quality, versatility and cost, and has been used at the Paris Olympics and the US Open Tennis Tournament. Now, with ST 2110 integration, it can feature in cutting-edge workflows, keeping video uncompressed for uncompromised quality from production through distribution.
With the growing trend towards all-IP workflows, including contribution and distribution feeds, delivering quality video to any screen size or type, anywhere in the world, requires a video standards converter that upholds production quality all the way to the viewer. A first of its kind, this feature-packed converter is now launched with both SDI and ST 2110 interfaces for frame rate conversions between 23.98 fps and 60 fps. It has been entirely designed and produced by InSync's team in the U.K., with the entire ST 2110 stack built from the ground up by the in-house engineering team over the last two years. Now, for the first time, with the
brand-new ST 2110 integration, broadcasters can experience industry-leading multichannel HD conversion in the IP domain with NMOS and Seamless Protection Switching (ST 2022-7).
The FPGA design benefits from high performance and low latency, while operating at low power to reduce energy utilisation and CO2 by up to 80% when compared to traditional competitor converters at this quality level. This means carbon savings and operational financial savings for customers.
The features packed into this converter include world-leading motion-adaptive deinterlacing, HDR<>SDR conversion with custom LUTs and WCG, up/down conversion (UDC) with frame sync, ancillary bridge, Dolby E/Dolby Atmos passthrough, 16-channel audio (remapping, tracking delay and gain control), cadence management, SNMP and many more.
Management of the product is easy via the front panel or intuitive web GUI developed from more than two decades of experience with broadcaster users.
The product was released at IBC2024 and is now being shipped to customers.
ioMoVo
ioMoVo
Launched at IBC2024, ioMoVo is more than just a Digital Asset Management (DAM) system — it is a comprehensive solution designed to transform how Media and Entertainment (M&E) companies manage, create and distribute content. With advanced AI-driven features, seamless integrations and scalable architecture, ioMoVo sets a new standard for digital asset ecosystems.
ioMoVo's capabilities extend far beyond traditional DAM. It's designed to address the full spectrum of challenges faced by modern M&E companies. The platform integrates tools for content creation, collaboration, workflow automation and data analytics, streamlining every stage of content production and distribution. This holistic approach ensures that ioMoVo manages assets and enhances the entire content lifecycle, from inception to archiving.
The platform's user-friendly interface ensures that teams across an organization — from creatives to IT — can effortlessly leverage its features. ioMoVo's robust architecture integrates seamlessly with existing IT systems, providing secure, scalable operations. Advanced encryption and AI-driven anomaly detection keep content safe, while the platform's flexibility allows it to grow and adapt as business needs evolve.
ioMoVo is engineered to boost operational efficiency across the board. Automating routine tasks and providing advanced workflow tools reduces the time and costs associated with asset management. Its unified interface fosters seamless collaboration among distributed teams, and its scalability ensures that ioMoVo can grow with businesses, offering a long-term solution tailored to the evolving demands of the M&E industry.
ioMoVo's AI and machine-learning capabilities go well beyond simple asset management. The platform includes intelligent content discovery, automated multilingual captioning, contextual language model development, and more. These features enhance efficiency and unlock new possibilities for content creation and management, making ioMoVo an indispensable tool for forward-thinking companies.
One of ioMoVo's standout features is ioPilot, which is currently in its prerelease stage. ioPilot is set to revolutionize content discovery and management with its AI-powered search capabilities and automated tagging. It simplifies finding and organizing assets, while its intelligent recommendations inspire new creative directions. Although still in development, ioPilot promises seamless integration with tools like Adobe Creative Suite and AVID Media Composer, ensuring users can work efficiently within their established workflows.
ioMoVo's cloud-based architecture eliminates the need for costly on-premises infrastructure, reducing operational costs and enhancing scalability. This cost-effectiveness, combined with the platform's ability to automate and optimize workflows, directly translates into increased profitability for M&E companies.
By offering a platform that is far more than just DAM, ioMoVo empowers M&E organizations to focus on what they do best: creating and distributing content. Its ability to unify, streamline and enhance content management processes positions ioMoVo as a transformative force in the industry, ready to redefine standards and drive innovation at every level.
Leader Electronics of Europe
With live production demanding more from outside broadcasts than ever, the need for advanced test-and-measurement equipment has never been more acute. Not only do live productions face the challenge of delivering both UHD and HD, HDR and SDR; they now also face the challenge of operating with both IP and SDI infrastructures.
The Hybrid IP/SDI LPX500 compact waveform monitor is the latest in advanced test-and-measurement instruments from the renowned Leader and PHABRIX brands.
Supplying a bank of four autonomous analyzers, the LPX500 enables the simultaneous display of four 4K inputs; 4K and 2K inputs; HDR and SDR inputs; or even SDI and IP inputs. The instrument offers a 10G-IP toolset with dual SDI support as standard, with advanced Physical Layer Analysis (Eye and Jitter) offered as a factory-fitted option.
Housed in a fully redesigned and compact form factor, the LPX500 offers an 8-inch touchscreen and short depth, ideal for locations with limited rack space, including OB trucks. An independent second compact 8-inch touchscreen display is also offered via a dedicated USB-C connection. Efficient
cooling pathways and superior fan speed control keep fan noise to a minimum and allow for adjustments to suit ambient temperature variations in production environments. Using its built-in noVNC, the LPX500 also offers fast access to both displays over a remote network.
With enhanced screen layouts and gestural swipes, users can navigate smoothly between configured instrument displays, with up to 16 simultaneous instruments per display, optimizing monitoring. A new RGB vector display instrument provides a tool to monitor gamut violations, and an extensive audio toolset, including 32 channels of audio metering and 5.1/2.0 loudness measurement, is included as standard. The LPX500's comprehensive feature set is designed to support SD/HD/3G/6G/12G-SDI and 10GE/25GE/100GE IP interfaces with SD/HD/UHD, SMPTE ST 2022-6, SMPTE ST 2110-10/20/30/31/40 with ST 2022-7, and AMWA NMOS. Optional software licenses can also be added for SDI/IP AV test signal generation, UHD/4K support, HDR, EUHD (47.95-60p RGB/YCbCr 444 formats), 25GE and 100GE IP support, and an advanced IP measurement toolset.
LiveU
LiveU Studio
The new version of LiveU's complete cloud-native live video production solution, LiveU Studio, was unveiled at IBC2024. The only cloud live production solution to natively support LRT (LiveU Reliable Transport), LiveU Studio now offers enhanced usability, intuitive controls and an enriched multicam toolset, making it even easier for storytellers to create professional and compelling live productions. Leveraging cloud workflows, the new LiveU Studio enables a single operator to switch between fully synced feeds from multiple sources, HTML-based graphics, instant multi-angle replay and more, delivering new monetization opportunities for sports and other live productions.
The LiveU Studio UX has been completely revamped to allow operators to quickly change focus between switching, replay and graphics on a single screen, providing the optimal balance between speed and creativity when producing demanding live sports events. Innovative new tools include: Instant Replay, elevating viewer engagement with multi-angle replays and slow-motion playback, seamlessly replaying up to four camera angles in sync with a single click; ISO recording, enhancing post-production efficiency with independent capture of up to six camera feeds for greater editing flexibility; and dynamic ad insertion, unlocking new revenue streams across OTT and FAST platforms with SCTE-35 marker insertion.
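As a rough illustration of the SCTE-35-driven dynamic ad insertion mentioned above, the sketch below inserts an #EXT-X-CUE-OUT tag, a common convention that downstream ad-insertion systems derive from SCTE-35 splice markers, into an HLS media playlist. Segment names and tag usage are invented for illustration; this is not LiveU's implementation.

```python
def insert_cue_out(playlist_lines, segment_uri, duration_s):
    """Insert an HLS ad-break cue immediately before the named segment.

    Uses the widely adopted #EXT-X-CUE-OUT convention; the exact tags a
    given ad server expects vary, so treat this as illustrative only.
    """
    out = []
    for line in playlist_lines:
        if line.strip() == segment_uri:
            out.append(f"#EXT-X-CUE-OUT:DURATION={duration_s}")
        out.append(line)
    return out

playlist = [
    "#EXTM3U",
    "#EXT-X-TARGETDURATION:6",
    "#EXTINF:6.0,",
    "seg_001.ts",
    "#EXTINF:6.0,",
    "seg_002.ts",
]
# Mark a 30-second ad break starting at the second segment.
marked = insert_cue_out(playlist, "seg_002.ts", 30)
```

A matching #EXT-X-CUE-IN after the break would close the avail so players and SSAI services know where original programming resumes.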
LiveU has also teamed up with innovative third-party partners to enable easy and seamless integration with HTMLbased graphics, logos and custom elements and AI video
editors in the LiveU Studio workflow. Producers can manage LiveU units efficiently, directly from the LiveU Studio Control Room. Enhanced graphics and audio can be added to live productions, which can be customized with logos, templates, backgrounds, titles, intro and outro videos, jingles and more. Remote Guest functionality offers a one-click link experience, allowing up to eight guests to interact simultaneously in real time during live shows.
LiveU Studio gives operators pinpoint control and automatically synchronizes LRT, SRT, RTMP and other industry-leading protocols. The new interface is built with operators' live production priorities in mind. It reduces onboarding and training by using production logic familiar from existing systems and offers more efficient programme/preview (PGM/PRV) logic and consistency between apps.
LIVEU STUDIO INCLUDES A RANGE OF NEW FEATURES AND CAPABILITIES. HIGHLIGHTS INCLUDE:
• LiveU Studio Replay mode, introducing intuitive tools to live sports production, making it simple to play back multi-angle highlight sequences quickly and tell more stories during a live game.
• LiveU Studio's new intelligent timing mechanism, ensuring multiple replay angles are always referenced at the exact same timecode, which provides a robust toolset for multicam highlight playback or split-screen analysis.
• Variable speed playback, driven by LiveU's on-screen T-bar, which gives precise control to the operator and provides captivating slow motion to emphasize key moments during the game.
• Easy and seamless graphics integration, embedded into LiveU Studio.
• Dynamic ad insertion for increased monetisation opportunities.
• Efficient PGM/PRV logic and consistency between apps.
• Faster post-production workflows with individual ISO recordings.
LiveU Studio is a core element of LiveU's Lightweight Sports Production solution.
Media Distillery
Ad Break Distillery
Launched in 2023 and first used in production in early 2024, Ad Break Distillery is already making a significant impact on NPS and the TV replay user experience for our customer Swisscom. It is also being tested and evaluated by a range of other multichannel service providers because of its potential to transform their replay user experience and drive additional revenue.
WHAT IS AD BREAK DISTILLERY?
Ad Break Distillery employs advanced machine-learning algorithms to precisely identify the start and end of ad breaks within a linear broadcast, even when no markers are available. This accurate data is crucial in an era where streaming services strive to enhance viewer experience and maximize ad revenue without intrusive disruptions.
THERE ARE A RANGE OF DIFFERENT APPLICATIONS FOR THIS SOLUTION:
• Ad skipping — Where contractual agreements with broadcasters allow, as in the case of Swisscom's Replay Comfort solution, operators can offer ad skipping as a premium option to their subscribers. Swisscom has observed significant increases in Net Promoter Score (NPS) among users of this service, demonstrating tangible improvements in customer satisfaction.
• Trick-play restrictions — Where rights agreements prevent fast-forward through advertisements in replay content, many
operators disable trick play for the entire playback because they don't know where ad breaks fall in the recorded linear broadcast. This makes for a disappointing user experience. With Ad Break Distillery's accurate data on the start and end of breaks, operators can disable trick play during the ads but enable it for the rest of the programming, giving users the freedom to skip to the parts of the content they care about.
• Ad replacement — Ad Break Distillery's technology can also enable dynamic ad replacement in replay content, allowing operators to swap timely and relevant advertisements into the stream in place of those that were shown during the original broadcast if they have become outdated, thereby increasing potential for higher CPMs and better viewer engagement.
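The trick-play restriction described above reduces to a simple interval check once accurate break boundaries are known. The function and timings below are illustrative, not Media Distillery code.

```python
def seek_allowed(position_s, ad_breaks):
    """Return True if trick-play (e.g. fast-forward) is permitted at
    `position_s` in the recording.

    `ad_breaks` is a list of (start_s, end_s) intervals as reported by
    an ad-break detector; break ends are treated as exclusive so normal
    playback control resumes the instant programming restarts.
    """
    return not any(start <= position_s < end for start, end in ad_breaks)

# Two detected ad breaks in a recorded broadcast (illustrative timings).
breaks = [(600.0, 780.0), (1500.0, 1680.0)]
```

A player would call this on every seek or fast-forward request, allowing free navigation through programming while honouring rights restrictions inside the breaks.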
INNOVATION THAT'S TAILORED TO THE STREAMING INDUSTRY
The core of Ad Break Distillery's innovation lies in the tailoring of its AI technologies specifically for the media and entertainment industry. Unlike generic AI solutions from large vendors such as AWS and Google, which require clients to undertake extensive model training of their own, Media Distillery offers a fully trained AI solution ready to tackle very specific media use cases. The solution has been trained on approximately 30,000 television programmes, amounting to over 8,000 hours of human-annotated label data, ensuring high performance when analysing new content. This approach reduces the burden on media companies and enhances accuracy and efficiency from day one.
CONCLUSION
Ad Break Distillery not only addresses the technical challenges of ad insertion in streaming media but also significantly impacts business outcomes for operators and viewing experiences for consumers. The benefits have been demonstrated in production by our customer Swisscom.
Media Distillery
Topic Distillery for Sports
INNOVATION IN CONTENT DISCOVERY
Topic Distillery is an innovative content discovery solution that leverages AI to transform how viewers find and engage with replays of live, unscripted broadcast content like news, sports, and talk shows. These programs often fall short in providing detailed metadata, making it difficult for viewers to locate specific topics of interest. Topic Distillery addresses this challenge by detecting topics in real-time broadcasts, generating chapter markers and meaningful descriptions that enable viewers to search or browse by topic.
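Chapter markers like those described above are often delivered as a WebVTT chapter track. The sketch below renders detected topic spans in that format; topic names and timings are invented for illustration and are not Media Distillery's output format.

```python
def to_webvtt_chapters(topics):
    """Render (start_s, end_s, title) topic spans as a WebVTT chapter track."""
    def ts(seconds):
        # WebVTT timestamps: HH:MM:SS.mmm (whole seconds here for brevity).
        h, rem = divmod(int(seconds), 3600)
        m, s = divmod(rem, 60)
        return f"{h:02d}:{m:02d}:{s:02d}.000"

    lines = ["WEBVTT", ""]
    for start, end, title in topics:
        lines += [f"{ts(start)} --> {ts(end)}", title, ""]
    return "\n".join(lines)

# Illustrative topic spans for a replayed news programme.
vtt = to_webvtt_chapters([(0, 310, "Headlines"), (310, 945, "Transfer news")])
```

A player that supports chapter tracks can then present the detected topics as a browsable, skippable index over the replay.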
Following the product's successful launch in 2022, Media Distillery has added three new variants in 2023–2024, including Topic Distillery for Sports. This marks a particular advancement in the use of generative AI, and specifically of new Large Language Models (LLMs), to cater to sports content. Additional AI technologies have been applied and tuned to improve accuracy. The solution's appealing (but crucially spoiler-free) chapter descriptions are created to fit different user interfaces, along with useful sports-specific tags. This helps fans track coverage of major events as well as related "shoulder" content like interviews and analysis, even when it appears on non-sports programmes.
TOPIC DISTILLERY FOR SPORTS IN ACTION WITH LIBERTY GLOBAL
A successful pilot project with Liberty Global demonstrated
overwhelming user approval. Subscribers were asked to test a prospective new sports service that would allow them to specify sports, teams or players they wished to "follow." They then received automatic updates in their regular TV app when the topics they'd chosen were featured in any programming. When opening a replay show, the viewer could automatically jump to the relevant part in the video, preventing the need to seek manually.
After a three-week test period, 72% of participants agreed or strongly agreed that the service was appealing and relevant, and a massive 91% reported a positive overall opinion of the service. Many said it helped them find content in programmes they weren't used to seeing, often on channels they didn't typically watch.
IN SUMMARY
Topic Distillery for Sports employs live AI video analysis to rapidly identify the topics and people shown and mentioned in a video stream, dramatically improving content discovery experiences for replays of live, unscripted TV content and creating personalised "fan zones" for sports content. It gives users spoiler-free descriptions and tags that enable fans to quickly and easily follow their favourite teams or athletes for a personalised and engaging viewing experience.
By helping consumers discover more of the content they love, Topic Distillery is a game-changer in content consumption that benefits consumers and video service providers alike. It makes TV services inherently more user-centric, enhancing loyalty and reducing churn. Content "works harder" and is seen by more viewers, yielding better return on investment (ROI). OTT streamers, pay-TV providers, telcos and superaggregators can all benefit.
Topic Distillery for Sports deserves recognition for its innovative approach and impact on the industry. The benefits of this solution have been demonstrated in a successful pilot with Liberty Global and were seen at IBC2024.
Mimir
Mimir is a video collaboration and production platform for professionals that runs in the cloud. Mimir stands out in the cloud production and media asset management sector by offering a cloud-native solution enhanced with artificial intelligence (AI) for PAM, DAM and MAM, unified into a single, intuitive workflow. This includes all-new intelligent, fully automatic scene description capabilities that improve search and set speed-cutting records for its cloud video production and collaboration solution.
Mimir facilitates access to all forms of content, including live, near-live and archive, leveraging AI for metadata enrichment and supporting extensive remote collaboration. Continuous delivery ensures weekly updates to customer systems, minimising downtime. Key features include the new Mimir Cutter for sequence editing, enhanced sharing workflows, and significant speed improvements in content transfer.
Mimir's unique proposition lies in integrating AI-driven metadata enrichment with cloud-native scalability, delivering a comprehensive solution for live cloud video production,
media asset management and collaboration. This approach represents a significant leap in managing and accessing media content across global boundaries, emphasising operational efficiency and enhanced collaboration. The SaaS model supports weekly and sometimes daily updates, reducing the need for downtime or extensive upgrade projects. With around 30 updates introduced monthly, Mimir demonstrates a solid commitment to continuous enhancement and customer support.
The platform supports many features, including live ingest, editing of growing files, deep metadata search and seamless integration with major newsroom computer systems (NRCS) and nonlinear editing systems (NLEs) like Adobe Premiere Pro. The introduction of the Mimir Cutter improved sharing workflows and accelerated content transfers, leveraging Amazon's transfer infrastructure to respond to the industry's demands for speed and efficiency. Further updates address diverse media production needs, such as customisable actions, a new subtitle workflow and a nuanced metadata tree structure.
Mimir's innovation is not solely technological; it fundamentally changes how organisations access, manage and collaborate on content. Its cloud-native foundation and AI capabilities are game-changers, ensuring content is intelligently organised and searchable. The commitment to continuous improvement and the global impact of leveraging cloud technology and AI for a collaborative working environment highlight Mimir's role as a catalyst for change in the media industry. Introducing the Mimir Cutter and enhancements in sharing and transfer speeds are notable advancements, improving workflow efficiency and fostering collaboration.
News shared at IBC2024 included these features:
• Quickly cut a sequence and render or send it to Premiere
• Share lives and growing files
• Gather content from dynamic upload portals
• Optimise bandwidth use for faster transfers
• Attach anything to any asset
• Summarise transcripts and navigate quickly
• Automatic scene detection and description
MultiDyne MDoG Series of IP Gateways
The MultiDyne MDoG Series offers two modular openGear solutions that help customers build stronger bridges between the SDI and IP worlds. The MDoG Series includes SMPTE ST 2110 network interface and low-latency JPEG-XS codec modules to efficiently manage uncompressed and compressed signals as they move between legacy systems and IP networks, with full redundancy to optimize performance. Both products offer multichannel and bidirectional conversion between SDI and ST 2110 as signals move on and off networks.
The MDoG Series was developed to fulfill critical needs for broadcasters and media businesses as they navigate living in two disparate worlds. MultiDyne brings harmony to the situation by unifying these two worlds through seamless interconnection. MultiDyne's thoughtful engineering packs many pertinent features required for seamlessly moving SDI legacy signals to and from ST 2110 IP networks onto space-saving openGear cards.
MDoG-6060 ST 2110 IP Gateways are available in multiple I/O configurations for 3G-SDI (3x3, 6x2 and 2x6) and 12G-SDI (1x1, 2x0 and 0x2) systems. Each configuration offers two 25 Gb/s network interfaces to support redundant transport streams, strengthened through SMPTE ST 2022-7 compliance. The ST 2022-7 standard provides hitless merge functionality: the receiver takes each packet from whichever of the two network streams delivers it intact, so losses on one path are repaired from the other, ensuring streams arrive reassembled and intact at their destinations.
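The hitless-merge idea can be sketched in a few lines. This is a simplified illustration only: real ST 2022-7 receivers operate on RTP packets within a bounded buffer window, and the packet representation used here is hypothetical.

```python
# Simplified sketch of ST 2022-7 "hitless merge": the receiver rebuilds one
# clean stream from two redundant copies, taking each packet from whichever
# path delivered it. Packet format (sequence_number, payload) is hypothetical.
def hitless_merge(primary, secondary):
    received = {}
    for stream in (primary, secondary):
        for seq, payload in stream:
            received.setdefault(seq, payload)  # first intact copy wins
    # Output is complete as long as no packet was lost on BOTH paths.
    return [received[seq] for seq in sorted(received)]

# Path A drops packet 2; path B drops packet 4 -- the merged stream is intact.
path_a = [(1, b"p1"), (3, b"p3"), (4, b"p4")]
path_b = [(1, b"p1"), (2, b"p2"), (3, b"p3")]
print(hitless_merge(path_a, path_b))  # [b'p1', b'p2', b'p3', b'p4']
```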
Each channel within the MDoG-6060 ST 2110 Gateway processes one video, 16 audio and one ANC data flow, and SDI signals are frame-synced within the module. MultiDyne has also incorporated NMOS workflows for inband control and configuration, while allowing openGear's DashBoard application direct access to remote monitoring functions and firmware upgrades.
Offering a module to support JPEG-XS compression is highly valuable as broadcast and production teams increasingly navigate hybrid operations. SDI signals are uncompressed, and the ST 2110 suite of standards is structured to support uncompressed workflows. While this preserves the broadcast-quality video resolutions that broadcasters have long valued in the SDI world, uncompressed transport over IP networks is simply not a viable option for everyone today. Throughput is limited; for example, a broadcaster working with a 25G ST 2110 pipe has bandwidth for only two uncompressed transport streams.
JPEG-XS compression delivers roughly 20 times the efficiency over that pipe, using a fraction of the bandwidth with no noticeable latency or loss of visual quality. The optimized bandwidth usage translates to much lower costs for broadcasters, while content creators benefit from the codec's low-latency encoding and decoding capabilities for their live productions. This solution essentially allows broadcasters to deliver the same experience as with SDI.
The MDoG-6061 JPEG-XS codec module addresses the need for high-density signal transport in modern workflows where uncompressed transport isn't yet a reality. The MDoG-6061 encoders and decoders support TR-08-compliant, low-latency compression or decompression of up to two channels of 12G-SDI, with the added benefit of ST 2022-7 redundancy over dual 10 Gb/s network interfaces.
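Back-of-the-envelope arithmetic illustrates the bandwidth gain. The ~11 Gb/s figure assumed below for an uncompressed UHD stream is an illustration only; real bitrates vary with format and codec settings.

```python
# Back-of-the-envelope only: stream counts per 25G pipe, uncompressed vs JPEG-XS.
PIPE_GBPS = 25.0               # ST 2110 pipe cited in the text
UNCOMPRESSED_UHD_GBPS = 11.0   # assumed uncompressed UHD stream bitrate
COMPRESSION_RATIO = 20.0       # "20 times the efficiency" figure from the text

uncompressed_streams = int(PIPE_GBPS // UNCOMPRESSED_UHD_GBPS)
jpeg_xs_streams = int(PIPE_GBPS // (UNCOMPRESSED_UHD_GBPS / COMPRESSION_RATIO))

print(uncompressed_streams)  # 2
print(jpeg_xs_streams)       # 45
```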
nxtedition nxtedition's XR/Mixed Reality Studio Gallery
Nxtedition's XR/Mixed Reality studio control is a transformative innovation in broadcast production, seamlessly integrating XR/MR technology into a single, consolidated control environment. This system allows directors to manage live productions from virtually any location, breaking free from the traditional constraints of the control room. The key to this innovation is the use of advanced "passthrough" technology, which overlays digital displays within the real world, enabling seamless interaction with both virtual and physical elements, such as vision mixers, in real time.
The development of this technology was driven by nxtedition's participation in the IBC Accelerator project "Evolution of the Control Room." During the project, various remote desktop solutions were explored, but they introduced latency and instability. The breakthrough came when nxtedition leveraged the headset's built-in browser to connect directly to the platform, eliminating the need for additional software or even a computer. This approach not only simplifies installation and user experience but also enhances the system's reliability and responsiveness.
A standout feature of this system is its ability to interact with physical devices through the "passthrough" capability. For example, directors can use an iPad connected to nxtedition as a shot-box, allowing them to control cameras, cue graphics and manage other critical production elements from any location — whether in a studio or a remote setting like a
garden. This flexibility is unprecedented, offering broadcasters the ability to adapt to the increasing demand for remote production capabilities.
By consolidating traditionally separate tools into a single, intuitive interface, nxtedition's XR/Mixed Reality control reduces operational complexity and costs. This solution is not just about remote production; it sets a new industry standard by offering unmatched flexibility and efficiency in managing live broadcasts. The user-friendly design requires no additional software installation, and its consolidated software architecture integrates all necessary tools into one platform, significantly reducing both complexity and costs.
This product demonstrates an innovative approach to live broadcast production, offering a new way to use cutting-edge technology, breaking away from traditional methods like remote desktops or KVM systems, which simply adapt old tools to new environments. Instead, nxtedition has developed a solution that truly leverages the potential of XR/MR technology to create a future-proof, consolidated system for modern broadcasters.
In a world where remote production is becoming increasingly vital, nxtedition's solution provides a level of operational efficiency and versatility that is unmatched. The ability to operate without a traditional control room, or even a computer, represents a significant leap forward for the industry. This innovation not only meets the current demands of the broadcast industry but also anticipates future trends, offering a scalable and adaptable solution that can grow with the needs of broadcasters.
The combination of user-friendly implementation, advanced technology integration and the ability to streamline the production process makes nxtedition's XR/Mixed Reality studio control a deserving candidate for this prestigious award. It represents a significant step forward in broadcast technology, offering broadcasters a tool that is not only innovative but also practical, efficient and adaptable to the rapidly changing demands of the industry.
Pebble PRIMA
PRIMA, the Platform for Real-time Integrated Media Applications, is a state-of-the-art software platform designed to cater to the ever-changing needs of media and broadcast organisations. This comprehensive solution offers unified services for flexible deployment, robust security, economic scalability and centralised management.
PRIMA introduces three primary products: Playout, Workflow and Control, each designed to enhance Pebble's core business of channel origination. PRIMA Playout manages linear TV scheduling and playback of media, ensuring broadcast-quality streams. PRIMA Workflow simplifies media asset management across various storage systems and enables custom workflow creation with an intuitive interface. PRIMA Control, an IP broadcast controller, supports various NMOS specifications, allowing broadcasters to efficiently manage complex IP systems.
PRIMA's robust security framework adheres to industry standards like EBU R143, providing granular permissions management, session timeouts and support for Single Sign-On and Mutual TLS. The platform's unified control interface offers dynamic workload management, improving operational efficiency. Its scalable architecture supports on-premises,
cloud or hybrid deployments, ensuring seamless operations during growth or unforeseen challenges.
The platform's redundancy and disaster recovery features include automatic synchronization for failover and redundancy, ensuring uninterrupted services. PRIMA also offers centralized monitoring for system states and alerts, improving operational awareness and response effectiveness. Key features such as system alerting, managed deployment, granular permissions, deploy-anywhere capability and secure open APIs make PRIMA a forward-thinking product that significantly enhances Pebble's existing offerings. These features enable broadcasters to modernise and streamline their operations, providing a user-friendly and integrated approach to broadcast management.
In summary, PRIMA stands out for its innovative integration of advanced technology, robust security measures and scalable architecture. It represents a significant advancement in modernising media operations, making it a valuable asset for media and broadcast organisations aiming for efficiency, reliability and future-proofing their services. PRIMA not only enhances Pebble's current solutions but also expands its applicability, setting a new standard in the industry.
Perifery
AI+ 2.0
Perifery AI+ 2.0 is a transformational AI-powered solution suite designed to enable media professionals to revitalise existing content libraries. By offering a comprehensive set of AI and workflow automation tools, including updated transcription, metadata generation, facial and object recognition, and automatic translation, it enables the media industry to remonetise the content they already own.
These solutions streamline asset management, simplify technology administration and automate content organisation, enabling companies to enhance content library visibility, boost productivity and creativity, and reach larger audiences. Integrating with S3-enabled cloud, edge and on-premises storage, it creates metadata at ingest, ensuring that search and access to content becomes an intuitive experience. AI+ 2.0 also addresses the widespread challenges created by inefficient retrieval systems, complex infrastructure and a lack of operational excellence. In addition, it allows users to benefit from global reach and multilingual support while eliminating operational silos and resource duplication. Integrating with commonly used industry tools, storage and MAM systems, it delivers highly effective workflow customisation options, providing a fully automated, low-TCO solution for media customers.
KEY PRODUCT FEATURES INCLUDE:
• Automated Content Tagging: Utilises machine learning to automatically tag and categorise media, enhancing searchability
• Workflow Automation: Streamlines content management processes, from ingest to distribution, ensuring efficiency
• Scalable Integration: Seamlessly integrates with existing media libraries and is built to scale for future expansions, making it adaptable for both small and large organizations.
As Jason Perr, CTO of Media and Entertainment Solutions at Perifery said, "Bringing these capabilities to market will
help organisations across the media and entertainment landscape fully unlock the value of their content by enhancing searchability and delivering highly efficient content management capabilities. These innovations will enable users to operate more effectively and stay competitive in a rapidly evolving industry."
Customers signing up for Perifery AI+ 2.0 will also be offered the opportunity to test Perifery's Intelligent Content Engine (ICE), which was announced first at the NAB Show. The revolutionary ICE acts as an AI Media Content Librarian, examining, understanding and cataloguing every file within its view. It automatically categorises media assets based on the content itself regardless of the existence of any traditional metadata.
With ICE as part of this innovative solution, Perifery's new system doesn't just manage media assets; it understands them. This transformative solution not only enhances operational efficiency but also sets a new industry standard, promising a future where media assets are managed through intelligent, context-aware AI. It represents a significant leap forward in content management technology and a new benchmark for innovation in the industry.
In conclusion, Perifery is transcending MAM, making the process of finding content more intelligent, efficient, cost-effective and adaptive to the evolving needs of organisations managing vast amounts of digital content.
Quickplay Quickplay: Content Discovery Suite — Curator Assistant and Media Companion
Quickplay's Content Discovery Suite is tackling one of streaming's biggest challenges: content discoverability. By combining Generative AI with its cloud-native, open platform, Quickplay is revolutionizing how viewers connect with the content they love — and the content they'll love once they find it.
As streaming services multiply, the frustration of finding something to watch is only growing. According to Accenture, 72% of consumers "report frustration at finding something to watch," up six percentage points from the previous year.
Quickplay is addressing the issue head-on with The Quickplay Content Discovery Suite, which includes two powerful solutions: Curator Assistant for content programming teams and Media Companion for viewers.
Curator Assistant is integrated in the Quickplay CMS, allowing programming teams to conversationally engage with the tool. They can set up rails for topical, seasonal or passion-driven content that aligns with viewers' personal interests. The tool can also help discover new ways to merchandise content. Query results are matched in the Quickplay CMS against available content rights and user profiles, delivering surprisingly relevant content recommendations.
Media Companion's complementary role in the suite is to help viewers. It uses AI-driven conversational search and situational awareness to enhance the content discovery experience for subscribers. It can find movies, explore genres, gather detailed information about films and actors, and even use situational factors like location and weather to refine recommendations. It also creates personality quizzes tailored to predefined personas, making the discovery process more engaging and effective.
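The situational-awareness idea described above can be pictured with a minimal, hypothetical sketch; the function name, fields and prompt wording are invented for illustration and are not Quickplay's actual API.

```python
# Hypothetical illustration of situationally aware discovery: a viewer query is
# combined with contextual signals before being handed to an LLM for
# recommendations. All names and wording here are invented.
def build_discovery_prompt(query, location, weather):
    return (
        f"Viewer request: {query}\n"
        f"Situational context: the viewer is in {location}, "
        f"where it is currently {weather}.\n"
        "Recommend titles from the available catalogue that fit both "
        "the request and the situation."
    )

prompt = build_discovery_prompt("something cosy for tonight", "Amsterdam", "raining")
print(prompt)
```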
While the industry has been abuzz
with the potential of AI, Quickplay has focused its efforts on delivering immediate tangible results that benefit streaming providers. Using the power of Large Language Models (LLMs) and AI Model Gardens, Quickplay's Content Discovery Suite is streamlining search and discovery, alleviating multiple industry pain points.
Both Media Companion and Curator Assistant not only improve the efficiency of content search and discovery but also learn from user interactions, adapting recommendations over time. This ensures that Quickplay stays in tune with individual preferences, providing a continuously evolving and personalized content discovery experience. Users can enjoy a seamless, engaging way to find the content they love and will love.
In an era where users crave tailored and dynamic experiences, Quickplay's proactive application of GenAI principles sets it apart as a forward-thinking solution. By moving beyond theoretical discussions and delivering a practical, effective AI-driven enhancement to the content discovery journey, Quickplay meets the evolving expectations of today's savvier users and viewers.
Qvest
Today, media organizations and broadcasters face major challenges; production and transmission landscapes are more distributed than ever before, and these are operated across multiple platforms and infrastructures.
Although the industry has seen many rapid technological developments and innovations recently, operators are often forced to use several different products for broadcast automation, depending on whether their playout solutions are in the cloud or on-premise. This makes their workflow far more complex than it needs to be.
makalu enterprise-level automation playout control, Qvest's latest addition to the makalu family of products, tackles this issue directly. makalu automation has been designed to give playout operators even more versatility for any media playout workflow and can be used for any live channel playout. It is the first enterprise automation system to enable seamless control of third-party playout products in the cloud and on-premises simultaneously.
The new addition not only delivers greater operational efficiencies but also provides media organisations with the ability to invest in new technology that significantly saves on costs and, at the same time, future-proofs their workflows.
makalu automation eliminates the need for operators to learn multiple systems for each of their transmission environments. Instead, it provides one central intuitive interface for any third-party playout solutions. This not only means a reduction in upfront costs for the technology itself but also significantly lessens the amount of time businesses need to train operators on multiple complex systems.
makalu also implements a fair "pay as you grow" usage-based pricing model for users that means media organizations can use makalu automation to spin up and down pop-up or event channels as needed without huge expenditure.
makalu automation can securely and reliably run 24/7 for linear, OTT, FAST and disaster-recovery channel playout, maintaining its high performance and resilience throughout. At the
heart of makalu automation is a redundantly designed, realtime-capable event bus, via which all applications and devices are controlled synchronously. makalu supports a variety of broadcast protocols, but also enables cost- and time-saving integration based on REST.
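The REST-based integration mentioned above can be pictured with a minimal, hypothetical sketch; the payload fields, values and function shown here are invented for illustration and are not makalu's actual API.

```python
import json

# Hypothetical sketch of a REST-style playout command: the payload fields and
# endpoint are invented for illustration and are not makalu's actual API.
def schedule_event(channel, asset_id, start_utc):
    payload = {
        "channel": channel,
        "asset": asset_id,
        "start": start_utc,
        "type": "program",
    }
    # A real integration would POST this body to the automation system's
    # REST endpoint (e.g. with urllib.request); here we just build it.
    return json.dumps(payload, sort_keys=True)

body = schedule_event("popup-channel-1", "clip-0042", "2024-09-13T18:00:00Z")
print(body)
```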
With makalu automation, operators no longer face the challenge of navigating intricate and complex playout workflows. Its "single pane of glass" design has been specially constructed to allow operators of any level of experience and skill to use the system with ease. Its ergonomic interface lowers the barriers to entry, meaning that hybrid teams can quickly and easily collaborate without additional time needed for training. In today's hybrid working world, the ability to "access from anywhere" is crucial, allowing operators to control their playout workflows from wherever they are, whether they're part of a distributed team or working from a mobile outside broadcast facility.
makalu enterprise automation directly tackles industry pain points still faced by media organisations and broadcasters in the post-pandemic hybrid working world. Its multiple benefits allow playout operators to maximize the efficiency of their workflows while simultaneously delivering a solution that provides time and cost-savings for businesses.
Ross Video OverDrive
In today's fast-paced media landscape, Automated Production Control (APC) has become a cornerstone of modern broadcasting, streamlining operations and eliminating complexities from the production process. Ross Video's OverDrive stands at the pinnacle of APC systems, empowering broadcasters to execute intricate productions flawlessly and consistently, all while significantly reducing operational costs.
With nearly 400 systems in operation worldwide, OverDrive is the leading APC system globally. Its unparalleled integration capabilities with over 220 different production devices make it the go-to choice for broadcasters. Whether it's Ross or third-party equipment, OverDrive seamlessly controls production switchers, routing systems, video servers, audio mixers, robotic cameras and graphics systems, offering both manual and automated control options.
With the flexibility to perform manual, semi-automated or fully automated productions from the same control room using the same equipment, OverDrive ensures that manual device control is always accessible, whether through OverDrive or dedicated panels, giving operators the freedom to choose the level of automation that best suits their needs.
The user-configurable interface consolidates the control of numerous devices into one streamlined area. Custom icons allow for quick identification and preview of the rundown, making it easier for operators to manage their workflows efficiently and effectively.
OverDrive ensures consistent, repeatable and error-free productions across a range of operator skill levels. This reliability allows all operators to produce clean, consistent broadcasts every time, enhancing the overall quality and professionalism of the production. When it comes to accommodating live breaking news with on-the-fly changes, OverDrive excels. Its unmatched toolset
makes it the leading choice for managing unscripted or latebreaking events, ensuring that broadcasters can adapt quickly and maintain high-quality output under any circumstances.
KEY FEATURES OF OVERDRIVE:
• RundownControl: Manages all production elements, effects and transitions for both scripted and unscripted productions.
• DirectControl: Provides direct, manual control of production devices within a unified interface.
• TemplateEditor: User-friendly tool for editing, managing and configuring OverDrive templates.
• MOS and NRCS Integration: The first HTML5 MOS plug-in under the new MOS spec, enhancing newsroom workflow integration.
• Timing: Multiple user-definable timers to track program time, event time, clip time and NRCS timing information.
• Enterprise Grade Virtual Environment Support: Certified to work in various virtualization environments, offering redundancy and cost-saving solutions.
The latest version of OverDrive introduces QuickShots, a new tool in the QuickTools suite. QuickShots speeds up rundown creation by linking production cues with OverDrive assets, reducing the need for coding in common scenarios. It saves time, allowing users to focus on other tasks. QuickShots is simple, yet powerful, with code that can be used as-is or modified without needing a plug-in. It enhances the already popular live production tools used by OverDrive operators worldwide.
OverDrive also integrates CX Panels and SideBox Modules, offering tactile control over critical production elements and ensuring operators can easily manage complex workflows. Providing intuitive, customizable buttons and extended control capabilities, the CX Panels and SideBox Modules ensure efficiency and high-quality output even under changing conditions.
Ross Video Raiden
Raiden is redefining weather graphics in newsrooms by offering an all-encompassing solution that enhances weather storytelling for meteorologists, producers and designers. Powered by XPression, the world's fastest-growing real-time motion graphics engine, and Voyager, built on Epic Games' cutting-edge Unreal Engine for stunning augmented and extended reality, Raiden fuses data gathering, processing and visualization tools to create riveting content. This enables the creation of visually captivating weather content that engages audiences and elevates the quality of weather segments.
Designed for seamless integration into existing newsroom workflows, Raiden allows for quick and easy content generation from a unified graphics platform. This flexibility enables teams to collaborate effectively, whether in the studio or remotely. Raiden's web-based story creation tool enhances accessibility, allowing users to build and update weather stories, graphics and rundowns from any location for live production. Raiden acquires, processes and visualizes weather data from various sources for the XPression and Voyager engines. The XPression Plugin with DataLinq automatically populates weather graphics with real-time data, enabling broadcasters to manage news and weather content from a single graphics engine.
The user-friendly web interface supports the creation of weather stories online, with features like template saving and predefined layout reuse to enhance newsroom efficiency. Raiden's compatibility with various data sources and forecasting models, along with its forecast editing capabilities, enables meteorologists to fine-tune predictions, providing audiences with the most accurate weather information.
Dynamic 3D maps and customizable annotations further enrich the visual quality of weather forecasts and real-time animations. These elements, combined with XPression DataLinq, enable the creation of unique, data-driven graphics that simplify complex weather data for viewers.
Raiden's integration with MOS devices ensures smooth operation within existing workflows and supports on-premise, virtual, hybrid and cloud hosting. This versatility caters to the diverse needs of broadcasters, whether they're focusing on traditional broadcasts or expanding into digital spaces. Raiden also extends its reach by creating weather content for web, mobile and OTT (over-the-top) platforms, ensuring that weather stories connect with audiences across various digital platforms.
Launched at IBC2024, Raiden's integration with Voyager adds a new dimension to Raiden's capabilities. The Voyager Raiden Plugin introduces a simplified DataLinq-based workflow for connecting Raiden point-data to the Voyager engine, allowing for the creation of immersive, data-driven weather content and environments. Support for the Ultra Dynamic Sky Plugin further enhances the realism of these simulations, providing meteorologists with powerful tools to visualize and communicate complex weather phenomena.
Global satellite imagery and real-time animation loops add another layer of situational awareness, offering broadcasters compelling visualizations that are crucial for accurate weather reporting. The inclusion of color-enhanced Infrared, Water Vapor and Visible imagery ensures comprehensive coverage, empowering broadcasters to deliver high-quality weather content no matter where they are in the world.
Raiden's innovative approach is transforming weather graphics, empowering newsrooms to create engaging, visually stunning weather content that resonates across all platforms.
Sennheiser
Spectera
The Sennheiser Spectera ecosystem represents a significant leap in digital wireless audio transmission technology. As the world's first bidirectional wideband solution, Spectera introduces unprecedented advancements in operational efficiency, flexibility and audio quality. Addressing the challenges faced by users of wireless multichannel systems today, Spectera uses Wireless Multichannel Audio Systems (WMAS) technology to offer a future-proof solution for large-scale productions, including touring, broadcasting or theatre applications.
SENNHEISER SPECTERA KEY BENEFITS:
1. Bidirectional digital wideband transmission: At the core of Spectera's innovation is its bidirectional digital wideband transmission technology. Unlike traditional narrowband systems, Spectera utilises a single wideband RF channel (6 or 8 MHz) to handle bidirectional audio and control data. This approach reduces system complexity, simplifies frequency coordination, and minimises the footprint required for high channel-count setups. Spectera employs a proprietary variant of OFDM-TDMA, tailored to reliable multichannel, bidirectional, low-latency communication.
2. Reduced system complexity: The rackmount Base Station is the heart of the Spectera ecosystem and is a true space-saver. It handles up to 64 audio links, meaning 32 inputs and 32 outputs, in a mere 19-inch, 1U format, and can essentially accommodate an entire production. Each Base Station supports up to two RF wideband channels. The Base Station is frequency-agnostic; activation of the respective local license for the Base Station will automatically load the authorised frequency ranges.
3. Bidirectional bodypacks and transceiving antennas: The Spectera bodypacks handle both mic/line and IEM/IFB signals and make a production truly flexible. The Spectera transceiving antenna manages mic/line signals, IEM/IFB signals, and control data simultaneously, too. The RF is digitized at the antenna, eliminating the need for splitters, combiners and boosters. All antennas are time synced.
4. All-new software: The desktop software LinkDesk turns a Mac or PC into a remote control and monitoring centre, where the operator can choose between varying levels of audio quality, latency, possible audio links and range — flexibly throughout a production.
5. Superior audio quality: Spectera delivers crystal-clear sound for mics, instruments, and IEMs. The RF channel can be used totally flexibly, for example for digital IEMs with a latency down to a spectacular 0.7 milliseconds.
6. Future-ready and scalable: Spectera is an evolving ecosystem. Designed with future hardware, software and service enhancements in mind, the system will continue to grow with the needs of its users. Upcoming hardware additions include the SKM handheld transmitter; software updates will feature SMPTE ST 2110 implementation.
"Spectera satisfies our customers' chief desires and needs regarding ease of use, operational reliability and flexibility. It offers less hardware, drastically reduced frequency coordination, redundancy and the flexibility of an ecosystem that grows with your needs. Spectera will also open up many new opportunities, 3D immersive audio being one of them. Thanks to its synchronised word clock for all audio over RF, Spectera will be the first solution that is able to capture phase-coherent wireless audio for immersive recording and reproduction," said Sennheiser co-CEO, Dr. Andreas Sennheiser.
Shotoku Broadcast Systems
SoftRail Software-Defined 'Virtual' Rail System
For decades, high-end robotic camera systems have included free-roaming studio pedestals such as Shotoku's SmartPed. These powerful systems allow the camera to reposition anywhere within the studio floor space. More recently, however, rail camera systems that are built "into" the set began to appear in TV news studios. While these systems follow every arc and contour of a set beautifully, they seriously hinder future production design flexibility and creativity. That's why Shotoku developed SoftRail, a software-defined virtual rail system that provides the best of all worlds for robotic camera systems.
Shown for the first time at IBC2024, SoftRail is a new concept that merges the path-following capabilities of a physical rail system with the total freedom of a free-roaming pedestal. SoftRail maximizes flexibility by opening up studio floor space to limitless software-defined paths while eliminating the permanent restrictions on the movement of presenters, guests and other cameras, not to mention the trip hazards, associated with all physical rail systems. SoftRail also simplifies installation and reduces its cost, avoiding the mini-construction project inherent in rails and especially buried-cable systems.
SoftRail is incorporated into Shotoku's TR-XT studio control system. It allows the fully-robotic SmartPed to benefit from the path-following capabilities of a rail camera when needed.
Then, if path following is no longer needed, the operator simply turns off the SoftRail function and SmartPed returns to its traditional free-roaming pedestal mode, maximizing freedom of movement.
Using the enhanced StudioView mode in Shotoku's TR-XT, operators can design a virtual rail to suit any studio and production. Once enabled on a SmartPed, the pedestal behaves as if physically mounted on rails, never leaving the defined path. An XY joystick move drives the pedestal forwards or backwards along the rail's path, leaving the operator to focus on the other axes and maintain perfect framing. Preset shots are also recalled by following the rail's path. And, because rails are defined only in software, they can take virtually any shape imaginable, even those that would be physically impossible to build, such as a path that loops back over itself without the need for a level crossing!
For other areas of the set, or when designers want a complete set change, an alternative rail design can be created and applied. In fact an unlimited number of rails can be defined and recalled whenever needed. If no rail is needed then SoftRail can be disabled at the press of a button causing the pedestal to revert to entirely free-roaming.
Multiple SmartPeds can have separate rails (or share the same one), all easily managed from the new and enhanced StudioView page within the latest TR-XT control system.
Shure
Axient Digital ADX3 Plug-On Transmitter
The Axient Digital ADX3 Plug-On Transmitter is one of the latest additions to Shure's Emmy Award-winning Axient Digital Wireless System. It is ideal for audio professionals in broadcast television and location sound fields who are seeking an industry-standard transmitter that enables real-time remote control of key parameters.
ADX3 transforms any XLR-terminated microphone into an advanced, portable Axient Digital ADX Series wireless microphone. It delivers the same transparent audio quality, RF performance and wide tuning of the well-established AD3 Axient Digital Plug-On Wireless Transmitter, with the addition of Shure's proprietary ShowLink technology. ShowLink ensures comprehensive real-time control of key features including interference avoidance from any bag, cart, camera or truck.
The ADX3 provides portability and flexibility to the user while ensuring professional quality sound in any setting. The transmitter features a rugged, dust- and moisture-resistant metal construction with a built-in OLED display. Its patented locking mechanism provides a secure wobble-free connection that enhances mobility during broadcasts and on TV/film sets while also readily interfacing with handheld and shotgun microphones.
The selectable modulation modes optimize performance and enhance spectral efficiency. High Density mode dramatically increases the maximum system channel count. The ADX3 can also run in Standard mode for optimal low-latency coverage. It features automatic input staging and secures transmission with AES 256-bit encryption. A line-of-sight operating range of 300 feet, or even more with optional receiver antennae, ensures users will experience maximum signal stability and transparent digital audio.
The ADX3 comes with two SB900 lithium-ion rechargeable batteries, which provide up to six-and-a-half hours of continuous use with precision metering and zero memory effect. The batteries can be recharged via USB-C. The transmitter can also be powered externally through USB-C or AA batteries.
It is also compatible with Shure's Wireless Workbench, which supports efficient control and configuration while optimizing spectral management and frequency coordination. The ADX3 can also be monitored via the award-winning AD600 Spectrum Manager, enabling engineers to neutralize RF interference by manually switching the signal to a clear backup frequency or by setting the system to do it automatically.
With Shure's ADX3 Plug-On Transmitter audio professionals can truly manage sound from any location, anywhere.
Signiant and TMT Insights
Intelligent Ingest
TMT Insights and Signiant have teamed up to develop an Intelligent Ingest solution that revolutionizes and simplifies large-scale media ingest. Building on their existing partnership, this new offering integrates the Signiant Platform with TMT's Polaris operational management platform. Collectively, they provide M&E companies with enhanced capabilities to optimize their content supply chains.
As more brands look to rapidly monetize their content libraries, extending viewer audiences through premium experiences across VOD and FAST channels, the ingest phase of the media supply chain is becoming increasingly complex, with overlooked challenges that hinder efficiency and scalability.
The Intelligent Ingest solution seamlessly integrates Signiant's intelligent file transfer capabilities with TMT Insights' Polaris operational user interface. TMT's Polaris provides a unified view for content operators to manage all proactive and reactive inbound content landing from a variety of sources, such as content aggregators, in-house production teams and independent freelancers. The solution simplifies the complex process required for asset identification, metadata validation and association within a customer's title hierarchy. Signiant's advanced transport technology ensures that all content deliveries are handled with speed, reliability and security. The result is an efficiency tool that controls and manages media file ingest processes, saving time, reducing costs and accelerating time-to-market for media content.
• Centralized visibility and control: The Intelligent Ingest solution centralizes the management of media receipt and registration, enabling operations teams to easily plan, track and manage inbound content. This eliminates the need for disparate systems and provides a unified view of all media deliveries, allowing for efficient escalation of delays and ensuring content is always on track.
• Minimize errors: Intelligent Ingest takes a proactive approach by validating that only compliant file formats are uploaded, ensuring that content meets specifications before it even enters the system. This reduces errors and eliminates the costly and time-consuming process of re-uploading incorrect content.
• Ecosystem agility: The solution allows operators to manage, add or remove the various partners and collaborators involved in the media process, as needed. This type of granular control and agility allows users to fine-tune who can access or handle certain content, ensuring that the ecosystem of collaborators can adapt quickly to new requirements.
• Flexibility: It caters to both automated and manually initiated workflows, offering the necessary speed, dependability and security for media operations on a global scale. Regardless of whether the process is automated or manual, it guarantees that content is ingested efficiently and securely from anywhere in the world.
• Faster time-to-market: The Intelligent Ingest solution reduces the time required to make content available for distribution by simplifying the upload, validation and registration process. This is especially advantageous in today's fast-paced media landscape.
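The pre-upload compliance check described above can be sketched in a few lines. This is purely illustrative: the allowed formats, size limit and function name are invented for this example and are not Signiant's actual validation rules.

```python
# Illustrative pre-ingest validation (rules and limits invented, not
# Signiant's actual checks): reject non-compliant deliveries before they
# enter the supply chain, avoiding costly re-uploads.

ALLOWED_FORMATS = {".mxf", ".mov", ".mp4"}

def validate_delivery(filename, size_bytes, max_bytes=500 * 2**30):
    """Return a list of compliance errors; an empty list means compliant."""
    errors = []
    dot = filename.rfind(".")
    suffix = filename[dot:].lower() if dot != -1 else ""
    if suffix not in ALLOWED_FORMATS:
        errors.append(f"unsupported format: {suffix or '(none)'}")
    if size_bytes > max_bytes:
        errors.append("file exceeds delivery size limit")
    return errors

ok = validate_delivery("ep101_master.mxf", 120 * 2**30)   # compliant
bad = validate_delivery("ep101_audio.wav", 120 * 2**30)   # one error
```

Rejecting at the point of upload, rather than after transfer, is what removes the re-upload cycle the bullet describes.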
These two award-winning and industry-recognized innovators provide a first-of-its-kind solution that greatly improves media ingest processes by reducing errors, providing a fully customizable user interface for collaborators to have real-time visibility of content, and accelerating the time-to-market for deliverables. Their shared commitment to innovation is a clear demonstration of how strategic collaborations can drive industry-wide progress.
SipRadius
SipVault
As the broadcast industry evolves, the need for secure, efficient and reliable communication systems becomes increasingly critical, especially with the growing adoption of IP connectivity and remote production. SipVault, by SipRadius, offers a breakthrough solution tailored specifically for this challenge. Unlike traditional intercom systems that have been retrofitted with remote connectivity features, SipVault is designed from the ground up to address the complex demands of modern broadcast production. It integrates real-time intercom, secure chat and document exchange into a single, self-hosted platform embedded within the production's IP stream. This approach ensures that all communication remains under the broadcaster's control, free from the vulnerabilities and inefficiencies of third-party cloud-based services.
In an industry where even the slightest delay can impact a broadcast, SipVault's ultra-low latency and real-time communication capabilities are indispensable. Traditional systems often require multiple devices and applications to manage different communication needs, resulting in fragmented workflows and increased security risks. SipVault consolidates these functions, providing a seamless user experience within a single interface, accessible from any device. By embedding the communication tools directly into the media flow, SipVault eliminates the need for juggling disparate systems, enabling teams to focus on content creation without compromising on speed or security.
Security is a paramount concern in media production, particularly when dealing with remote teams and sensitive information. SipVault addresses this with AES-256 encryption for all communications, ensuring that data remains secure even over public internet connections. This level of encryption, combined with configurable user rights, protects critical information and restricts access to authorized personnel only. By hosting SipVault on the production's infrastructure, broadcasters are safeguarded against the risks associated with cloud dependency, such as data breaches or service outages.
The introduction of SipVault is timely, as the broadcast industry continues to adapt to more complex and geographically dispersed productions. By offering a solution that merges intercom, chat and document exchange into a single, integrated platform, SipVault not only improves communication efficiency but also sets a new standard for security and reliability in broadcast operations. As productions increasingly rely on IP-based workflows, the ability to maintain control over communication channels is essential. SipVault's self-hosted architecture ensures that broadcasters can do so with confidence, reducing risks and enhancing operational resilience.
Beyond its immediate applications in broadcast production, SipVault's architecture has the potential to influence other industries where secure, reliable communication is vital. However, it is within the media sector that SipVault's impact will be most profound. By addressing the specific needs of broadcasters — combining low latency, robust security and seamless integration — SipVault represents a significant leap forward in broadcast communication technology. As the media landscape continues to evolve, SipVault positions itself as a critical tool for those looking to lead the industry into the future.
Solid State Logic
System T
Cloud-based or virtualised audio processing presents the next significant step forward in production technology, offering increased scalability and agility, as well as substantial operational benefits to broadcasters. It is also a key component in the development of remote and distributed production models.
System T Cloud offers truly flexible capabilities with virtualised processing, control and audio routing. It provides up to 256 processing paths, supporting stereo, 5.1 and immersive formats, all controlled via hardware or software interfaces from any location. With a fully integrated Dante Connect implementation, the Virtual Tempest Engine offers 256x256 inputs and outputs, with audio routing control managed directly from the UI and stored and recalled with the showfile. Across a distributed production architecture, any combination of hardware and software control interfaces can be utilised, offering a unified operator experience.
KEY FEATURES:
• Cloud instances of the Virtual Tempest DSP Engine
• Cloud instances of the Tempest Control App
• 256 processing paths
• Full immersive formats up to 7.1.4 with 9.1.6 monitoring
• Dante Connect cloud audio transport and routing
• 256x256 Dante Connect connectivity on Virtual Tempest Engine
• NDI conversion options
• Hardware and/or software control positions located anywhere
• Control positions in multiple locations simultaneously
System T Cloud is a fully virtualised audio DSP that runs in the public cloud and can be controlled from a variety of hardware and software user interfaces. These interfaces are specifically designed for broadcast production and provide a familiar feature set for broadcast audio mixers. Filters, EQ, dynamics processing, auto mixing and delays are available on all channels and busses. Mix-minus, monitoring and talkback features are included in addition to main program busses, group stems and auxiliary outputs. Any channel or bus can be mono, stereo, 5.1, 5.1.4 or 7.1.4, providing a full toolkit for immersive productions and Dolby Atmos workflows.
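The mix-minus feature mentioned above is the standard broadcast trick for feeding remote contributors without them hearing their own delayed voice. A minimal sketch of the idea, purely illustrative and not SSL's implementation:

```python
# Mix-minus sketch (illustrative only): each contributor receives the
# full program mix minus their own feed, so their own voice is never
# returned to them with a round-trip delay.

def mix_minus(frame, exclude):
    """Sum one sample frame from every source except `exclude`."""
    return sum(level for src, level in frame.items() if src != exclude)

frame = {"anchor": 0.4, "remote_1": 0.2, "remote_2": 0.1}
feed_for_remote_1 = mix_minus(frame, "remote_1")  # anchor + remote_2 only
```

A console builds one such bus per contributor; the value of having it virtualised is that these busses can be created per production, in the cloud, without dedicated hardware.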
System T Cloud utilises the Audinate Dante Connect SDK within the virtual DSP to directly integrate and route Dante inputs and outputs within System T Cloud. Customers gain the ability to quickly define, configure and deploy cloud-based Dante audio solutions with System T Cloud, providing synchronous audio transport and full channel-by-channel audio routing between System T Cloud and any other Dante Connect instance or on-prem Dante hardware.
System T Cloud is an additional DSP and control instance option running in broadcasters' or service providers' own AWS accounts. These instances may be remotely controlled from any existing System T broadcast studio (via a software upgrade), or from at-home production kits comprising a computer and fader tile if required.
Solid State Logic has been working closely with key broadcasters to develop a greater understanding of production requirements that leverage cloud-based processing and virtualised control. Several proof-of-concept (PoC) events involving shadow and live-to-air productions have been successfully completed, highlighting the robustness of System T's virtualised DSP and its ability to deliver the workflows broadcasters need.
swXtch.io
cloudSwXtch Intelligent Media Network
cloudSwXtch is an intelligent media network for all public clouds that helps broadcasters extend on-premises systems to the cloud. It seamlessly funnels data between the ground and the cloud, and then fans that data out to all endpoints with exceptional scale, whether on-prem, at the edge or mobile. cloudSwXtch unlocks missing network features required to migrate demanding media workloads, including multicast, PTP, packet monitoring and network path redundancy.
Deployed as a software solution within a physical broadcast network or a cloud tenant, cloudSwXtch lets users add high-performance networking to their cloud or edge applications, without any need for software code changes. swXtch.io has added several new enhancements that will make migration to the cloud possible for high-quality, robust media broadcasting, and help customers create one big broadcast network that is deployable, monitorable and functional both on-prem and across all clouds.
New for IBC2024, swXtch.io now supports JPEG-XS compression flows within cloudSwXtch, allowing customers to move visually lossless video at very low latency into and through the cloud. JPEG-XS compression addresses the challenge of managing the heavy bandwidth and high costs of uncompressed video in the cloud while retaining premium broadcast-quality pictures. cloudSwXtch also offers support for uncompressed ST 2110 workflows, providing customers with more options to manage their entire live production workflows in the cloud. These technologies, for example, make high-value live sports production in the cloud a reality. As these events are produced or "mastered" from the highest-quality materials possible, supporting these standards preserves the integrity of the media moving through the cloud, ensuring the best quality and lowest latency possible for consumers.
swXtch.io has also added Lossless UDP, which provides additional protection for media streams across dynamic bridging (ground-to-cloud) and multi-cloud networking applications. Lossless UDP optimizes packet delivery for high-bandwidth streams with minimal latency for dynamic ground-to-cloud bridging, without any SDK integrations or software changes required. cloudSwXtch also offers stream protection through SMPTE ST 2022-7 hitless merge, providing network path redundancy as parallel streams move through the cloud.
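The parallel-path protection that ST 2022-7 hitless merge provides can be sketched as follows. This is a deliberately simplified, batch-oriented illustration of the principle, not swXtch.io's implementation: real receivers merge packet by packet in real time within a timing window.

```python
# Conceptual sketch of ST 2022-7-style hitless merge: two identical
# copies of the stream travel different network paths, and the receiver
# keeps the first copy of each RTP sequence number it sees, so a packet
# lost on one path is covered by the copy from the other.

def hitless_merge(path_a, path_b):
    """Merge two (seq, payload) streams, deduplicating by sequence number."""
    seen = set()
    merged = []
    for seq, payload in sorted(path_a + path_b):
        if seq not in seen:
            seen.add(seq)
            merged.append((seq, payload))
    return merged

# Path A lost packet 2 and path B lost packet 4, yet the merge is complete.
path_a = [(1, "p1"), (3, "p3"), (4, "p4")]
path_b = [(1, "p1"), (2, "p2"), (3, "p3")]
merged = hitless_merge(path_a, path_b)
```

Because the output is reconstructed from whichever copy arrives, a failure on either path produces no visible glitch, which is what "hitless" means here.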
swXtch.io has also implemented Networked Media Open Specifications (NMOS) workflows in cloudSwXtch networks, streamlining the management and orchestration of media workflows in the cloud and ensuring they align with on-prem systems. Benefits include a unified language for endpoints and routing control systems, and less time spent on provisioning, utilizing and upgrading network infrastructure.
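To make the "unified language" point concrete, an NMOS IS-05 controller activates a connection by PATCHing a sender's staged endpoint. The sketch below shows the general shape of such a request body; the sender ID, multicast address and port are invented examples, and this is not cloudSwXtch's code.

```python
import json

# Sketch of an AMWA NMOS IS-05 activation: a controller PATCHes a
# sender's /staged endpoint with transport parameters and an activation
# mode. IDs and addresses below are hypothetical.

sender_id = "a1b2c3d4-0000-1000-8000-00c0ffee0001"  # invented example
staged_patch = {
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},
    "transport_params": [
        {
            "destination_ip": "239.10.20.30",  # example multicast group
            "destination_port": 5004,
            "rtp_enabled": True,
        }
    ],
}

# Body for e.g. PATCH .../single/senders/{sender_id}/staged
body = json.dumps(staged_patch)
```

Because every vendor's endpoints accept the same request shape, a routing control system can drive on-prem gear and cloud instances with one protocol, which is the alignment benefit described above.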
Finally, IBC2024 marked the global debut of enhancements to swXtch.io's wXcked Eye monitoring UI that provide visibility down to the packet level for enhanced monitoring and Quality of Service (QoS), making it easy to identify packet drops and audit overall cloudSwXtch network performance.
The technology foundation of cloudSwXtch enables customers to demand more of their cloud networks, allowing users the freedom to turn features on and off, quickly relocate to new data centers at low cost, and manage their entire live production workflows in the cloud. With availability on all major public clouds along with its intrinsic multicloud networking abilities, cloudSwXtch is a platform that users can grow with well into the future and without compromise.
TAG Video Systems
TAG Operator Console
The TAG Operator Console is a powerful, operator-focused control panel application designed to streamline management of the TAG multiviewer system in control room and NOC environments.
Optimized for operational efficiency, the console removes unnecessary complexity, providing only the essential functionality needed for the day-to-day routine. The TAG Operator Console displays the live multiviewer output with status visibility on an interactive, touch-controlled interface. Operators can instantly switch between predefined layouts, control the displayed source, channel or stream on each mosaic tile, edit UMDs and manage alarm visualization — all with a simple tap. This intuitive design reduces time spent navigating through menus, allowing operators to focus on the core tasks necessary for maintaining smooth broadcasts.
The console's flexibility extends to audio management, enabling operators to verify sources effortlessly, ensuring every aspect of the broadcast is under control. The application can be deployed on a desktop, tablet or handheld device, offering maximum flexibility and allowing operators to roam as needed. With role-based access and permission settings, the console also ensures that only authorized personnel can execute critical functions, adding a layer of security to operational workflows.
By focusing on the operator's daily needs, the TAG Operator Console enhances system flexibility, allowing operators to adapt quickly to the unexpected challenges that arise in their day-to-day routines. This ability to respond on the fly ensures that even the most unpredictable situations are managed efficiently, enabling operators to maintain control and keep operations running smoothly. The console's design simplifies routine tasks and empowers operators to handle unforeseen issues confidently, ensuring the business remains agile and operationally effective.
In summary, the TAG Operator Console is an essential tool that transforms operational complexity into streamlined, efficient workflows, driving both operator effectiveness and business success.
Techex
tx darwin
WHAT IS TX DARWIN?
Techex is a live video and cloud specialist with in-house software development. Designed for next-generation software-defined hybrid and pure cloud workflows, tx darwin offers a modular, highly secure and ultra-flexible live media processing, transport and monitoring platform for deployment anywhere within a contribution/delivery chain. tx darwin is part of Techex's work to bridge the gap between the rigorous standards of traditional broadcast and the agility of cloud-native infrastructure.
WHY IS IT DIFFERENT?
Modular and microservice-based from its inception, tx darwin introduced the market-leading ability to switch between dissimilar MPEG transport streams, allowing cloud workflows not only to be switched seamlessly and intelligently, but also to be changed, repurposed or regenerated in multiple ways.
MONETISATION
New to the tx darwin platform, and requested by global broadcasters, is SCTE manipulation at scale. Monetisation is more important than ever. Uniquely, tx darwin brings comprehensive SCTE-35 generation and manipulation into a single, lightweight platform downstream of playout, with the flexibility to conform incoming SCTE-35 messages into any format required by multiple distribution platforms. Our customers have told us that tx darwin opens up new routes to monetisation because it can combine information about the stream itself with the SCTE-35 inputs to drive live, complex decision-making, creating, altering or withholding SCTE-35 triggers as needed.
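The kind of conform-or-withhold decision-making described above can be sketched as a simple rule table. Everything here is invented for illustration, including the rule names, platform labels and function signature; it is not Techex's interface, and real SCTE-35 messages are binary structures carried in the transport stream.

```python
# Hypothetical sketch of downstream SCTE-35 conditioning: incoming
# splice cues are conformed to the format each distribution platform
# expects, and triggers the platform cannot honour are withheld.

PLATFORM_RULES = {
    "fast_channel": {"max_break_s": 120, "event_id_base": 0x1000},
    "affiliate":    {"max_break_s": 180, "event_id_base": 0x2000},
}

def condition_cue(cue, platform):
    """Conform an incoming splice cue, or return None to withhold it."""
    rule = PLATFORM_RULES[platform]
    if cue["duration_s"] > rule["max_break_s"]:
        return None  # withhold: break is longer than the platform allows
    return {
        "splice_event_id": rule["event_id_base"] + cue["event_id"],
        "duration_s": cue["duration_s"],
        "out_of_network": True,
    }

conformed = condition_cue({"event_id": 7, "duration_s": 90}, "fast_channel")
withheld = condition_cue({"event_id": 1, "duration_s": 300}, "affiliate")
```

Running one such rule set per distribution platform, downstream of a single playout, is what lets a broadcaster serve many outlets with differing ad-break requirements from one stream.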
CLOUD PRODUCTION
Large media organisations are already using tx darwin for uncompressed audio and JPEG XS video workflows in the cloud. NDI is still needed for some products, and tx darwin plays a vital role in bridging the painful gap between AES67 and NDI audio. tx darwin's new high-performance SRT I/O modules are an essential part of delivering JPEG XS safely into the cloud. At more than 400 Mbps per module, this is an unheard-of throughput for cloud-based SRT workflows within a comprehensive media processing platform.
• Time Delay: Another module with a unique ability is our time-delay module. Able to delay by anything from milliseconds to hours, the transcode-less module retains all of the RTP information of the incoming stream, allowing its playback to be fully SMPTE ST 2022-7 compliant so it doesn't break your resilience model.
• Repair Transport Streams: An efficient and secure cloud-native solution that corrects timing issues and cleans your transport streams, making them look hardware-generated yet with a light software footprint. tx darwin is able to add, remove or change program identifiers, perform PCR domain correction, manipulate PSI/SI and elementary stream descriptors, and perform rate adaptation.
WHO IS TX DARWIN FOR?
tx darwin is designed as an API-first, modular toolset for easy integration by tier-1 media organisations that demand full control over every part of their workflow, and it is also available as part of managed service offerings. tx darwin is for anyone who wants flexible, cloud- and on-prem-friendly licensing for full FinOps control.
In an industry that is moving to software-only workflows, tx darwin's adaptability and diversity are its greatest assets. By integrating logic and processing blocks, it opens doors for the world's largest broadcasters and telecoms organisations to explore new content formats and delivery mechanisms while addressing emerging workflows.
Telestream
Live Capture as a Service
With live content, every second counts and customers need to capture every live moment with precision — that is why at IBC2024 we showcased our new offering: Live Capture as a Service. It delivers the powerful media ingest and live capture capabilities Telestream is known for in a dynamically scalable, cloud-hosted solution, reducing infrastructure costs and making production faster and easier.
Achieving seamless live capture is an art that demands precision and efficiency. Flawless ingest enables every global headline, sports match and local story to be delivered with the clarity it deserves. Live Capture as a Service aggregates and records streams from anywhere, on demand, and is perfectly suited for remote production and real-time collaboration.
With our newest offering‚ Live Capture as a Service, we are bringing the powerful media ingest and live capture capabilities that we are known for, into a real-time, on-demand, native, cloud-based service offering. Live Capture as a Service accepts and reads media streams reflected into the cloud, writes them to as many output formats as the user requires, and forwards those streams to media workflows in real time.
Production companies have told us that they struggle to set up and maintain capture systems in distributed and remote environments because of unpredictable costs, resource allocation issues and limited format support. Live Capture as a Service allows dynamic, full control of cloud resource and instance provisioning, resulting in more predictable costs, and supports any output format clients may need.
Because Live Capture as a Service is a hosted, cloud-based offering, it enables setup for remote production environments with no up-front cost or build-out required. Its cloud-native flexible recording capacity adds robustness and reliability to the service, making sure that customers never miss a critical event (newsrooms, sports arenas, etc.). The service offers combined Probing and Recording to control content quality while ingesting, addressing audience demand for high-quality, timely media experiences.
As the demand for live content intensifies, media companies are under mounting pressure not only to quickly capture and process live feeds for real-time broadcasting, but also to navigate the complexity of setting up and maintaining live capture systems within distributed and remote work environments. Telestream is introducing Live Capture as a Service to simplify the live capture of content from any location in real time, allowing production teams to bypass the traditional hurdles of remote setup and maintenance. With this solution, media companies can overcome the limitations of traditional physical infrastructure, facilitating a faster transition from live capture to broadcast and optimizing production workflows. This new flexibility not only accelerates content delivery but also empowers companies to capture and monetize additional content, unhindered by the physical constraints of onsite infrastructure.
Telos Alliance
Jünger Audio flexAI Platform
Jünger Audio, represented globally by Telos Alliance, introduces exciting new features to the renowned flexAI platform with the release of flexAI software version 2023-12r4. This major software update for the Jünger Audio AIXpressor and flexAIserver introduces a plethora of enhancements, bringing an abundance of performance and convenience features to these already indispensable products.
One of the standout features of V2023-12r4 is the comprehensive overhaul of AoIP integration, which now includes full support for SMPTE ST 2110-30 and -31 Levels A, B and C, as well as Livewire+ AoIP streams, while compatibility with NMOS IS-04 and IS-05 further enhances its utility in ST 2110-based workflows.
On the production and processing front, improvements include an enhanced voice-over automixer for optimal dialog balance, while Jünger Audio's latest upmixer technology introduces center-channel replacement capabilities for immersive audio programs. Additionally, the introduction of a matrix mixer with up to 16 channels and multichannel delay expands flexibility and functionality for users; native browser-based audio monitoring enables monitoring from any patch point in the processing chain.
FM broadcasters will appreciate Jünger Audio's renowned MPX and Pre-Emphasis Limiter, available on the flexAI platform as the new FM Conditioner, providing transparent loudness management in the composite domain in regions adhering to the ITU-R BS.412 MPX Power regulations.
Television broadcasters benefit from support for the Serial ADM professional metadata format, facilitating seamless exchange of metadata in next-generation immersive audio workflows. Security enhancements include an HTTPS web interface, while convenience features such as real-time CPU monitoring and an integrated web-based console terminal streamline system setup and maintenance.
With the addition of all this power to an already impressive family of audio handling products, the Jünger flexAI Platform sets a new standard of excellence in audio processing technology, empowering broadcasters with unparalleled performance and convenience.
Telos Alliance
Telos Infinity Virtual Commentator Panel for Infinity VIP Virtual Intercom System
The Telos Infinity Virtual Commentator Panel (VCP) is a new, unique extension to the popular Telos Infinity VIP IP-based virtual intercom platform. VCP enables remote contributors to access commentary or announcer stations from anywhere in the world via the VIP intercom platform on any PC, tablet or smartphone with a modern HTML5 browser, or via the Telos Infinity VIP app for iOS and Android devices. VCP replicates the physical commentator panels and announcer stations familiar to voice artists, commentators and radio and television reporters, making VCP instantly usable with virtually no learning curve. Features include the ability to mute the on-air microphone feed when using comms or speaking "off air," and to create a custom monitor mix of IFB, program, mix-minus and aux sources — each with individual gain controls — all while still using dedicated VIP intercom keys.
FEATURES INCLUDE:
• Full-time on-air output path with separate on-air and intercom mic channels
• Automatic muting of on-air channel when comms channels are in use
• Programmable inputs with individual rotary gain controls for IFB, program, mix-minus and aux sources for easy creation of custom monitor mixes
• Simple add-on to existing Infinity VIP installations
• Seamless integration with Elgato Stream Deck + through a dedicated Stream Deck plug-in, for applications where a hardware panel is desired
Telos Infinity Virtual Commentator Panel is available as a licensed option on Telos Infinity VIP systems running v3.0 software or later, in 4, 8 and 12-button configurations.
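The monitor-mix and auto-mute behaviour described above can be sketched in a few lines. This is an illustrative model only, not Telos code: the function names, source names and gain values are invented for the example.

```python
# Hypothetical sketch of the VCP behaviour described above: each monitor
# source (IFB, program, mix-minus, aux) has its own gain, and the on-air
# mic feed is muted whenever a comms key is active.

def build_monitor_mix(sources, gains):
    """Mix mono sample frames with a per-source linear gain."""
    length = len(next(iter(sources.values())))
    return [
        sum(sources[name][i] * gains.get(name, 1.0) for name in sources)
        for i in range(length)
    ]

def on_air_level(mic_level, comms_active):
    """Auto-mute: the on-air feed drops to zero while comms are in use."""
    return 0.0 if comms_active else mic_level

sources = {"ifb": [0.2, 0.2], "program": [0.5, 0.4], "aux": [0.1, 0.0]}
gains = {"ifb": 1.0, "program": 0.5, "aux": 2.0}
mix = build_monitor_mix(sources, gains)
```

The individual rotary gain controls on the panel correspond to the `gains` map here; the automatic on-air mute is simply a gate on the on-air channel, independent of the monitor mix.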
ThinkAnalytics
ThinkContentIntelligence (ThinkCI)
Operators and content owners face a significant challenge: they invest in thousands of hours of content that often goes under- or un-consumed.
Operators end up with vast content libraries that are underutilised, alongside older/back-catalogue content that is not generating revenue.
ThinkContentIntelligence (ThinkCI) addresses this issue. It allows operators to efficiently discover and present content to viewers, maximizing the value of their content.
ThinkCI is the first solution in the industry that harnesses the power of AI, metadata and discovery to unlock hidden value in video providers' and content owners' libraries.
Using the industry's most granular metadata, ThinkCI automates content enrichment. By eliminating the time spent on manual tagging and organisation and by better managing their assets, content owners, curators and editorial teams achieve huge operational efficiencies. At the same time, customers increase audience engagement by surfacing hidden gems that will appeal to viewers based on their previous viewing behaviour.
Using AI and machine-learning algorithms, ThinkCI goes beyond traditional content management systems to understand context, identify themes, and even predict content performance. Its AI-driven insights enable smarter decision-making and keep operators abreast of market trends.
Its intelligent search, based on ThinkAnalytics' 20 years in this space, and use of more than 40,000 metadata tags streamlines the process of finding hidden gems in a content library.
While this breathes new life into older and niche long-tail content, it also makes it easy to create new thematically linked collections and curated picklists for recommending to viewers.
New content collections — such as "1990s comedies" — can also be packaged as AVOD services or FAST channels that increase viewing time and ad impressions. These packages of content can also be licensed to other platforms or broadcasters.
Furthermore, by exploring franchises and themes, editors can quickly assemble collections, like a "Superhero Summer" promotion, to drive viewership.
The platform's strength lies in its versatility, offering a comprehensive suite of tools for diverse monetization strategies: enhancing AVOD offerings, facilitating content licensing and enabling dynamic content bundling. Its ability to quickly identify market trends is another strength, allowing content owners to stay ahead in a competitive marketplace.
FEATURES INCLUDE:
• AI-generated thematic collections: explore automatically curated content groups
• Curated picklists: create targeted recommendations to boost engagement
• Increase watch time and ad impressions with highly relevant collections like "Feel-good Comedies" or "Edge-of-Your-Seat Thrillers."
• Content licensing optimization: package and license content for other platforms or broadcasters. Discover forgotten series that align with current trends and license them to streaming platforms.
• Long-tail content activation: breathe new life into older or niche content. Identify and promote old documentaries relevant to current events.
• Content bundling: create seasonal or trend-based bundles, such as holiday-themed collections.
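The thematic collections described above boil down to querying per-title metadata tags. The sketch below is purely illustrative, assuming a toy catalogue and invented tag names; it is not ThinkCI's actual API.

```python
# Illustrative only (not ThinkCI's API): building a thematic collection
# such as "1990s comedies" from per-title metadata tags.

catalog = [
    {"title": "Office Laughs", "year": 1994, "tags": {"comedy", "workplace"}},
    {"title": "Cape & Cowl", "year": 2008, "tags": {"superhero", "action"}},
    {"title": "Beach Banter", "year": 1997, "tags": {"comedy", "romance"}},
]

def collect(catalog, required_tags, year_range=None):
    """Return titles carrying all required tags, optionally within a year range."""
    hits = []
    for item in catalog:
        if not required_tags <= item["tags"]:
            continue
        if year_range and not (year_range[0] <= item["year"] <= year_range[1]):
            continue
        hits.append(item["title"])
    return hits

nineties_comedies = collect(catalog, {"comedy"}, (1990, 1999))
# → ["Office Laughs", "Beach Banter"]
```

With tens of thousands of tags per library, the same filtering idea also powers curated picklists and licensing packages like the "Superhero Summer" example.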
TSL
PD14PMiD Intelligent Power Distribution Unit
In an era when guaranteed uptime and efficient power management are essential, TSL introduces the PD14PMiD Intelligent Power Distribution Unit (PDU) at IBC 2024. This cutting-edge PDU is engineered to meet the evolving demands of broadcast and data centre systems managers who face the challenge of maintaining continuous operations across diverse locations while minimising costs and environmental impact.
MEETING THE CHALLENGES OF MODERN BROADCASTING
Broadcasters face significant challenges in ensuring reliable operations across multiple locations while managing energy consumption and adhering to new sustainability regulations. The PD14PMiD addresses these challenges with its advanced, networked power distribution capabilities, providing real-time remote monitoring and control of rackmounted equipment. This intelligent power management ensures equipment stays operational, minimises downtime and reduces the overall carbon footprint.
INNOVATIVE POWER MANAGEMENT AND CONTROL
The PD14PMiD stands out with its ability to offer precise, reliable power management. With 14 individually fused outlets and dual inlets with automatic changeover, it provides continuous power protection, ensuring uninterrupted operations even in the event of a primary power failure. This is crucial for live broadcasts, where every second counts. The PD14PMiD also supports sequential or delayed outlet power-up, protecting against circuit overloads and further enhancing reliability.
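The sequential power-up mentioned above protects against circuit overloads by staggering switch-on times so inrush currents never coincide. The sketch below illustrates the idea only; the delay values are hypothetical, not TSL specifications.

```python
# Illustrative sketch of sequential outlet power-up: a fixed stagger
# between outlets keeps inrush currents from coinciding. (Stagger value
# and schedule format are invented for illustration.)

def power_up_schedule(outlets=14, stagger_s=0.5):
    """Return (outlet index, switch-on time in seconds) pairs."""
    return [(n, round(n * stagger_s, 3)) for n in range(outlets)]

schedule = power_up_schedule(outlets=4, stagger_s=0.5)
# outlet 0 at t=0.0 s, outlet 1 at t=0.5 s, and so on
```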
STREAMLINED OPERATIONS AND COST EFFICIENCY
By allowing remote monitoring and control, the PD14PMiD reduces the need for on-site interventions, cutting maintenance costs and improving operational efficiency. Its ability to manage power usage intelligently means broadcasters can power down equipment during off-peak times, leading to significant energy savings — up to 30% — measurable when combined with TSL's monitoring software Insite. This approach not only reduces operational costs but also supports compliance with emerging energy-efficiency standards.
A STRATEGIC ASSET FOR FUTURE-PROOF BROADCAST OPERATIONS
The PD14PMiD Intelligent PDU is not just about distribution; it's a strategic solution that enables broadcasters to manage power intelligently, reduce costs, and comply with regulatory requirements. By offering a powerful combination of reliability, efficiency and sustainability, the PD14PMiD helps broadcasters maintain a competitive edge in a fast-paced industry.
In a broadcast landscape where operational efficiency and environmental responsibility are increasingly important, the PD14PMiD sets a new standard for intelligent power management. Learn more about the future of broadcast power management with TSL's PD14PMiD and ensure your operations run smoothly, efficiently and sustainably.
TSL
PAM2-12G
TSL's PAM2-12G Audio Monitoring Unit redefines audio monitoring for modern media environments, offering broadcasters an unparalleled blend of flexibility, precision, and ease of use. Building on the trusted legacy of TSL's renowned PAM (Precision Audio Monitoring) range, the PAM2-12G introduces transformative innovations that cater specifically to the evolving needs of today's broadcast landscape. For broadcasters striving to stay ahead of the curve, the PAM2-12G is not just an upgrade; it's a necessity. This 19-inch x 2RU unit is designed to seamlessly handle the full spectrum of audio and video signals, from legacy AES and analogue formats to cutting-edge UHD-SDI and advanced IP sources like SMPTE ST 2110/2022-6 and Dante.
While Tier-1 broadcasters have long relied on the PAM series for its reliability, intuitive interface and robust design, the PAM2-12G elevates these attributes, offering groundbreaking features that seamlessly integrate with UHD broadcast demands and IP-centric workflows. This latest model doesn't just keep pace with industry changes — it anticipates them, providing a future-proof solution that blends the best of past reliability with cutting-edge advancements.
KEY INNOVATIONS DRIVING THE PAM2-12G:
• Unmatched Versatility: The PAM2-12G supports a comprehensive range of signal types — including SD/HD/3G/12G-SDI with 16-channel audio de-embedding, ST 2110/2022-6 and Dante IP streams, as well as balanced and unbalanced AES and analogue audio. This all-in-one approach reduces the need for multiple devices, streamlining operations and enhancing efficiency.
• Advanced Display and Monitoring: Dual high-resolution 3.4-inch LCD displays allow for simultaneous monitoring of video, audio and metadata, providing an at-a-glance overview that's essential for live broadcasting environments. The intuitive "scroll-to-hear" function, along with 16 audio bargraphs and customisable user presets, facilitates swift navigation and faster response times. A configurable PPM display allows users to select the most popular international metering scales.
• Comprehensive Compliance: The PAM2-12G's loudness monitoring is compliant with ITU BS1770 and EBU R 128 standards, ensuring broadcasters can meet both quality and regulatory requirements effortlessly. Optional Dolby E, D and D+ decoding with metadata display and downmixing capabilities further extend its utility, making it a versatile choice for diverse audio applications.
• Seamless IP Integration: Designed for the demands of IP-centric workflows, the PAM2-12G features dual 10 Gig/E interfaces for seamless connectivity with COTS IP fabrics and supports NMOS and Ember+ IP control protocols for both in-band and out-of-band management. The inclusion of primary and redundant 1 Gig/E Dante/AES67 ports enables monitoring of up to 64 AoIP channels, making it an ideal solution for complex, multichannel environments.
• Enhanced Remote Management: With a built-in web interface for remote monitoring and control, the PAM2-12G offers real-time system status updates and IP flow monitoring. This capability not only reduces downtime but also enhances reliability, ensuring broadcasters can maintain optimal performance even in high-stakes situations.
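The EBU R 128 compliance mentioned in the features above amounts, at its simplest, to checking integrated programme loudness against a target of -23.0 LUFS within a small tolerance. The sketch below illustrates that check only; measuring integrated loudness itself requires the full ITU-R BS.1770 K-weighted, gated algorithm, which units like the PAM2-12G implement.

```python
# Simplified illustration of an EBU R 128 loudness check: programmes are
# normalised to -23.0 LUFS, with a permitted deviation of +/-0.5 LU in
# general (and +/-1.0 LU where exact normalisation is impractical, e.g.
# live). This sketch assumes the integrated loudness has already been
# measured per ITU-R BS.1770.

def r128_compliant(integrated_lufs, target_lufs=-23.0, tolerance_lu=0.5):
    """True if an integrated loudness reading is within tolerance of target."""
    return abs(integrated_lufs - target_lufs) <= tolerance_lu

r128_compliant(-23.2)   # within +/-0.5 LU of the -23.0 LUFS target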
By integrating versatile signal support, user-friendly operation and robust IP-based monitoring capabilities, TSL's PAM2-12G Audio Monitoring Unit sets a new benchmark for excellence in broadcast audio technology. It delivers the innovation, flexibility and reliability that today's broadcasters need to navigate an ever-evolving media landscape and maintain a competitive edge.
TVU Networks
TVU MediaHub
TVU MediaHub introduces a new paradigm in live video broadcasting by flawlessly merging cloud-based and on-premise video routing, a feature unique in the broadcasting industry. Designed for the dynamic scalability and resilience needs of live broadcasting, this hybrid video router is capable of handling an unlimited number of inputs and outputs across both IP and SDI formats. Its standout feature is the ability to manage any video format input and route any format output.
At its core, TVU MediaHub is built around a cloud-centric architecture that eliminates the traditional constraints of hardware-dependent routing. This approach enables broadcasters to manage complex on-premise and digital signal matrices easily through a cloud user interface, which is a first in cloud broadcast routing. The system supports a vast array of video formats, including SDI, NDI, SRT and SMPTE ST 2110, and ensures seamless routing to various destinations such as SDI, RTMP, HLS, MPTS, YouTube or Facebook.
Chosen by AWS as its flagship routing solution to manage all feeds on their IBC2024 booth, TVU MediaHub stands out as the only solution capable of routing anything in, anything out, with unlimited inputs and outputs, from anywhere to everywhere. This distinction underscores its ability to bridge all AWS partners' formats into one cohesive ecosystem.
One of the key advantages of TVU MediaHub is its unified resource management capability, which allows for the management of both on-premise hardware and cloud-based IP signals from one single cloud interface. This capability is critical for broadcasters looking to adapt to the rapidly evolving landscape of content consumption, where flexibility, efficiency and scalability are paramount. The platform's cloud-based approach allows users to manage existing on-premise resources at only $35/month and easily scale additional inputs/outputs in the cloud for occasional live events or peak traffic without the need for further hardware infrastructure.
TVU MediaHub empowers operators with an intuitive interface where complex tasks like
decoding, scaling and encoding are automated through simple drag-and-drop actions. This AI-driven automation allows broadcasters to quickly adapt and scale operations, letting engineers focus on more complex projects. The platform also offers a Full Frame Live Preview for real-time decision-making and a flexible API for customization.
Integrating UDX for cloud-based color correction, white balancing, chroma and vectorscope monitoring on all inputs and outputs, TVU MediaHub now features comprehensive health checks, providing detailed visibility and monitoring of signal decoding and switching processes to enhance transparency and reliability in cloud-based broadcasting.
Furthermore, TVU MediaHub eliminates the need for physical PCs, HDMI to SDI converters, and extensive cabling, leading to a more adaptable setup. It uniquely positions itself for sustainable growth in the digital era by bridging the gap between traditional hardware and digital workflows. Economically, it starts at just $35 per month when active, with no charges during downtime, providing a cost-effective solution for broadcasters.
By addressing the complex workflows of modern broadcasting with its interoperability and transition to cloud solutions, TVU MediaHub stands out as a practical solution for today's broadcasting challenges, particularly as IP sources become more prevalent.
VIDA Connect2
VIDA Connect2: Redefining Media Asset Management for Content Rights Holders
VIDA Connect2 is a revolutionary self-service media asset management (MAM) application tailored for content rights holders and their licensees. By integrating rights management with content inventory, it empowers buyers to self-serve, accessing all relevant marketing materials and broadcast masters from a single, secure location.
Designed for film and TV content owners, Connect2 transforms media asset management, delivering secure, efficient and intuitive access. Built into the well-established VIDA Content OS, Connect2 enhances operational efficiency and significantly reduces administrative burdens, allowing content owners to focus on creating exceptional media.
Connect2 has been developed in close partnership with Seven.One Studios International, the international TV distribution arm of Seven.One Studios and part of ProSieben. They will be the first company to implement Connect2 with the objective of reducing administrative overhead in their content operations and simplifying the everyday challenges within their complex media supply chain, which supplies award-winning TV shows to over 200 buyers and end destinations globally.
KEY FEATURES OF CONNECT2
What sets Connect2 apart is its powerful, user-friendly interface that enables even non-technical users to easily navigate and retrieve assets. Advanced security protocols ensure that only authorised users have access to sensitive materials.
With real-time updates synchronised with VIDA Content OS, licensees are always working with the latest media files and metadata. Its advanced search features make finding assets quick and efficient, while comprehensive management tools allow users to preview, download and organise media files, all within a streamlined workflow.
TAILORED USER EXPERIENCE
Content owners can customise permissions,
assigning different access levels to various licensees. Public and private portals offer a flexible, tailored experience, catering to a range of audiences. This ability to create specialised user environments adds an additional layer of control and personalisation to the Connect2 platform.
SEAMLESS INTEGRATION AND COST EFFICIENCY
Connect2's integration within the VIDA Content OS guarantees smooth, secure, and scalable performance on a robust, cloud-native platform. By allowing licensees to manage assets independently, Connect2 reduces the administrative load on content owners, leading to significant cost savings by minimising the need for manual support.
INNOVATIVE AND SECURE
Built to meet the highest standards of technical excellence, Connect2 delivers advanced management capabilities, real-time synchronisation and robust security features. These ensure that valuable assets are safeguarded while remaining easily accessible to authorised users.
SCALABLE AND FUTURE-PROOF
Connect2's scalable design ensures it can grow with the evolving needs of content owners, offering long-term value and reliability. The application's self-service model provides unmatched flexibility and control, streamlining workflows while maintaining efficiency.
TRANSFORMING MEDIA ASSET MANAGEMENT
VIDA Connect2 is more than just a MAM solution — it is a transformative tool that addresses the complex needs of today's content owners. By simplifying workflows, enhancing security and delivering a user-friendly experience, Connect2 sets a new benchmark in media asset management. Its innovative approach offers a future-proof solution, empowering content owners to manage their media assets with ease and confidence.
Vidispine — an Arvato Systems brand
Web Render Engine
Join the Renderless Revolution — Be the first on air with the Power of Vidispine's Web Render Engine
In the fast-paced world of news production, being the first to break a story is paramount. However, the technology currently in use is inherently designed in a way that introduces delays.
These delays can result in missing critical windows for impactful reporting, allowing competitors to break stories first and potentially leading to a loss of market share. In the news industry, speed, agility and efficiency are essential for success.
THE CHALLENGE OF RENDERING TIMES:
Using the current technology, rendering content with nonlinear editing systems (NLEs) poses a significant challenge in modern news production workflows. Even basic changes, such as cutting clips, adding transitions or adjusting audio levels, often require time-consuming rendering processes to finalize content for broadcast.
THE RENDERLESS SOLUTION: VIDISPINE'S WEB RENDER ENGINE
Vidispine's Web Render Engine (WRE) revolutionizes news production by allowing content to be stored and managed efficiently without creating new files for every sequence, thus removing the need for rendering processes. While most modern MAM systems store sequences, edits, etc., purely as metadata, they still require the content to be rendered before the final review, approval and layout. With the WRE, this final render stage is not required. When users edit a sequence, cut a clip or adjust audio levels, only metadata is generated, eliminating the need for file movement or rendering.
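The metadata-only editing described above can be pictured as an edit decision list: a sequence is just a list of (asset, in-point, out-point) records, and cutting a clip rewrites those records without touching any media file. The sketch below is a conceptual illustration under that assumption, not Vidispine's actual data model.

```python
# Conceptual sketch (not Vidispine's data model): a sequence stored purely
# as metadata. Cutting a clip rewrites these records only; no media file
# is copied or re-rendered.

sequence = [
    {"asset": "clip_A.mxf", "in": 0.0, "out": 12.5},
    {"asset": "clip_B.mxf", "in": 3.0, "out": 9.0},
]

def cut_segment(segment, at):
    """Split one segment in two at an offset: a metadata-only edit."""
    first = {**segment, "out": segment["in"] + at}
    second = {**segment, "in": segment["in"] + at}
    return [first, second]

def duration(sequence):
    """Total playable duration, resolved from metadata alone."""
    return sum(seg["out"] - seg["in"] for seg in sequence)

# Cut the first clip at 5.0 s; total duration is unchanged because the
# edit touched metadata, not media.
sequence[0:1] = cut_segment(sequence[0], 5.0)
```

Because playback resolves these records directly against the stored media, the sequence is ready for review the instant the metadata is written, which is the essence of renderless production.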
KEY BUSINESS BENEFITS OF RENDERLESS PRODUCTION
• Be the first to break the news and gain more market share
• Enormous Time Savings: By enabling immediate playback and quick access to sequences, Renderless Production significantly reduces review and
approval times, accelerating the entire production process.
• Optimized Storage for Complex Projects: Our advanced media repository, VidiCore, allows you to store intricate sequences effortlessly, ensuring you can access and organize your projects quickly and efficiently.
• Enhanced Collaboration Across Editing Platforms: Seamlessly integrate with popular NLEs like Adobe Premiere Pro, allowing for smooth sequence exchange and improved collaboration among team members.
• Streamlined Transition to Broadcasting: Play sequences directly on video servers, simplifying the path from editing to broadcasting and ensuring a faster delivery to your audience.
• Real-Time Publishing During Recording: With Vidispine you can publish sequences even while recording is still underway, enabling you to provide immediate updates and content delivery, keeping your audience engaged.
CONCLUSION
In the fast-paced world of news and sports production, every second counts. Vidispine's Web Render Engine revolutionizes the industry by eliminating the delays caused by rendering. This breakthrough enables immediate playback, seamless collaboration and the flexibility to make last-minute changes, ensuring your content reaches the audience faster and with greater accuracy.
By integrating Vidispine's WRE into your workflow, you gain efficient storage, direct playout capabilities and the ability to publish while recording. This renderless approach is a game-changer, empowering production teams to deliver highquality, real-time content that keeps audiences informed and engaged.
Embrace the future of media production with Vidispine's Web Render Engine and stay ahead in the race to be the first on air.
Vimond
VIA App Builder
The VIA App Builder, which made its debut at IBC2024, is a groundbreaking new tool that effortlessly creates dynamic streaming front-end applications. It provides a low barrier to entry for content creators, broadcasters and corporate entities looking to launch their streaming services. By removing the complexities associated with application development and content management, VIA App Builder allows creators to focus on what truly matters: delivering compelling content to their audiences.
VIA App Builder is designed to simplify the creation and management of front-end video streaming applications. This easy-to-use platform accelerates the development process by offering dynamic building capabilities and seamless Content Management System (CMS) integration, courtesy of its connection to the Vimond VIA platform. It enables creators and broadcasters to swiftly launch their streaming services, reducing the focus on technical complexities and allowing more time for content creation. VIA App Builder supports real-time updates and ensures compatibility across various devices, facilitating direct content delivery and monetisation to a diverse audience.
At its core, VIA App Builder bridges content creators and editorial teams to craft engaging, curated video streaming applications quickly. Operating atop the Vimond VIA video CMS, the platform dismantles the technical barriers
traditionally encountered in streaming service development. It simplifies the creation, management and monetisation of digital content, thus empowering users to concentrate on producing engaging content for their audiences.
VIA App Builder's intuitive interface allows users to design comprehensive streaming applications without extensive coding knowledge. Features like pages, menus, carousels and sliders enhance user interaction and engagement, while its integration with Vimond VIA facilitates efficient content management and delivery. The builder's device-agnostic approach guarantees accessibility and optimisation for all devices, extending the reach of content creators to a broader audience.
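A no-code builder like this typically produces a declarative layout description behind the scenes. The sketch below imagines what such a description might look like; the field names and structure are invented for illustration and are not Vimond's actual schema.

```python
# Hypothetical illustration of a declarative streaming-app layout with
# pages, menus and carousels. Field names are invented, not Vimond's schema.

app_layout = {
    "menu": ["Home", "Live", "Library"],
    "pages": [
        {
            "name": "Home",
            "components": [
                {"type": "carousel", "source": "editorial:featured"},
                {"type": "slider", "source": "collection:new-releases"},
            ],
        }
    ],
}

def component_types(layout, page_name):
    """List the component types declared on one page of the layout."""
    for page in layout["pages"]:
        if page["name"] == page_name:
            return [c["type"] for c in page["components"]]
    return []
```

The point of a declarative layout is that updating the app is a data change, not a code change, which is what enables real-time updates across devices.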
VIA App Builder revolutionises streaming service development by making it accessible to creators of all sizes. It levels the competitive field, enabling smaller content producers to contend with larger organisations by eliminating development hurdles and fostering creativity. The platform's emphasis on real-time updates and universal device compatibility ensures content remains current and widely accessible, enhancing viewer experience and enabling creators to forge stronger audience connections.
In today's content-driven landscape, VIA App Builder empowers creators to take charge of their digital footprint and revenue streams. Its comprehensive approach to streaming service development combines ease of use with efficiency, significantly lowering the barrier to entry for a wide range of creators. By simplifying application development and content management, VIA App Builder allows creators to focus on delivering compelling content, making it a deserving candidate for recognition in the streaming technology domain.
VITEC
MGW Diamond-H
The MGW Diamond-H is a compact and portable 4K HDMI Encoder providing best-in-class video quality over industry standard video/audio connectivity. It boasts impressive encoding capabilities of up to 4x channels from 2x sources at HD resolution, or 1x at 4K60p, empowering users to capture and stream content with unparalleled quality and minimal latency.
With 2x HDMI with loop-through and 1x 12G SDI inputs, the MGW Diamond-H is designed to seamlessly integrate into existing setups, suitable for both IPTV distribution or site-tosite streaming within corporate, sports or broadcast industries. The device can be powered via Power over Ethernet (PoE) making integration as seamless as possible, and creating a robust, secure and reliable solution for any IPTV distribution needs.
The MGW Diamond-H features Ultra High Definition and High Dynamic Range (HDR) support, enabling it to capture 4Kp60 HDR10 or HLG video. The encoder also possesses the next generation of HEVC (H.265) compression support, reducing network bandwidth utilisation by up to 50% compared to H.264. On top of this, stream protection is installed within for a reliable video/audio and metadata transmission.
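The "up to 50%" bandwidth claim above is easy to make concrete. The figures in this sketch are examples for illustration only, not VITEC specifications.

```python
# Back-of-envelope illustration: if HEVC needs roughly half the bitrate of
# H.264 for comparable quality, a 16 Mbit/s H.264 stream drops to about
# 8 Mbit/s. (Example figures, not VITEC specifications.)

def hevc_bitrate(h264_mbps, savings=0.5):
    """Estimated HEVC bitrate given an H.264 bitrate and a savings ratio."""
    return h264_mbps * (1 - savings)

estimate = hevc_bitrate(16)   # about 8 Mbit/s
```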
The versatile Diamond-H can be applied to many applications such as IPTV distribution, streaming desktop/laptop/camera content to devices on the local network, IP video contribution and inter-site video streaming. It is also ideal for field-based applications thanks to its portability and impressive SWaP (size, weight and power) characteristics.
Corporate, sports and broadcast markets are continuously looking for the best end-to-end solution to provide customers with the utmost reliable and high-quality streaming and IPTV distribution, and the Diamond-H is VITEC's answer to this market need.
VITEC's MGW Diamond-H requires minimal setup, so it can be used straight away and deliver high-quality results from the get-go. It operates with a password-protected, secure web-based remote management interface (HTTPS), creating a private solution ideal for any operation. The encoder also includes an autostart mode that restores the saved configuration after a power cycle, as well as status LEDs for power and activity indication. Setting a new standard within the market, the MGW Diamond-H delivers a solution built for high-quality results.
Vizrt
TriCaster Vizion
Imagine a world where creativity flows unhindered by technical barriers, and where every production, no matter the scale, can deliver the highest possible quality. This is the world TriCaster Vizion opens for its users; it's a solution that empowers broadcasters, event producers and media organizations to redefine what they can achieve.
In an industry where change is the only constant, TriCaster Vizion is designed to adapt. It's the first TriCaster to offer perpetual and subscription licensing options, making it accessible to organizations of all sizes and ensuring it remains a viable choice as budgets and needs evolve. The flexibility doesn't stop there; with two distinct hardware platforms, users can choose the deployment that best fits their current and future requirements.
This adaptability is crucial in a time when media production is increasingly distributed, requiring tools that can seamlessly integrate across different environments. Whether it's a stadium broadcast, a multistage event or a corporate webinar, TriCaster Vizion offers the scalability and versatility needed to handle any scenario.
TriCaster Vizion isn't just about keeping up with industry standards; it's about setting them. The system's advanced AI capabilities automate complex tasks, freeing up production teams to focus on creativity rather than troubleshooting.
The integration of TriCaster Graphics powered by Viz Flowics provides access to industry-leading graphics tools that enhance the viewer experience, making content not just seen, but remembered.
These capabilities aren't just add-ons — they are integral to the system's design, reflecting a deep understanding of what today's media professionals need. With capabilities designed to optimize efficiency and elevate production quality, TriCaster Vizion enables users to deliver results that consistently exceed expectations.
At its core, TriCaster Vizion is about giving users control. It's built to be intuitive, allowing teams to get up and running quickly, regardless of their previous experience. The IP-based technology supports seamless integration with existing workflows, enabling organizations to enhance their capabilities without overhauling their infrastructure.
TriCaster Vizion also anticipates the future of production, where cloud-native, web-based platforms become the norm. Its comprehensive connectivity options ensure that teams can produce content from virtually anywhere, opening new possibilities for remote and hybrid productions.
In a growing market, what sets TriCaster Vizion apart is its focus on real-world applications. It's a solution that understands the challenges media professionals face and addresses them head-on with thoughtful, user-centric features.
With flexible licensing options that cater to all production needs, AI-driven automation that simplifies complex tasks and advanced graphics integration, TriCaster Vizion is designed to empower users to push creative boundaries, streamline workflows and deliver polished, professional content.
TriCaster Vizion stands out because it's a complete solution tailored to the evolving demands of live production. It offers the perfect blend of flexibility, advanced technology and usercentric design, making it an indispensable tool for any media professional.
TriCaster Vizion is the ultimate investment in futureproofing production — equipping teams with the power to innovate, adapt and consistently deliver excellence in an increasingly competitive landscape.
Vubiquity
MetaVU
MetaVU is a title management solution that aggregates, enriches, localizes and delivers media and entertainment (M&E) content metadata at a global scale. The cloud-native solution works with industry-standard metadata records (CableLabs and EMA), streamlines integrations (EIDR, Gracenote, IMDb), offers seamless configuration of outputs (2,000+ OTT and cable endpoints already in use), and enables rich consumer experiences while providing actionable business insights. Initially developed for Vubiquity's licensing division, MetaVU SaaS is available to customers to manage titles and normalize libraries.
METAVU FEATURES:
• Metadata Management Module: Delivers a single source of truth by leveraging tools to ingest and normalize metadata. It supports a wide range of content suppliers, along with a flexible hierarchical metadata management model and an industry-leading metadata distribution engine.
• Metadata Enrichment Module: Enables access to a multitude of third-party and AI/ML enrichment sources, inclusive of audience and critics ratings, robust parental guidance ratings and advisories, social data, tags to power recommendations and marketing, originated metadata, access to localized metadata and machine learning derived tags to power search.
Thanks to its multisource data enrichment capabilities, customers achieve a faster time to market and shorter turnaround times for new schedules and launches. With detailed genres, keywords and mood categories/tags, our continuously enhanced metadata delivers more accessible content discovery, more intelligent recommendations and higher viewer engagement.
The platform comes with optional library services to clean up titles and create metadata or correct existing records.
WHY SHOULD METAVU BE CONSIDERED?
Metadata is the cornerstone of your media operations and at the heart of your media supply chain. With MetaVU, customers can consolidate all their title data, including externally sourced and linked sources, into a centralized, intuitive platform that eliminates data silos and provides a single source of truth for content licensing and distribution teams. You can leverage the power of AI for title metadata enrichment to deliver exceptional, curated audience experiences at scale.
The key differentiator of MetaVU is that it can deliver to a multitude of different endpoints, supporting thousands of unique business rules for targeted transformations for our platform or distribution partners to ingest. This delivers content to nearly one billion aggregated viewers across 2,000+ unique endpoints.
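The per-endpoint business rules described above behave like small, declarative transformations applied to a canonical record before delivery. A minimal illustration, with entirely hypothetical rule operations and field names (MetaVU's actual rule engine is not public):

```python
def transform_for_endpoint(title, rules):
    """Apply one endpoint's business rules to a canonical title record.

    Illustrative only: rename, drop and truncate stand in for whatever
    targeted transformations a real distribution endpoint requires.
    """
    out = dict(title)
    for rule in rules:
        if rule["op"] == "rename":
            out[rule["to"]] = out.pop(rule["from"], None)
        elif rule["op"] == "drop":
            out.pop(rule["field"], None)
        elif rule["op"] == "truncate":
            value = out.get(rule["field"])
            if isinstance(value, str):
                out[rule["field"]] = value[:rule["max_len"]]
    return out
```

With thousands of endpoints, each carrying its own rule list, the same canonical record fans out into thousands of tailored deliveries.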
MEASURABLE BUSINESS IMPACT
The use of MetaVU had the following impact on real-world customers (names cannot be provided or published):
• Tier 1 Broadcaster: Cut the time to publish a title, from receipt to end consumers, by up to 72 hours, delivering content to viewers within minutes, with up to five times faster response to metadata gaps and errors.
• Tier 1 Content Aggregator: Fully eliminated title record duplication, reduced metadata translation costs by 11% and increased productivity by 22%, for total yearly cost savings of $1.1 million.
ACROSS THE BOARD, METAVU CUSTOMERS HAVE:
1. Improved metadata delivery time (minutes after receipt).
2. Triggered enrichment and aggregation earlier, based upon available data, enabling a five-fold faster response to metadata gaps and errors.
3. Eliminated duplicate title records and reduced title servicing costs, especially duplicate translation costs.
4. Automated metadata augmentation and increased monetization opportunities.
Zixi ZEN Master: Live Events Manager
ZEN Master serves as the live video orchestration and telemetry control plane, enabling Zixi users to manage extensive configurations and monitoring across the Zixi Enabled Network, along with Zixi's live streaming platform, devices and appliances. Through ZEN Master video management software, media entities can expand their reach, enhance production efficiency and significantly reduce operational expenses. Within the ZEN Master software suite lies a distinctive Live Event Schedule module, meticulously crafted for live events management and orchestration. This module is tailored to streamline outside broadcast production and distribution on a massive scale, enabling organizations to execute tasks efficiently while maintaining the integrity and quality of their live events.
DESCRIPTION:
ZEN Master stands as a robust solution tailored for managing the intricacies of live event remote production workflows, which inherently pose distinct challenges requiring a versatile, dynamic and scalable operational environment.
ZEN Master streamlines these workflows by offering potent event scheduling capabilities, dynamic resource and route provisioning, comprehensive real-time performance analysis spanning from edge to edge, and seamless integration into an extensive partner network, all supporting large-scale live production and distribution endeavors.
The Live Event Manager within ZEN Master boasts unique functionalities crucial for efficient live event management and orchestration. Featuring an intuitive event scheduling dashboard, stage automation tools and bonded contribution capabilities, it empowers users to scale and align delivery requirements seamlessly. Schedule management dashboards offer enhanced visualization and construction of event schedules, optimizing live event operations. With Pre-Live, Live and Post-Live dashboards, operators can validate infrastructure readiness, manage numerous concurrent live event channels, and smoothly transition the audience from live to post-event content.
Furthermore, the Live Event Manager enables dynamic provisioning of events and resources, allowing for on-demand
scaling and routing adjustments. ZEN Master maximizes bandwidth utilization and resilience by implementing bonded live video routes across diverse geographical paths. Its unique feature of sequenced hitless failover ensures uninterrupted event delivery, adding an extra layer of protection.
ZEN Master continuously monitors all live event channels, providing operators with extensive real-time network performance telemetry. Custom alerts based on event stages can be configured, enhancing operational efficiency and ensuring prompt response to any anomalies.
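Stage-based custom alerting, as described above, amounts to evaluating live telemetry against per-stage thresholds, with looser limits during pre-live line checks than during the live window. A hypothetical sketch — the metric names, threshold values and stage labels are illustrative, not Zixi's API:

```python
# Hypothetical per-stage alert rules: stricter during the live window.
STAGE_THRESHOLDS = {
    "pre-live":  {"packet_loss_pct": 5.0},
    "live":      {"packet_loss_pct": 0.1, "latency_ms": 200},
    "post-live": {"packet_loss_pct": 2.0},
}

def check_alerts(stage, telemetry):
    """Return alert messages for any metric exceeding its stage threshold."""
    alerts = []
    for metric, limit in STAGE_THRESHOLDS.get(stage, {}).items():
        value = telemetry.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{stage}: {metric}={value} exceeds {limit}")
    return alerts
```

The same packet-loss reading that passes silently during a line check would page an operator once the event goes live.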
In essence, ZEN Master's Live Event Manager offers comprehensive orchestration and control over the diverse components of the intricate live event chain, ensuring smooth execution and delivery of highquality content.
TECHNOLOGICAL ADVANCEMENTS:
The Live Event Manager stands out as the premier orchestration and control software for remote production applications in the industry. Leveraging the ZEN Master cloud-based control plane, users can efficiently manage and monitor complex deployments at scale, configure live broadcast channels across various protocols, and seamlessly integrate with the Zixi Enabled Network of integrated devices and applications. With ZEN Master, users gain cost-effective control over edge, network and cloud devices, complete with full telemetry and visualization of network streams. The software offers essential tools such as workflow visualization, alerting, history tracking, automation, live event scheduling, reporting and root cause analysis, catering to the intricate needs of modern media supply chains.
Adder Technology ADDERLink INFINITY 1100 Series
The ADDERLink INFINITY 1100 (ALIF1100) series is specifically designed to meet the rigorous demands of the broadcast and media industries, where precise, real-time control and high-quality video are paramount. This advanced KVM-over-IP solution ensures media professionals can manage remote systems with minimal latency, supporting video resolutions up to 2560x1600 at 60 Hz and enabling seamless USB peripheral connectivity.
Key features include dual functionality as both a transmitter (TX) and receiver (RX), facilitating point-to-point extension or integration into complex KVM matrices. This flexibility is ideal for broadcast environments, where rapid switching between sources and destinations is critical for smooth operations. The ALIF1100 series supports unlimited extension distances over IP, making it perfect for large-scale, geographically dispersed production sites.
Security is a critical aspect of the ALIF1100 series, with AES256/RSA2048-bit encryption and two-factor authentication protecting sensitive media content and production workflows. The series can be managed through the ADDERLink INFINITY Manager (AIM), providing a centralized interface for configuration, monitoring and real-time control. AIM simplifies complex broadcast setups, ensuring efficient management of multiple devices and connections.
The ADDERLink INFINITY 1100 series deserves recognition for its exceptional contribution to the broadcast and media industries, excelling in key areas:
• End User Benefit and Impact: The ALIF1100 series provides ultra-low latency control, essential for live broadcast and post-production environments, ensuring that content creators can work in real time with high-resolution video and responsive control.
• New Opportunities Created: The ability to configure point-to-point or matrix setups allows broadcasters to efficiently manage multiple feeds and control systems, enhancing flexibility in production workflows and enabling dynamic content delivery.
• Relevance: In an industry where precision and timing are crucial, the ALIF1100 series delivers reliable, high-performance KVM-over-IP solutions, addressing the needs of modern media production environments.
• Implementation: The plug-and-play nature of the ALIF1100 series, along with intuitive centralized management via AIM, reduces setup complexity and minimizes downtime, critical for live broadcasting.
• Innovation: The series features cutting-edge IP-based KVM technology, offering dual TX/RX capabilities and scalable extension options, which are innovative in enhancing media production capabilities.
• Performance: It provides consistent, high-quality video with ultra-low latency, ensuring smooth and accurate control of remote systems, crucial for editing, broadcasting and other media applications.
• Application Quality: The ALIF1100 series supports diverse broadcast and media applications, from outside broadcasting (OB) trucks to studios, providing a reliable solution for real-time video and peripheral control.
• Value for Money: With its robust feature set and flexibility, the ALIF1100 series offers excellent value, meeting the budget constraints of both small and large media organizations while delivering top-tier performance.
• Evidence of Achievement: The widespread adoption of the ALIF1100 series in broadcast facilities and media companies globally highlights its proven reliability and performance in high-pressure environments.
These attributes collectively highlight the ADDERLink INFINITY 1100 series as a premium choice for the broadcast and media industry, making it a strong contender for this award.
Adobe
After Effects
As global demand for 3D animation surges, After Effects’ new 3D workspace makes it easier for motion designers to create visually stunning effects without the hassle of constantly switching between creative applications. After Effects now supports the ability to work with native 3D objects and embed 3D animations from imported 3D models.
3D animation is on the rise, with demand skyrocketing within entertainment and marketing. Box-office revenue for animated films grew 15% year-over-year, while a recent study demonstrated that 3D advertisements had a 30% higher click-through rate compared to 2D ads. These metrics underscore the surging need for high-quality animation, and Adobe is empowering motion designers to stay ahead of the curve.
After Effects’ new native-3D workspace now includes enhanced workflows that support the ability to embed 3D animations from imported 3D models. The tool set also includes a more realistic shadow-casting feature, as well as depth-mapping functionality to allow users to isolate effects in 3D space. With the addition of 33 new animation presets, designers can also spend less time on tedious tasks and more
time designing in both 2D and 3D. When it comes to performance, After Effects has also gotten a speed boost, with performance on Windows up to four times faster than previous versions. Lastly, both Premiere Pro and After Effects have a refreshed, modern design that offers more consistency between programs — making it easier to translate projects between apps. Designers can spend more time creating and less time relearning how to use tools across Adobe Creative Cloud.
In the dynamic world of motion design, After Effects users must stay ahead of the creative curve to differentiate themselves in an increasingly crowded media landscape. The introduction of a native 3D workspace in After Effects gives motion designers a unified 3D environment to freely explore their creativity without the hassle of constantly switching between multiple applications. In the latest version of After Effects, users are introduced to new creative possibilities, allowing them to explore a wider range of visual concepts and add depth to their projects. Few other design tools offer a unified, all-in-one solution for 2D and 3D creation, powered by the technological excellence of the After Effects and Substance 3D application set.
Adobe Frame.io
Frame.io V4 is redesigned to meet the varied needs of post-production video teams. With all-new capabilities spanning workflow management, file transfer, and media asset review, approval, sharing and presentation, V4 introduces a system that is fully customizable, powerful and flexible enough to facilitate any creative workflow. This marks a shift from Frame.io’s origins as a post-production review and approval tool to a collaborative platform that supports the entire video-creation process.
FRAME.IO V4 UPDATES INCLUDE:
• Workflow Management: V4 offers an all-new metadata framework, transforming how users interact with assets: Instead of relying solely on a rigid folder structure, users can now tag, organize and view their media based on how their teams work.
V4 also introduces “Collections,” a flexible, real-time and saved view of assets that allows users to dynamically select, filter, group and sort their media using metadata.
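A Collection, as described above, is essentially a saved query: a metadata filter plus grouping and sorting applied dynamically to the asset pool rather than a fixed folder tree. The sketch below uses invented field names, not Frame.io's actual metadata schema:

```python
def collection_view(assets, where, group_by, sort_by):
    """A saved, dynamic view of assets: filter on metadata equality
    predicates, then group and sort the matches.

    Illustrative only; field names like 'status' or 'scene' are
    hypothetical, not Frame.io's schema.
    """
    matched = [a for a in assets
               if all(a.get(k) == v for k, v in where.items())]
    groups = {}
    for a in sorted(matched, key=lambda a: a.get(sort_by)):
        groups.setdefault(a.get(group_by), []).append(a)
    return groups
```

Because the view is computed from metadata each time, newly tagged assets appear in the Collection automatically, with no manual re-filing.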
• Creative Review and Approval: V4 features a unified and redesigned player architecture to deliver a beautiful, immersive media viewing experience, with consistent controls across multiple file types. Users will experience more and better ways to share feedback, including an entirely overhauled commenting system with over a dozen new commenting features.
• Sharing and Presentation: V4 consolidates sharing and presentation workflows into a powerful new unified sharing function that delivers a more fluid experience for users to browse, preview and customize their shares — all from a single view.
Frame.io V4 also introduces a redesigned cloud file system and a high-speed transfer feature capable of handling bulky uploads with ease. Already the fastest, easiest and most secure way to automatically upload photos, videos, audio and metadata from a camera or cloud-connected device, Frame.io’s Camera-to-Cloud capabilities are improved in V4, with condensed workflows and support for over two dozen devices, including Fujifilm, RED and Atomos hardware, with more to come in 2024.
As the demand for video content continues to skyrocket, modern workflows have evolved, and so too have the needs of Frame.io’s community — companies, brands and individuals all need one unified platform that streamlines how teams and stakeholders come together to ideate, collaborate and create.
Creatives involved at all stages of video creation — including casting, location scouting during pre-production and dailies reviews during production — still experience disconnected and broken processes. As a result, creatives and stakeholders often suffer from slower production, artistic misalignment, increased costs and missed deadlines.
Frame.io’s new design and performance support every step of the video content creation lifecycle, from workflow management to creative review, approval, sharing and presentation.
Adobe Premiere Pro
Adobe Premiere Pro’s Enhance Speech is a breakthrough, AI-powered audio editing tool that instantly enhances the clarity and quality of voice recordings. It is used today by millions of video creatives, saving post-production pros valuable time spent on otherwise tedious dialogue cleanup.
Sound design is a key part of the video-editing process, transforming visuals into an immersive, emotional experience that has the power to connect audiences to new places, characters and experiences. For editors, there’s often too much time spent thinking about which audio tool to use and searching through panels and menus to find it. This “mouse mileage” — the distance the mouse travels when using a computer — has long been a pain point that stands between editors and their edit.
Adobe’s Enhance Speech addresses this challenge head on. Recognizing the utility of Enhance Speech for video editors, Adobe’s Pro Video and Audio product managers, designers and engineers worked closely with hundreds of professional editors to understand how best to integrate the feature into Premiere Pro and improve the workflow without impacting muscle memory. Over more than a year of development, Adobe teams worked through iterating designs, testing ideas and incorporating feedback based on real-world use cases.
Editors have three easy steps to follow that will allow Enhance Speech to run in the background, so they can continue their workflow without skipping a beat. These steps include:
• Selecting an audio clip containing dialogue on the timeline.
• Opening the Essential Sound panel and selecting Enhance to start the analysis.
• Monitoring the progress bar in the Essential Sound panel, which estimates how long the selected audio clip will take to enhance.
Mix Amount Control: Depending on the situation and specific clip, an editor may want more or less enhancement. With Mix Amount control, editors can adjust the amount of enhancement they need by dragging the slider to the left or right.
During rendering, editors can play the portions that have already been rendered and listen to them as part of the mix. After the process is complete, the audio will be clearer and the speech more distinct, improving the overall quality of dialogue recordings.
Enhance Speech sets a new standard in audio post production by automatically improving the clarity and quality of voice recordings. The tool was introduced to Premiere Pro as part of a set of new and intuitive audio workflows that make it faster and easier to edit and mix sound directly in Premiere Pro. Whether you’re an experienced pro or new to editing, Premiere Pro’s audio workflows put the right tools for the job at your fingertips so you can focus on storytelling — and get to a final quality sound mix with fewer clicks.
Since Adobe first began applying AI technology to creativity and productivity-based use cases, hundreds of millions of creators have put Adobe AI-powered capabilities to work creating and editing billions of pieces of content. Whether you work on low-budget projects or feature-length films, AI functionality is the next evolution of filmmaking.
AJA Video Systems
AJA’s newest addition to its award-winning line of color management and conversion solutions, OG-ColorBox, packages the powerful feature set of AJA’s popular ColorBox device into an openGear form factor, designed for reliable performance in critical live environments. Poised to reimagine color-management pipelines in broadcast and live-event production environments, the cutting-edge device is engineered to meet an ever-evolving range of color needs in an era where visual standards for live productions only continue to rise, and HDR and cinematic workflow adoption grows.
The OG-ColorBox delivers a robust set of tools to address diverse color needs, so professionals can be confident they can handle nearly any color request in the field, whether managing camera Log inputs or transforming SDR to HDR in real time. The device boasts unparalleled performance and seamlessly integrates with existing broadcast workflows while delivering ultra-low latency and high-density 4K/UltraHD HDR/WCG video processing capabilities.
One of the key highlights of OG-ColorBox is its ability to deliver real-time processing with less than half a video line of latency. It ensures that video sources remain in sync with downstream switchers, preventing delays and maintaining the integrity of live broadcasts. The device
also supports 4K/UHD HDR workflows up to 4:2:2 10-bit 60p, with 12G-SDI I/O and HDMI 2.0 output connectivity, enabling users to achieve the highest image quality without sacrificing performance.
OG-ColorBox isn’t complicated to learn or use. It features an intuitive web UI and supports industry-standard remote control panels from AJA partners CyanView and Skaarhoj. Using these integrations, professionals can control multiple ColorBox units for a more streamlined workflow and enhanced operational efficiency. OG-ColorBox also supports UltraHD to 1080p downconversion, enabling additional processing steps to be completed within a single operation, further simplifying complex workflows.
With OG-ColorBox, it’s easy to load LUTs and convert SDR/HDR with built-in color processing pipelines from AJA, NBCU and ACES, or purchase additional licenses to support other popular color-processing options. The built-in AJA Color Pipeline offers a 33pt 3D LUT processor with tetrahedral LUT interpolation, reconfigurable 1D LUTs to be RGB color correctors and reconfigurable 3x3 matrices to be ProcAmps. OG-ColorBox also offers a licensable Colorfront add-on, which unlocks features such as TV Mode with Sony SLog3, Live Mode with ARRI 35 LogC4 Wide Gamut 4 WVO support, and a new SDR to Dolby Vision Preview mode. Other license options include ORIONCONVERT, which uses floating point math to eliminate interpolation errors, and industry-standard BBC HLG LUTs.
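AJA does not document the internals of its LUT processor, but tetrahedral interpolation of a 33-point 3D LUT is a standard, published technique: the input color selects a cell of the lattice, and the ordering of the fractional coordinates selects one of six tetrahedra whose corner values are blended. A textbook sketch, not AJA's implementation:

```python
def tetra_interp(lut, rgb):
    """Tetrahedral interpolation of an N x N x N 3D LUT.

    lut[i][j][k] is an (r, g, b) tuple at lattice point (i, j, k);
    rgb is three floats in [0, 1]. Textbook method, not AJA's code.
    """
    n = len(lut)
    scaled = [min(max(c, 0.0), 1.0) * (n - 1) for c in rgb]
    idx = [min(int(s), n - 2) for s in scaled]  # lower cell corner
    fr, fg, fb = (s - i for s, i in zip(scaled, idx))
    i, j, k = idx

    def V(a, b, c):                      # cell corner value
        return lut[i + a][j + b][k + c]

    def blend(w0, v0, w1, v1, w2, v2, w3, v3):
        return tuple(w0*a + w1*b + w2*c + w3*d
                     for a, b, c, d in zip(v0, v1, v2, v3))

    # The ordering of the fractional parts picks one of six tetrahedra.
    if fr >= fg >= fb:
        return blend(1-fr, V(0,0,0), fr-fg, V(1,0,0), fg-fb, V(1,1,0), fb, V(1,1,1))
    if fr >= fb >= fg:
        return blend(1-fr, V(0,0,0), fr-fb, V(1,0,0), fb-fg, V(1,0,1), fg, V(1,1,1))
    if fb >= fr >= fg:
        return blend(1-fb, V(0,0,0), fb-fr, V(0,0,1), fr-fg, V(1,0,1), fg, V(1,1,1))
    if fg >= fr >= fb:
        return blend(1-fg, V(0,0,0), fg-fr, V(0,1,0), fr-fb, V(1,1,0), fb, V(1,1,1))
    if fg >= fb >= fr:
        return blend(1-fg, V(0,0,0), fg-fb, V(0,1,0), fb-fr, V(0,1,1), fr, V(1,1,1))
    return blend(1-fb, V(0,0,0), fb-fg, V(0,0,1), fg-fr, V(0,1,1), fr, V(1,1,1))
```

Tetrahedral interpolation is favored over trilinear in color pipelines because it blends only four lattice points per sample, which tracks neutral (gray) axis values more faithfully.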
The device’s comprehensive feature set, combined with its compact form factor, makes it an ideal solution for studios, OB trucks and flypacks where space and power efficiency are paramount. OG-ColorBox addresses the most pressing challenges in live production color management and sets a new industry standard, making it well-deserving of TV Technology Best of Show recognition.
Amazon Web Services (AWS)
AWS Elemental MediaLive Anywhere: Cloud-Controlled Live Video Encoding on Your Own Infrastructure
AWS Elemental MediaLive Anywhere enables broadcasters, content providers and live production companies to configure, control, monitor and manage on-premises hardware for video encoding and processing from the cloud. This new capability of AWS Elemental MediaLive debuts as professionals across these verticals look to take advantage of the managed capabilities cloud services like AWS Elemental MediaLive offer but have video sources anchored on premises.
Prior to AWS Elemental MediaLive Anywhere, the operational costs to maintain on-premises appliances and keep software updated were often a barrier to unlocking new features and video quality improvements. Furthermore, ongoing monitoring of encoding workloads required interaction with multiple locally deployed tools and customized monitoring infrastructure. Customers migrating to the cloud often have hardware deployed in data centers that can be repurposed as cloud-controlled encoders. Some professionals also require encoding for last-mile delivery to a locally managed CDN or packager, and round trips to the cloud for processing add unnecessary complexity. MediaLive Anywhere provides a fully managed, cloud-connected solution that solves these challenges — especially for customers with hybrid ground and cloud workflows.
MediaLive Anywhere can be deployed to support a range of applications, from high-quality live video contribution to the cloud for processing and distribution to enabling live video encoding from on-premises sources such as SDI video inputs with broadcast-grade video encoding quality. MediaLive Anywhere also supports live video distribution to both remote and locally managed network destinations.
The new feature also streamlines live video operations by simplifying software maintenance. MediaLive Anywhere pulls in the latest features and fixes whenever a channel starts, and operators can easily manage hybrid or on-premises video encoding with consistent APIs, channel profiles, logs and metrics, and configure, control and monitor channels from one central location. Operators can also manage hardware in multiple regions and consolidate management in hybrid configurations. Video processing is also more straightforward with MediaLive Anywhere, which reduces the complexity and overhead involved in encoding from on-premises video sources such as SDI and avoids unnecessary round trips to the cloud when sending video to local network destinations. MediaLive Anywhere makes cloud migration easy, allowing users to repurpose hardware deployed in data centers as cloud-controlled encoders. It also increases streaming efficiency with pay-as-you-go pricing that saves on costly, large adaptive bitrate outputs and capital expenditures for software.
MediaLive Anywhere is the fourth addition to the AWS hybrid cloud product lineup, which includes AWS Elemental Link HD, Link UHD and MediaConnect Gateway. These products share a cloud-based control plane for remote device management, AWS Management Console for configuration and management, Amazon CloudWatch for centralized monitoring, automated software upgrades, pay-as-you-go pricing, and a consistent set of APIs.
The technology is opening new doors for broadcasters, content providers and live production companies to maximize legacy hardware while also taking advantage of all the benefits the cloud has to offer. For these reasons, it is a strong candidate for a TV Technology Best of Show IBC2024 Award.
Amazon Web Services (AWS)
AWS: Cloud-Based Broadcast Operations, Monitoring and Control Solution
The AWS cloud-based broadcast operations, monitoring and control solution represents a groundbreaking innovation for the media and entertainment (M&E) industry. A first of its kind, this solution enables broadcasters to manage and oversee mission-critical, complex and geographically dispersed broadcast environments more intuitively than ever. With just a few clicks, broadcasters can create channels, monitor signal integrity, ensure playlist and schedule accuracy and verify that ads are running correctly — all within a resilient cloud-based environment designed to safeguard the signal path with multiple redundancies.
The AWS cloud-based broadcast operations, monitoring and control solution provides broadcasters with unprecedented agility, flexibility and cost efficiency while also helping to reduce their carbon footprints. Offering a repeatable framework, it simplifies the path to cloud-based operations, enabling the seamless automation of workflow components on AWS. This includes integrating solutions from Independent Software Vendors (ISVs) and AWS Partners such as TAG, TVU and M2A, as well as custom technology. It significantly reduces the complexity and engineering expertise traditionally required for such tasks.
Key workflow components of this solution include AWS Elemental MediaConnect for live video transport, AWS Elemental MediaLive for video encoding, AWS Elemental MediaPackage for video origination and packaging, Amazon Elastic Compute Cloud (Amazon EC2) for compute resources
and Amazon CloudWatch for monitoring infrastructure and performance metrics. These components are configured to feed into a Grafana dashboard, providing a straightforward visual representation of the workflow. The dashboard can connect to any solution offering an API, offering a comprehensive view of broadcast operations.
One of the most notable features of this cloud-based solution is its ability to translate the physical assurance of on-premises operations into the cloud. The integrated dashboard consolidates audiovisual monitoring, infrastructure monitoring, application monitoring and control into a single, unified interface. This “single pane of glass” experience empowers broadcasters with the same confidence they would have with on-premises systems, all while eliminating the need for specialized cloud development skills.
Despite the clear benefits of the cloud, adoption of cloud-based broadcast operations has been relatively slow, primarily because of the complex nature of traditional workflows and the specialized knowledge required to implement them. However, this innovative solution bridges that gap, providing a repeatable framework that automates and simplifies the integration of cloud-based components. It lowers the barriers to entry and accelerates adoption, enabling more broadcasters to take advantage of the significant efficiencies and environmental benefits offered by cloud technology.
This cloud-based broadcast operations, monitoring, and control solution stands out as a true game-changer for the M&E industry. Its unique combination of AWS technology, ISV and AWS Partner solutions, and user-friendly interface offers broadcasters an exciting new level of operational excellence and sustainability. For these reasons, it is a strong contender for a TV Technology Best of Show IBC2024 Award, as it sets a new standard for broadcast technology that is poised to carry us into the future.
Aputure STORM 1200x
The STORM 1200x uses an innovative new BLAIR light engine (Blue/Lime/Amber/Indigo/Red) to produce an excellent full white spectrum across a wide CCT range, with fine-grained adjustability of +/– Green as well as other colors. Compared to traditional bicolor technology, the result is better quality white light and, because all the emitters are contributing, a brighter white light as well.
The Aputure STORM 1200x is a tunable white point source COB (chip on board) light. With its ProLock locking Bowens mount, it can accept modifiers to become a fresnel, a soft light, a projector or a hard open face par. Add its portable size, IP65 weather rating and CRMX/DMX/Bluetooth connectivity, and the STORM 1200x is the workhorse do-everything light.
FULL SPECTRUM WHITE LIGHT WITH GREATER CRI & SSI
The new BLAIR Light Engine delivers a better-quality white light.
The STORM 1200x utilizes a new BLAIR five-color (Blue/Lime/Amber/Indigo/Red) light engine to better fill out the color spectrum while offering greater adjustability. The calibrated Indigo enhances fluorescing materials, resulting in a higher quality white light that better matches natural daylight and black body sources such as tungsten quartz (CRI 95+, SSI (P3200): 87, SSI (CIE D5600): 87).
2500-10,000K CCT RANGE
The STORM 1200x has a correlated color temperature (CCT) range from 2,500-10,000K, greater than that of traditional bicolor fixtures.
The BLAIR Light Engine’s color flexibility allows the STORM 1200x to offer a greater CCT range than traditional bicolor
lights without compromising on the accuracy or brightness of the white light.
EXTENDED RANGE +/– GREEN CONTROL
The STORM 1200x is the first light of its kind to offer 100 percent +/– Green control, the full ASC MITC range of adjustment. Using the BLAIR Light Engine, color adjustability has both a wider range and finer control than traditional systems. No other 1200W-range light offers this flexibility.
PROLOCK LOCKING BOWENS MOUNT
The ProLock mount holds Bowens modifiers rigidly in place and properly aligned.
The STORM lights feature the ProLock Locking Bowens Mount, a more secure clamping design for attaching modifiers. Bowens accessories are held rigidly in place, and precision optics like fresnels and Spotlight projectors are precisely aligned with the light engine.
IP65 WEATHER PROTECTION
The STORM 1200x provides IP65 weather protection.
Head-to-toe IP65 dust and weather protection means the STORM 1200x can work in extreme environments. Seals on every connector and around components allow not only the lamphead, but also the control box and cabling, to be used outside in the rain.
EXTREME COLOR-ACCURATE DIMMING
STORM 1200x dims accurately down to 0.1%.
An advanced mix of analog and PWM algorithms increases bit-depth control for low-end dimming. This enables the STORM 1200x to use PWM color-accurate dimming down to 0.1% without shifts or dropoff.
CRMX, DMX, AND BLUETOOTH CONNECTIVITY
The STORM 1200x offers a full range of connectivity, with CRMX, DMX, sACN, ArtNet and Bluetooth.
The STORM 1200x features all of the professional connectivity options of a modern lighting design: CRMX, DMX in and out, sACN and ArtNet over etherCON, and Sidus Mesh Bluetooth.
Bitmovin
Bitmovin AI Contextual Advertising & Prediction
Bitmovin’s AI Contextual Advertising & Prediction combines the power of the Bitmovin Player and Bitmovin’s Encoding to provide hyper-personalized adverts for audiences based on the content they are viewing, with ads shown at the time they’re most likely to convert. Bitmovin’s AI Contextual Advertising & Prediction solution addresses the industry’s need to maximize revenue opportunities with more relevant and better-placed adverts.
Bitmovin’s AI Contextual Advertising & Prediction works with artificial intelligence (AI)-driven content analysis. Bitmovin’s Encoding solutions leverage AI to analyze video content, audio content or both, and use an AI model to extract metadata from the content with the goal of enabling more content-focused advertising placement. Each extracted piece of content metadata is assigned a time stamp to map that information to the specific position in the content it belongs to. The metadata describes the content in sufficient detail to allow for targeted ad placement, but can also describe the scene more broadly. The Player then takes this metadata and sends it to a content-aware ad
server, which, in return, replies with contextual ads. Using a conversion heat map generated by the Player, the position of the ad is then optimized to increase the chance of a conversion. This heat map is based on AI analysis of historical data and the current user environment.
For example, if a viewer is watching content featuring motorcycles, the subsequent adverts could include helmets, motorcycles and other related products. The rationale is that by placing adverts that target audiences based on the content they are watching, it is more likely that the viewer will engage with the advert, which will result in more revenue generated by advertising. Additionally, Bitmovin AI Contextual Advertising & Prediction measures conversion rates at different points of the video, combined with information about the user’s viewing environment, to place the ad at the moment when the viewer is most likely to convert. Further, the solution can also help to prevent misplaced adverts. For example, if a violent scene appears in a TV show or film, it won’t show adverts for things that could be used as weapons; but if a scene shows a car accident, it could place adverts relating to insurance.
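The placement logic described above can be sketched as a simple selection over candidate ad positions: filter out positions whose scene metadata carries blocked tags, then pick the remaining position with the highest predicted conversion rate from the heatmap. The data shapes and function names below are hypothetical illustrations, not Bitmovin's actual API.

```python
def best_ad_break(candidates, heatmap, scene_tags,
                  blocked=frozenset({"violence"})):
    """Choose the ad position with the highest predicted conversion
    rate, skipping positions inside scenes carrying blocked tags.

    Hypothetical data shapes (not Bitmovin's API):
      candidates: ad break positions, in seconds
      heatmap:    {position: predicted conversion rate}
      scene_tags: [(start, end, {tags}), ...] from content analysis
    """
    def tags_at(t):
        # Collect every tag from scenes that cover timestamp t.
        return {tag for start, end, tags in scene_tags
                if start <= t < end for tag in tags}

    allowed = [t for t in candidates if not (tags_at(t) & blocked)]
    return max(allowed, key=lambda t: heatmap.get(t, 0.0), default=None)
```

With a heatmap favouring position 600 but a "violence" scene spanning 590-620 seconds, the picker falls back to the next-best allowed break rather than placing an ad against the blocked scene.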
What makes Bitmovin’s solution unique is the fact it is integrating its two existing products, which means it is simple to integrate for customers because there is no need for multiple third-party vendors. Bitmovin has complete control over the Encoding and Player under the same brand.
Bolin Technology
R9-418N
Bolin introduces the NDI-certified, NDI 6-ready R9 indoor PTZ camera. The R9 family has three imaging options: the R9-230NX, with a Full HD Sony block and a STARVIS II sensor for extraordinary low-light performance, Super Image Stabilization+ and a 30X zoom; the R9-418N, with a large 1-inch 4K30 Sony block, excellent low-light performance, Super Image Stabilization and an 18X zoom; and the R9-420N, with a Bolin image module and Sony sensor delivering crystal-clear 4K60 video with a 20X zoom. The broadcast quality of these imaging options enables seamless integration into existing video workflows.
The R9 indoor PTZ camera is a stellar performer. Leveraging the power of AMD FPGAs, it simultaneously delivers up to dual 12G-SDI and Optical Fiber, both with Genlock, HDMI up to 2.0, and IP video streams (RTSP, RTMP, RTMPS, and SRT) with independent resolutions. Along with those IP streams, the R9 delivers NDI 6-ready High Bandwidth and HX3 video. Since the NDI video stream is NDI-certified, broadcasters can be confident in the R9’s compatibility and performance in any NDI environment.
The R9 has outstanding pan, tilt and zoom performance. The 340-degree fully proportional pan moves at a variable rate from 0.05 to 100 degrees per second, and the 120-degree tilt range is fully proportional from 0.05 to 75 degrees per second. The 255 presets execute at up to 100 degrees per second at five different speeds, with a recall accuracy of 0.01 degrees. PTZ control is smooth and precise, thanks to Bolin-engineered motors with an industry-leading 255 VISCA steps.
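As a rough illustration of how a controller might translate a requested pan rate into one of the 255 VISCA speed steps quoted above, the sketch below assumes a simple linear mapping across the 0.05-100 degrees-per-second range; Bolin's firmware may well use a different curve, so treat this as illustrative only.

```python
def visca_pan_speed(deg_per_sec: float,
                    min_speed: float = 0.05,
                    max_speed: float = 100.0,
                    steps: int = 255) -> int:
    """Map a desired pan rate (deg/s) to a VISCA speed step (1..255).

    Assumes a linear speed curve for illustration; the camera's actual
    mapping is not published here.
    """
    # Clamp the request into the supported range.
    deg_per_sec = max(min_speed, min(max_speed, deg_per_sec))
    # Linear interpolation onto the 255 discrete steps.
    frac = (deg_per_sec - min_speed) / (max_speed - min_speed)
    return 1 + round(frac * (steps - 1))
```

The endpoints land on steps 1 and 255, and out-of-range requests clamp rather than fail, which is the usual behaviour for PTZ speed parameters.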
To meet broadcasters’ evolving needs, the R9 fully implements the FreeD protocol, with accurate data reporting enabling AR/VR production capability. It is noteworthy that racetracks in Europe and professional sports organizations in the U.S. have chosen Bolin cameras because of the image quality and accuracy of the FreeD implementation. The R9 also supports Comms and two input and two output channels of professional, balanced XLR audio available on all the video outputs.
The R9 has an in-built grab handle for convenience when repositioning, mounting on a tripod, or mounting on a wall. It also has a modern web UI for remote access to the entire feature set and Quick Access to the most commonly used settings.
The R9’s image quality, variety of high-quality simultaneous video outputs with accurate FreeD data reporting, professional, balanced audio and other advanced features put it in a class of its own. It is a powerful indoor PTZ camera designed to meet or exceed the demands of professional broadcasters.
Bridge Technologies VB440 Canvas Display
Bridge Technologies’ new Canvas display was developed for its groundbreaking production probe, the VB440.
Designed to grant users unprecedented levels of workflow flexibility, customizability and efficiency, the Canvas display feature reinforces the VB440’s raison d’être; to allow production specialists of all types — be they camera painters, audio technicians or network engineers — at-a-glance, intuitive access to the information they need to complete their role, on a remote, distributed and live basis, from any HTML5 web browser, anywhere in the world.
Finding ways to make such extensive tooling quickly accessible to its users has been the key challenge for the VB440. Functions were previously grouped behind tabs — and whilst this made for a logical and accessible approach, it did not yet realise the probe’s full potential. Canvas changes this.
Canvas represents a single-screen option in which users can fully customise what is displayed to them, adding as many scopes, meters and displays to the page as they desire, resizing, grouping and arranging in a way that makes most sense for the user’s specific workflow. Users can create multiple screens with varying arrangements which are displayed as changeable tabs, each customised to a particular task or work approach. For instance, camera painters might choose to have one Canvas screen in which they display two side-by-side real-time displays for a single camera — one HDR- and one SDR-graded, accompanied by a full range of waveforms, vectorscopes and other scopes — whilst on a second tab they may have a Canvas which displays all the production cameras, each paired with just one display scope and grouped accordingly. Audio and network engineers have similar levels of flexibility, picking out and combining only the tools they need for the task, in a configuration that makes best sense to them. This provides unrivalled flexibility — allowing the production professional to work in the way they choose, rather than being forced to conform to a workflow dictated by the product manufacturer.
Furthermore, each user can save their Canvas assemblies, importing and exporting the configuration and settings so that their workspace is ready to go regardless of which VB440 they are logging on to. This saves both setup and operational time by ensuring consistency.
Canvas display has been designed to accommodate diverse access options — be that a large screen in a fixed studio or a single iPad in the hands of an engineer on the move. The drag, drop and scale options can all be used as easily with a touch screen as with a mouse, and the infinitely combinable Canvas workspaces can be filled with as few or as many elements as makes sense based on the size of the display.
The new Canvas display delivers a seemingly paradoxical benefit: it revolutionises workflows in terms of what production professionals can do, when and how, whilst at the same time ensuring those workflows remain entirely familiar; giving users the potential to shape the technology to their own personal preferences, habits and needs.
BZBGEAR
BG-Commander-Ultra | 4K UHD HDMI Production Studio
Is there anything as formidable as the duo Deadpool and Wolverine?
ENTER BG-COMMANDER-ULTRA.
The Commander Ultra is a marvelous 4K Ultra HD HDMI production studio that pairs a 4K HDMI switcher interface with a PTZ joystick controller. Designed to meet the needs of high-performance video production environments, it offers a comprehensive range of features and robust connectivity options. This versatile unit excels in handling complex video and audio tasks, including camera control, video mixing, and advanced signal management.
FEATURES-IN-DEPTH
Commander Ultra supports up to 4K2K at 60 Hz resolution with both HDR and SDR capabilities, ensuring superior video quality and color depth. It is fully compatible with HDMI 2.0 standards, offering high-def video and audio output with a bandwidth of 18 Gbps (600 MHz single-link). The device also supports HDCP 1.4/2.2 for secure content protection and playback.
Commander Ultra provides comprehensive input/output options, including five HDMI inputs, four HDMI outputs, two XLR audio inputs, one USB-C output port, a firmware
upgrade port and multiple control ports (RS-232, RS-422, Ethernet) for flexible connectivity and integration with various equipment. The unit is also equipped with a Micro SDXC card slot for local recording of the PGM output stream.
For audio, it supports LPCM audio and provides analog stereo output via XLR connectors, ensuring high-quality sound management. An embedded video mixer features seamless switching with transitions such as wipe, dissolve, and DVE.
For camera control, Commander Ultra allows remote control of multiple camera settings (pan, tilt, zoom, focus, iris, white balance) using VISCA, Pelco-D, and Pelco-P protocols over IP and serial connections. The frontboard control allows users to navigate between PTZ Camera Controller and 4-Channel Switcher modes using a joystick and dedicated buttons, adjusting PTZ speeds, focus and other camera settings directly from the unit. It also manages up to four full-view outputs and one multiview output for comprehensive monitoring and production. The intuitive On-Screen Display (OSD) menu provides access to detailed camera settings and device information, supporting configuration for various control protocols and network settings. The device’s firmware can be updated via USB for the latest features and performance enhancements.
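For a concrete sense of one of the control protocols listed above, the sketch below builds a Pelco-D frame as commonly documented: a 0xFF sync byte, the camera address, two command bytes, two data bytes, and a modulo-256 checksum over the five bytes after the sync. The specific command values in the usage line are illustrative, not taken from BZBGEAR documentation.

```python
def pelco_d(address: int, cmd1: int, cmd2: int,
            data1: int, data2: int) -> bytes:
    """Build a 7-byte Pelco-D frame: sync (0xFF), address, two command
    bytes, two data bytes, and a checksum = sum of bytes 2-6 mod 256."""
    body = [address, cmd1, cmd2, data1, data2]
    checksum = sum(body) % 256
    return bytes([0xFF] + body + [checksum])

# Pan right at mid speed for camera address 1
# (0x02 in command byte 2 is pan right per the public Pelco-D spec):
frame = pelco_d(0x01, 0x00, 0x02, 0x20, 0x00)
```

A controller like the Commander Ultra would write such frames to the serial port or wrap them in UDP/TCP for control over IP; VISCA frames follow the same pattern with a different byte layout and terminator.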
Canon Inc.
EOS C80 6K Camera
This year at IBC, Canon revealed its brand-new EOS C80 cinema camera, the newest expansion in Canon’s Cinema EOS line. The EOS C80 camera features a native RF mount and a full-frame, back-illuminated stacked CMOS sensor and is designed for filmmakers who require a full-featured camera in a compact body. This camera proves that Canon is listening to its customers in the filmmaking space looking for ergonomic, efficient tools that provide amazing specs in a compact package, perfect for doc filmmaking, narrative features, commercials and much more.
The Canon EOS C80 camera features a 6K full-frame, back-illuminated CMOS sensor with triple-base ISO, allowing the camera to deliver stunning imagery in a wide range of lighting conditions. The base ISOs of 800, 3,200 and 12,800 maximize the full dynamic range of the camera.
The EOS C80 camera also features Canon’s latest Dual Pixel CMOS Autofocus, Dual Pixel AF II. The back-illuminated stacked sensor’s positioning offers better light-capturing efficiency, which widens the area of the sensor that can be used for autofocusing. The sensor also empowers fast readout speed, as well as amazing 4K image quality from 6K oversampling.
In addition to moving to a full-frame sensor, the EOS C80 camera has also stepped up from its predecessor by adding 12G-SDI output, which enables uncompressed transfer of video signal with a secure cable connection. The camera’s design includes a variety of other interfaces including HDMI, mini-XLR audio inputs, time code, built-in Wi-Fi connectivity and Ethernet. The internet connectivity enables the camera to be controlled remotely via Canon’s XC protocol using Canon’s Remote Camera Control App or the Multi Camera Control App, providing additional flexibility for productions of all shapes and sizes.
The compact and lightweight EOS C80 camera is just as comfortable on a drone or a gimbal as it is on a tripod or in any configuration where compact size and light weight are important. The camera is ergonomically designed with a
new, lightweight handle assembly. The Multi-Function Shoe, located just above the LCD screen and the joystick controller, provides easier control and menu navigation.
The EOS C80 camera can record up to 6K 30P in Cinema RAW Light. Other recording options include Canon’s standard XF-AVC codec, which can record in 10-bit 4:2:2 with oversampling from the 6K sensor, creating rich detail and smooth imagery without the need for cropping the image from the sensor. Autofocus is enabled when recording in slow or fast motion at up to 4K 120P.
Additionally, the EOS C80 camera has two more recording codecs, XF-AVC S and XF-HEVC S. These formats feature an easy-to-manage naming system and folder structure while recording in the familiar MP4 format and preserving metadata.
Cobalt Digital
INDIGO OG-2110-BIDI4-GATEWAY
Cobalt is introducing the INDIGO OG-2110-BIDI4-GATEWAY to provide a cost-effective solution to the conversion between SMPTE ST 2110 streams and SDI, in openGear form factor. The card is a bidirectional gateway between SDI and ST 2110 for applications that do not need audio/video processing. It features four SDI inputs, four SDI outputs and two cages supporting 10 Gbps and 25 Gbps SFPs. It supports simultaneous transmission and reception of up to four streams up to 1080p60 or one stream up to 4K. Its key features are:
• Support for ST 2022-7 seamless switching, up to Class-C WAN operation.
• Control via the openGear DashBoard application and/or through in-band/out-of-band NMOS IS-04/ IS-05.
EACH STREAM SUPPORTS:
• One ST 2110-20 (baseband) or ST 2110-22 (JPEG-XS, optional) video essence.
• Up to two ST 2110-30 audio essences, each capable of up to 16 channels. The unit features a full audio router, allowing the user to mix and match channels between ST 2110 and SDI.
• One ST 2110-40 ancillary data essence.
• Support for reception of content that is not PTP-locked, which makes the INDIGO OG-2110-BIDI4-GATEWAY IPMX-compatible.
• “Make-before-break” stream switching: When switching from one stream to another, the INDIGO Gateway will join the new stream before releasing the old one, guaranteeing a seamless transition. This also includes audio ramp down/ramp up, to avoid pops and clicks.
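The make-before-break behaviour described in the last bullet can be sketched in Python: join the new stream, wait until it is producing frames, then release the old one. The `Stream` class below is a hypothetical stand-in, not Cobalt's implementation, and the audio ramp is indicated only as a comment.

```python
import time

class Stream:
    """Minimal stand-in for a multicast stream receiver (hypothetical)."""
    def __init__(self, name: str):
        self.name = name
        self.joined = False
    def join(self):
        self.joined = True   # a real gateway would send an IGMP join here
    def leave(self):
        self.joined = False  # ...and an IGMP leave here
    def has_frames(self) -> bool:
        return self.joined   # real code would check the decode buffer

def make_before_break(old: Stream, new: Stream) -> Stream:
    """Switch from `old` to `new` without a gap: join the new stream
    first, confirm it is decoding, and only then leave the old one."""
    new.join()
    while not new.has_frames():  # wait for the new stream to lock
        time.sleep(0.001)
    # Audio ramp-down on old / ramp-up on new would happen here,
    # avoiding the pops and clicks of a hard cut.
    old.leave()
    return new
```

The key ordering property is that both streams are briefly joined at once, trading a moment of doubled bandwidth for a guaranteed seamless transition.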
The openGear form factor allows the INDIGO OG-2110-BIDI4-GATEWAY to integrate easily into existing deployments. For applications requiring high channel counts, up to five cards can be installed in an openGear frame for a total of 20 input and 20 output channels. This creates a cost-effective gateway solution for applications that do not require signal manipulation. The product is designed and manufactured in the U.S. and is supported by a five-year factory warranty.
Comprimato Twenty-One Encoder
The Twenty-One Encoder by Comprimato is a versatile ST 2110 gateway encoder, designed specifically for live broadcast production. It enables contribution encoding and decoding of live formats including H.264, HEVC and JPEG XS. Additionally, it effectively bridges traditional broadcast with ProAV environments by connecting ST 2110 networks with other live IP networks such as NDI, TS/SRT or RTMP. This integration is facilitated by NMOS control, which allows for precise management and integration of networked media devices, streamlining operations across diverse broadcast settings.
Addressing the rapid pace of technological advancements, the encoder is both software-defined and COTS-based, which facilitates easy updates to accommodate new formats and broadcasting standards as they evolve. This adaptability not only extends its operational
life but also enhances its value, positioning it as a strategic asset for broadcast facilities looking to minimize hardware turnover while maximizing functionality.
The encoder’s dual capability to perform both encoding and decoding eliminates the need for multiple units, thereby saving space and reducing operational costs. Its design optimizes workflow efficiency, enabling the management of more channels and formats with less equipment.
Despite its compact 1RU form, the Twenty-One Encoder handles up to 16 channels, offering a high channel capacity that is ideal for high-demand scenarios and tight spaces alike. It includes ST 2022-7 redundancy, exemplifying its robust and efficient design suited for the rigorous demands of live broadcast production.
Overall, the Twenty-One Encoder meets the critical needs of modern broadcasting and ProAV, providing a forward-looking solution that ensures reliability and scalability in professional broadcast settings.
CuttingRoom
CuttingRoom
CuttingRoom’s cloud video editor revolutionizes live video editing by enabling real-time manipulation of live feeds through its pioneering Growing Live Timeline feature. It is designed for seamless content creation, editing and publishing, eliminating delays in editing live footage and enabling instant content delivery. CuttingRoom’s approach significantly enhances the efficiency and flexibility of video production, making it a game-changer in the industry.
CuttingRoom addresses the critical need for speed and efficiency in live broadcast content production. Customers choose CuttingRoom to increase their output and reach a wider audience with engaging video content. With its super responsive user interface and fast ingest, upload, rendering and publishing of videos, it is the preferred cloud video platform for anyone looking for a scalable video editing platform that answers today’s video editing, working from anywhere, and collaboration requirements.
The video platform has smart features for capturing content directly from live streams, the CuttingRoom Reporter iPhone app or connected cloud services. Users can collaborate in real time when editing, adjusting aspects and frames, creating or importing multi-layer graphics, and creating video clips in any required formats. Videos can be shared directly to any connected social media channels, creating an extremely efficient workflow for distributing content to a broad audience.
CuttingRoom was at IBC 2024 to launch native support for the new CNAP interoperability standard, the first video editor in the world to do so.
Out-of-the-box integrations make it quick and easy to use with footage from external cloud servers and for publishing content directly to favorite media platforms. New video assets can be saved in the CuttingRoom cloud platform or any connected MAM system, such as Mimir, Iconik or VIA, to AWS’ S3 cloud storage buckets, Wasabi or Backblaze, to Dropbox or other integrated platforms.
CuttingRoom is built for access from anywhere and requires only an internet connection. There are no limitations on the number of users, projects, incoming streams or simultaneous outputs. No installation, updates or maintenance is needed, as it is a true cloud-native Software-as-a-Service (SaaS) offering.
A typical use of CuttingRoom for a broadcaster is to cut directly from live video streams during premium events, like a world championship or e-games event. Users can efficiently and quickly create clips from connected live video sources, add branding and multilayer graphics, create the different video aspects required, and publish directly to MAMs, VOD platforms, websites and any social media platforms. Time to market is crucial, especially for ad-driven content.
The CuttingRoom team demonstrated the platform at IBC and showed a range of features, such as:
• Real-time collaborative editing from anywhere
• Editing with multiple video tracks
• Animated and keyframed pan and scan with portrait mode outputs
• Editing straight from live sources coming from MAM partners like Mimir
• A powerful graphics engine enabling full motion graphics support in the editor
CuttingRoom stands out from the crowd with its unique combination of getting started in a minute, finding the footage needed, editing with professional features, and publishing to any platform from the same easy-to-use interface.
Dina
Dina
Dina is a newsroom and rundown system designed to meet the evolving needs of modern newsrooms. It provides journalists and storytellers with a platform that supports digital-first and story-centric workflows and addresses the practical challenges of modern storytelling.
From planning and resource management to content creation and publishing, Dina offers tools that enable journalists and newsroom staff to focus on producing high-quality content.
Dina bridges the gap between field reporting and newsroom operations, offering tools for story creation, editorial planning, and multiplatform publishing directly from a web browser or the Dina Mobile application. This ensures a live link to the newsroom for real-time tracking of schedules, communications, and story publication control.
Dina introduces significant innovations for news production and streamlines collaboration and content management with features like order management, multistudio support, enhanced chat functions, and AI-supported tools. Integrated with key technologies like LiveU and Mimir, it ensures immediate access to media, improving efficiency and storytelling agility.
By offering a system that supports digital-first strategies and integrates seamlessly with existing technologies, Dina helps newsrooms optimise their workflows and improve content quality. The system’s focus on enhancing communication and collaboration among teams in the field and the newsroom reflects an understanding of the dynamic nature of news
production today.
With Dina Mobile, journalists and storytellers can move their work to the field and work efficiently, directly connected to the newsroom. This ensures seamless integration with the newsroom and flexibility for journalists on the move.
Dina includes features like order management, which helps allocate resources efficiently, and multistudio support, allowing for managing complex production schedules. Enhanced communication tools within Dina support real-time team interaction, and AI functionalities aid in content management and creation.
A comprehensive planning feature, Dina Spaces, is designed to create customised planning workflows and dashboards, enabling teams to oversee projects. This feature works alongside an advanced search tool that facilitates the management of stories, rundowns, assets, and more, making it an essential tool for newsrooms focused on operational efficiency.
Dina’s delivery as a Software-as-a-Service (SaaS) model means that it receives regular updates without requiring downtime, allowing newsrooms to continuously access new features and improvements. The system’s open API and compatibility with various broadcast and digital platforms make it a versatile solution for newsrooms facing the challenges of a rapidly evolving media environment.
Dina’s strength lies in its foundation on modern web technologies. No legacy code hinders its adaptation to new technologies, such as AI. This adaptability is crucial for maintaining relevance and competitiveness in a fast-paced media landscape.
By focusing on innovation, adaptability, and the integration of digital-first strategies, Dina aims to transform newsroom operations, making it a key player in the future of news production. Its commitment to improving workflow efficiency and content management positions it as a significant contributor to the advancement of newsroom technology. The system’s integration with key technologies and platforms further enhances its value, providing users with a cohesive and efficient workflow.
ENCO
enCaption Sierra
enCaption Sierra represents the next generation of ENCO’s groundbreaking enCaption solution for fast and accurate automated conversion and delivery of captions in broadcast and AV environments. enCaption Sierra continues the story of innovation and success in applying AI and machine learning to live captioning, attaining new benchmarks in speed and accuracy, as each prior generation has. For the first time, enCaption Sierra brings ENCO’s market-leading automated captioning technology together with an SDI captioning encoder to create an all-in-one solution for on-prem environments, with containerized deployment options for cloud.
ENCO continues to push the boundaries of live automated captioning in diverse applications and environments. enCaption Sierra’s ability to produce and deliver faster and more accurate captions correlates directly with new parallel processing capabilities powered through a GPU and large language models. The
GPU architecture delivers unprecedented computational power to ENCO’s AI-based speech-to-text engine.
The same improvements are present in Sierra’s integrated enTranslate module, which uses machine translation and grammatical structure analysis to deliver captions for up to 37 languages simultaneously. ENCO also adds new languages to enTranslate’s engine on a consistent basis, with three new languages added in time for Sierra’s public debut at the 2024 NAB Show, including a bilingual language model for Spanish-English content.
ENCO has also sharpened the listening performance and responsiveness of the system with the Sierra release, improving performance in challenging audio environments. enCaption Sierra also leverages GPU processing for improved speaker-change detection and the ability to recognize music, laughter, applause and crowd noise. Sierra is an excellent listener, reliably handling fast speech, strong accents and problematic audio quality to consistently deliver clean captions.
enCaption Sierra can be delivered on Windows or Linux operating systems and is managed and monitored from a web browser. enCaption Sierra’s modern GUI features a simple calendar scheduler and various configuration settings, including custom dictionaries, word models, filtering and bilingual language options.
ENCO continues to lead the charge for captioning innovation as it pertains to helping broadcasters and MVPDs produce clean closed captions for their broadcast content with greater efficiency. The same philosophy applies to open-captioning needs in AV environments to help audiences improve comprehension. Sierra also addresses the need for flexible deployment options, offering complete on-prem and cloud options that align with each broadcaster’s operational model with significant performance and cost-reducing benefits.
Evergent, Mux
Evergent-Mux Video QoE and Advanced Churn Management
PROBLEM
Consumer expectations are rising. High-quality streaming experiences are now table stakes and essential for subscriber retention amid a crowded subscription landscape. Any disruption, such as poor video quality or buffering during critical moments, can lead to frustration and churn, a significant concern with annual churn rates exceeding 50% in the U.S., per Parks Associates. The challenge is further compounded by the complexity of managing and analyzing video-quality data in real time. Streaming providers need a way to proactively monitor and address these issues, enhance user satisfaction and reduce churn to stay competitive in an increasingly saturated market.
SOLUTION
The Evergent and Mux QoE collaboration directly addresses these challenges by integrating advanced Video Quality of Experience (QoE) data from Mux with Evergent’s subscriber-management capabilities. Launched on March 29, this cloud-based joint solution empowers OTT platforms, pay-TV operators and direct-to-consumer sports streaming services to enhance user engagement through deep, real-time insights into every element of the streaming experience.
• Real-Time QoE Monitoring: Mux QoE tracks video quality metrics like startup time, rebuffering and bit rate. When combined with Evergent’s subscriber data, it provides a new, 360-degree view of user experience and subscriber behavior.
• AI-Powered Churn Prediction: With real-time data feeding directly into Evergent’s churn-management engine, the solution predicts subscribers at risk of churning based on QoE and viewing habits, enabling targeted retention efforts.
• Proactive Issue Resolution: Platforms can detect and respond to issues in real time. For example, if buffering occurs during a live
event, providers can offer instant compensation, such as discounts or loyalty incentives, to maintain customer satisfaction.
• Seamless Integration: The solution is designed for easy implementation with existing Evergent Monetization Platform customers, minimizing disruption while expanding capabilities.
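To illustrate how QoE metrics could feed a churn-risk signal of the kind described above, the toy function below combines rebuffer ratio, startup time and engagement into a 0-1 score with hand-picked weights. This is purely illustrative: the actual Evergent/Mux engine uses trained AI models on real subscriber data, not a fixed formula.

```python
def churn_risk(rebuffer_ratio: float, startup_s: float,
               sessions_per_week: float) -> float:
    """Toy churn-risk score from QoE metrics (0.0 = safe, 1.0 = high risk).

    Weights and thresholds are invented for illustration only.
    """
    risk = 0.0
    # Rebuffering is the strongest churn driver: saturate at 5% of watch time.
    risk += min(rebuffer_ratio / 0.05, 1.0) * 0.5
    # Slow startup frustrates viewers: saturate at 10 seconds.
    risk += min(startup_s / 10.0, 1.0) * 0.2
    # Disengagement: fewer than ~3 sessions/week raises risk.
    risk += max(0.0, 1.0 - sessions_per_week / 3.0) * 0.3
    return round(min(risk, 1.0), 3)
```

A heavy viewer with clean playback scores near zero, while a subscriber seeing 10% rebuffering, 12-second startups and barely any sessions scores near the top, which is where targeted retention offers (discounts, loyalty incentives) would be triggered.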
OUTCOME
The Mux QoE integration offers transformative results for streaming providers:
• Enhanced Customer Satisfaction: By proactively addressing QoE issues and offering personalized responses, platforms can significantly increase user satisfaction, leading to higher retention rates.
• Reduced Churn: Real-time insights and AI-driven churn prediction allow providers to implement targeted strategies to retain subscribers, reduce churn rates and maintain a stable customer base.
• Increased Revenue and Engagement: By ensuring high-quality streaming experiences, platforms can improve subscriber loyalty, which drives long-term profitability. Media companies using Mux QoE can align content offerings with user preferences, optimizing marketing strategies and enhancing overall engagement.
• Better Decision-Making: Detailed reporting on subscriber behavior and QoE data enables platforms to make informed decisions about content and service improvements, ensuring they meet evolving consumer expectations.
Evergent’s Mux QoE offering is a pioneering, first-of-its-kind solution that provides new levels of visibility and holistic insight into every element of streaming subscriber experiences. By seamlessly integrating QoE insights with subscriber management and billing data, it empowers media companies to deliver proactive, intuitive, personalized viewing experiences, ultimately boosting retention and profitability.
Evertz
DreamCatcher BRAVO Studio
BRAVO Studio is the complete cloud-based production control suite that redefines live production today.
Using MAGNUM-OS for orchestration, BRAVO Studio enables users to schedule and automate event preparation, including routing of incoming remote feeds, allocating resources, and configuring the operator stations. This allows customers to seamlessly transition between productions with minimal effort.
BRAVO Studio is a collaborative, web-based live production platform that is redefining the creative experience for content creators and broadcasters. Providing virtual access to all the services found in the traditional control room, BRAVO Studio is a simple, reliable and cost-effective platform that accesses live video and audio from remote locations over dedicated networks, 5G networks or public Internet. The platform ingests multiple live camera feeds; provides live video and audio mixing with transitions; multiple video overlays for picture-on-picture or multibox looks; slow-motion replays; clip playout; highlight clipping and packaging; multiple dynamic graphics layers; and multi-image display of sources and outputs on the user interface. Technical directors and operators collaboratively produce live events with BRAVO Studio from anywhere in the world using a web browser.
In addition, Evertz has also brought all the power of Studer
Vista to its DreamCatcher BRAVO Studio virtual production control suite with the introduction of Vista BRAVO. This integrates a full mixing console into BRAVO Studio and gives users all the flexibility they need to enhance live productions, whether working on-premises or through the cloud.
BRAVO Studio’s new “Highlight Factory” is an opportunity for creators to amplify their storytelling. This powerful new metadata copilot automatically curates clips and stories created using Large Language Model (LLM) AI technology, publishing them to Ease Live, where users can pick their own highlights. The “Highlight Factory” leverages access to all the angles available in the production to create unique highlights that can be added to the production or pushed to social media or other platforms. This copilot for BRAVO Studio helps automate and simplify production workflows and gives small creative teams of all skill levels high quality and consistency throughout the production. BRAVO Studio is proving to be a game-changer, particularly for live sports, local news, esports, entertainment, corporate and government events.
RFK-ITXE-HW-DUO
Flexible Media Processing Platform
The RFK-ITXE-HW-DUO is a powerful, flexible platform that allows broadcasters to easily convert between different formats and resolutions. This next-generation media processing platform gives broadcasters and streaming applications a single point of entry for all compressed and uncompressed signals, regardless of format.
An ideal solution for hybrid broadcast facilities and broadcasters looking to transition to IP, the RFK-ITXE-HW-DUO joins Evertz’s already diverse ecosystem as the successor to the popular 570ITXE platform, which has been widely deployed by leading media companies worldwide.
The RFK-ITXE-HW-DUO, which caters to all signal-acquisition applications, offers a range of features and capabilities that make it a versatile and powerful solution for any media workflow. The platform supports multiple transcode paths, allowing users to convert between different formats and resolutions with high quality and efficiency. It also supports UHD HEVC decode and encode, enabling users to deliver stunning 4K content with low bandwidth and latency.
Additionally, the RFK-ITXE-HW-DUO supports JPEG XS encode for low-latency video compression wrapped in reliable transport offerings, including Secure Reliable Transport (SRT) or Reliable Internet Stream Transport (RIST), which makes the platform ideal for remote production and cloud-based workflows. The RFK-ITXE-HW-DUO also supports SMPTE ST 2110 (up to UHD), the industry standard for IP-based media transport, as well as SDR and HDR support, ensuring compatibility with expanding color-space normalization requirements.
Designed to meet the evolving needs of a media industry facing increasing demand for high-quality, multiformat and multiplatform content delivery, the RFK-ITXE-HW-DUO offers users a flexible and scalable platform that can be easily integrated into existing or new workflows. Each RFK-ITXE-HW-DUO can support up to four transcodes, where each transcode can accept any of the following inputs: compressed inputs, including JPEG XS, JPEG-2000, HEVC, H.264 and MPEG-2; SDI; and SMPTE ST 2110 uncompressed inputs. Each transcode path includes a full up/down/cross-conversion stage with an in-line frame sync for video, audio, ancillary data, timing and color-space-based normalization. The output of each transcode path provides a multistage output path handing off an SDI legacy output, a parallel uncompressed ST 2110 output, a high-bit-rate mezzanine encode, a low-bit-rate IPTV encode and a parallel JPEG-2000 (or JPEG XS) high-bit-rate, low-latency output. The agile RFK-ITXE-HW-DUO also allows for further unique scalability, as processing blocks can be allocated dynamically.
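The four-path, multistage architecture described here can be sketched as a simple data model (the names and structure below are illustrative only, not an Evertz product API):

```python
from dataclasses import dataclass

# Illustrative model of one transcode path: any compressed, SDI or
# ST 2110 input fans out to the fixed set of parallel outputs described.
COMPRESSED_INPUTS = {"JPEG XS", "JPEG-2000", "HEVC", "H.264", "MPEG-2"}
VALID_INPUTS = COMPRESSED_INPUTS | {"SDI", "ST 2110"}
OUTPUT_STAGES = [
    "SDI legacy",
    "ST 2110 uncompressed",
    "mezzanine encode (high bit rate)",
    "IPTV encode (low bit rate)",
    "JPEG-2000 / JPEG XS (high bit rate, low latency)",
]

@dataclass
class TranscodePath:
    input_format: str
    frame_synced: bool = True  # in-line frame sync for video/audio/ANC

    def outputs(self) -> list[str]:
        if self.input_format not in VALID_INPUTS:
            raise ValueError(f"unsupported input: {self.input_format}")
        return list(OUTPUT_STAGES)

# Each unit supports up to four such paths.
unit = [TranscodePath("HEVC"), TranscodePath("SDI")]
assert len(unit) <= 4
assert len(unit[0].outputs()) == 5
```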
By leveraging Evertz’s expertise and innovation in media processing, IP networking, and cloud technologies, the RFK-ITXE-HW-DUO delivers incredible versatility and eliminates the need for bulky infrastructure.
GlobalM
SDVN Orchestration
GlobalM is the world’s most-advanced Software-Defined Video Network (SDVN). We provide a seamless, scalable and cost-efficient way to manage and distribute high-quality video content globally, simplifying workflows and enhancing operational flexibility across diverse broadcast environments.
We enable media outlets, broadcasters and event organisers to stream live, high-quality video content with minimal delay, saving between 50% to 90% over satellite and legacy networks. This significantly improves operational efficiency and scalability, making it easier to manage large events or remote productions without the need for specialised infrastructure.
TECHNICAL EXCELLENCE BACKED BY A COMMITMENT TO INNOVATION
By geolocating network resources for each origin and destination, GlobalM can precisely route video streams to the correct endpoints, scaling to multiple takers with unique stream URIs (Uniform Resource Identifiers) to ensure that each destination receives a stream that meets its specific requirements.
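The fan-out to multiple takers can be illustrated with a short sketch (all names and the URI scheme are hypothetical; GlobalM’s actual orchestration interface is not shown here):

```python
import uuid

def assign_stream_uris(origin: str, takers: list[dict]) -> dict:
    """Give each taker a unique stream URI tailored to its requirements.

    Each taker dict carries a geolocated endpoint plus per-destination
    parameters (e.g. a latency budget); a unique URI keys the stream.
    """
    routes = {}
    for taker in takers:
        stream_id = uuid.uuid4().hex[:8]  # unique per destination
        routes[taker["name"]] = {
            "uri": f"srt://{taker['endpoint']}/{origin}/{stream_id}",
            "latency_ms": taker.get("latency_ms", 200),
        }
    return routes

routes = assign_stream_uris(
    "stadium-cam-1",
    [{"name": "broadcaster-eu", "endpoint": "fra.example.net:9000"},
     {"name": "broadcaster-us", "endpoint": "nyc.example.net:9000",
      "latency_ms": 350}],
)
assert len(routes) == 2
assert routes["broadcaster-us"]["latency_ms"] == 350
```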
GlobalM’s emulation of a traditional router panel provides a familiar interface for users, replicating conventional master control room procedures. This feature simplifies the transition to a digital system, allowing users to manage video services using established workflows while benefiting from advanced automation managing complex processes in the background.
Dynamic scaling for managing and allocating resources means that our SDVN technology can adapt to changing demands, reduce latency and improve overall performance. Efficient resource management optimises usage and leverages the cloud as a commodity so users can implement the most cost-effective cloud providers, depending on the date and time of deployment.
GlobalM sits at the convergence of broadcast engineering and IT skill sets, with the orchestrator connecting the two sides and seamlessly managing the required resources and processes. This approach facilitates flexible and efficient management of video transport across global locations, enhancing service delivery and reducing operational complexities. A
streamlined experience removes the barriers to adoption for IP workflows, emulating a familiar workspace that broadcast engineers are comfortable with.
ADVANCED MONITORING AND TROUBLESHOOTING FOR RELIABLE VIDEO DELIVERY
We also enable customers to improve the accuracy and efficiency of video distribution by dynamically troubleshooting any issues with automated exception monitoring. Some examples include alarms triggered if two encoders are sending to the same endpoint, if an incorrect passphrase is entered or if link latency is too low. The system also communicates any packet loss and excessive packet retransmissions.
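The exception-monitoring rules listed here can be expressed as a simple check (thresholds are assumed for illustration; the “latency too low” case is interpreted as a configured receive latency below a safe multiple of the measured round-trip time):

```python
def check_link(link: dict) -> list[str]:
    """Return alarms for the exception conditions described above.

    Thresholds are illustrative, not GlobalM's actual monitoring rules.
    """
    alarms = []
    if link["encoders_on_endpoint"] > 1:
        alarms.append("two encoders sending to the same endpoint")
    if not link["passphrase_ok"]:
        alarms.append("incorrect passphrase")
    if link["configured_latency_ms"] < 2 * link["rtt_ms"]:
        alarms.append("link latency set too low for measured RTT")
    if link["packet_loss_pct"] > 0.5 or link["retransmit_pct"] > 5.0:
        alarms.append("packet loss / excessive retransmissions")
    return alarms

status = check_link({
    "encoders_on_endpoint": 1, "passphrase_ok": True,
    "configured_latency_ms": 120, "rtt_ms": 80,
    "packet_loss_pct": 0.1, "retransmit_pct": 1.2,
})
assert status == ["link latency set too low for measured RTT"]
```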
Our technology also improves transcoding processes with tailored processing for each receiver, including matching minimum latency requirements, codec modification and advanced bit-rate customisation. Streamlining processes through the orchestrator allows users to concentrate on delivering high-quality video services rather than managing the complexities of resource allocation. This separation of responsibilities enhances operational efficiency, reducing the risk of errors by leveraging automated and precise resource management.
GlobalM’s SDVN technology offers transformational benefits for media and broadcast organisations by enabling cost-efficient and flexible video delivery through optimised cloud resource management, reducing reliance on traditional infrastructure. This innovation is driving a major shift towards more agile, scalable and efficient broadcast operations by empowering broadcasters to reach global audiences with higher quality and lower latency.
Grass Valley Sport Producer X
All-in-One Production Solution — Sport Producer X is a groundbreaking AMPP application package that revolutionizes live production by integrating a software-based production switcher (Maverik X), replay system (LiveTouch X), graphics, audio and multiviewer into a single, user-friendly interface. Designed specifically for small to medium-sized live productions, Sport Producer X offers the power and flexibility needed to streamline production workflows, all from a single workstation.
KEY FEATURES:
• Single-Workstation Production Interface: Sport Producer X features intuitive, gesture-based controls for switcher operations, replay, and graphics, all within a customizable layout tailored to specific production needs. This design empowers operators to manage complex productions with ease, enhancing both efficiency and creativity.
• Comprehensive Workflow Integration: The platform seamlessly integrates with tactile interfaces and macro retrieval capabilities, enabling synchronized replay wipes and smooth, efficient operation. This ensures that every aspect of the
production process is cohesive, reducing the risk of errors and improving overall production quality.
• Flexible Deployment: With its Software-as-a-Service (SaaS) model and format-independent architecture, Sport Producer X provides unmatched agility and scalability. This flexibility makes it the perfect solution for seasonal broadcasts, event-based productions, and organizations seeking a dynamic production tool that can adapt to varying needs.
Sport Producer X is designed to serve a growing market segment that demands the efficiency, power, and advanced features of enterprise-class production tools, but at a more accessible price point. By combining sophisticated functionality with an intuitive, easy-to-operate toolset, Sport Producer X empowers a single operator to produce an entire program without compromising on quality or creativity.
This innovative solution is a direct response to the increasing demand for cost-effective yet powerful production tools that do not limit the operator’s creative potential. Sport Producer X sets new standards for flexibility, performance, and value in the live production industry, ensuring that even smaller productions can achieve broadcast-quality results.
Sport Producer X was demoed and launched at IBC2024, providing industry professionals with an opportunity to experience its capabilities firsthand. The solution will be available on the App Store starting in December.
IMAX
StreamSmart On-Air
The number one video technology challenge for streaming businesses is cost control, and the biggest cost for delivering video is Content Delivery Network (CDN) distribution charges. Designed to significantly reduce live streaming distribution costs, StreamSmart On-Air delivers a 15–25% bandwidth savings, translating into millions of Euros in reduced distribution costs and a better end-user experience.
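As a rough illustration of how a 15–25% bandwidth saving translates into money (the traffic volume and CDN unit price below are assumed for the example, not IMAX figures):

```python
# Illustrative only: example CDN egress volume and blended unit price.
monthly_egress_tb = 50_000   # a large live-streaming service (assumed)
eur_per_tb = 2.0             # assumed blended CDN rate
baseline_cost = monthly_egress_tb * eur_per_tb  # EUR 100,000 / month

for savings in (0.15, 0.25):  # the 15-25% range quoted above
    print(f"{savings:.0%} savings -> EUR {baseline_cost * savings:,.0f}/month")
```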
The IMAX approach to reducing live streaming distribution costs is entirely unique. StreamSmart On-Air employs IMAX ViewerScore (XVS), a patented perceptual quality metric based on IMAX VisionScience that measures video quality in real time based on human vision to ensure bitrate reductions only occur when they are visually imperceptible to viewers. StreamSmart On-Air, an API-based optimization software, leverages existing encoding and packaging workflows and uses an AI-driven approach to dynamically select optimal segments from ABR/HLS/standard streams and optimize bitrates in real time.
IMAX XVS is the only video quality metric that maps to the human visual system, making it the most accurate and complete measure of how humans perceive video quality. The Emmy Award-winning technology has a >90% correlation to Mean Opinion Score (MOS), verified across various video datasets. It sees quality differences that other metrics can’t
and works for all types of content, including live, VOD, HDR and 4K.
StreamSmart On-Air takes the source content, which is encoded to the top profile, using the provided encoder configuration. StreamSmart On-Air, in parallel, generates a set of alternate, lower bitrate profiles using Advanced Machine Learning. The software then analyzes and quantifies quality for every video segment from all the profiles and selects the lowest bitrate segments that maintain the same video quality, guaranteeing an optimal viewer experience while maximizing bitrate savings. It then sends an optimized manifest file to the Origin. Because the implementation is based on manifest manipulation, it is codec-, encoder- and configuration-agnostic.
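The per-segment selection step can be sketched as follows (toy bitrates and quality scores stand in for real renditions and IMAX ViewerScore values):

```python
def select_segments(profiles: dict, quality_floor: float) -> list[tuple]:
    """Per segment index, pick the lowest-bitrate rendition whose
    quality score still meets the floor. Illustrative of the approach;
    the real system quantifies quality with IMAX ViewerScore."""
    chosen = []
    n_segments = len(next(iter(profiles.values())))
    for i in range(n_segments):
        candidates = [
            (bitrate, scores[i]) for bitrate, scores in profiles.items()
            if scores[i] >= quality_floor
        ]
        chosen.append(min(candidates))  # lowest qualifying bitrate
    return chosen

profiles = {  # bitrate (kbps) -> per-segment quality scores
    6000: [95, 96, 95],
    4000: [94, 95, 88],
    2500: [93, 90, 80],
}
picks = select_segments(profiles, quality_floor=93)
# Easy segments drop to 2500 kbps; the hard third segment keeps 6000 kbps.
assert [bitrate for bitrate, _ in picks] == [2500, 4000, 6000]
```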
Importantly, StreamSmart has a single-ended (no-reference) component to its quality measurement, which is AI-based and designed using perceptually motivated Deep Neural Networks. The goal of the no-reference metric is to predict a full reference quality score, without relying on a source file for comparison. IMAX built a database that has tens of millions of sources and designed a machine-learning approach that directly passes pixels to a deep-learning model. This, along with the fact that it runs in real time at the speed of encoding and is computationally efficient, makes it possible to use on live workflows, where latency is a consideration and there is no source file to compare against to quantify quality.
Streaming businesses are facing enormous challenges this year. Subscriber growth is slowing, and churn is increasing, both impacting bottom lines. The focus for streaming businesses right now is capturing new revenue opportunities while significantly cutting costs. IMAX StreamSmart On-Air, with IMAX ViewerScore, offers seamless integration with existing workflows and saves 15–25% on distribution costs without compromising video quality. StreamSmart On-Air’s pricing is tied to a fraction of the cost savings, ensuring a cost-effective, risk-free solution that guarantees positive ROI.
IN2CORE
IN2CORE: QTAKE Monitor
THE CHALLENGE
The Apple Vision Pro device is a personal monitor that redefines not only how consumers watch content, but also on-set monitoring for professional film productions, which is particularly helpful in environments with space and lighting constraints and in stereographic productions. Stunning visual clarity, vibrant color reproduction, spatial audio, cinema vs. immersive mode, and more guarantee an exceptional viewing experience. In addition, the state-of-the-art technology used in Apple Vision Pro is expected to fuel more interest in 3D and immersive content, and QTAKE Monitor is perfectly positioned to support that trend.
THE INNOVATION
The QTAKE Monitor app was designed to offer film production teams advanced features for wireless live monitoring, independent video playback, collaborative metadata editing and frame-precise clip annotation. Compatible with productions of any size, it provides a full-featured experience on a local network while seamlessly extending its features to the cloud. This hybrid collaboration environment allows authorized crew members to work together effectively, regardless of their physical location.
While other film-production tools may utilize Apple Vision Pro to offer standard monitoring features, such as 2D playback and streaming, the QTAKE Monitor app stands out as the only solution capable of delivering live 3D and immersive content.
This capability is a game-changer for the film industry, providing filmmakers with the ability to view and interact with stereoscopic content in real time, with an unmatched level of control, precision and convenience.
Historically, the production of 3D content has required specialized 3D monitors and glasses, limiting the ability of crew members to engage with the stereoscopic content. With the integration of the QTAKE Monitor app and Apple Vision Pro, every crew member can watch 3D content in real time, personalized to their preferences and viewing angle.
This real-time interaction is crucial for making immediate, informed decisions during production, ultimately enhancing the efficiency and quality of the filmmaking process. By reducing reliance on postproduction corrections, it not only saves time and money but also enhances the overall quality of the final content.
KEY FEATURES
• Cinema and Immersive Modes: Filmmakers can choose to view content not only on a customizable cinema screen, but also fully immerse themselves in the action with immersive mode, tailored to their preferences.
• Multiview in 2D and 3D: QTAKE Monitor supports up to nine simultaneous viewing windows, allowing a flexible mix of 2D and 3D content for a comprehensive, real-time analysis.
• Ultra-Low Latency: The app delivers the content in a near-instant way, enabling real-time, interactive monitoring.
• Up to 4K Resolution: High-definition streaming, with resolutions up to 4K, ensures exceptional detail and clarity, preserving the highest visual standards throughout production.
• On-Set and Remote Collaboration: QTAKE Monitor delivers a consistent experience for both on-set and remote crew members, ensuring seamless coordination.
CONCLUSION
QTAKE Monitor’s Live 3D Stream to Apple Vision Pro represents an unparalleled monitoring experience in production of stereoscopic and immersive content and sets a new benchmark for the industry.
Interra Systems ORION Suite
Interra Systems’ ORION Suite provides real-time monitoring of linear and IP-based infrastructures, addressing all aspects of video streams, including QoS, QoE, closed captions, ad-insertion verification, reporting and troubleshooting. The suite — comprising ORION for real-time content monitoring and ORION-OTT specifically for verifying QoS and QoE for ABR videos — is the only software-defined platform that enables media companies to check for video and audio quality, ads verification, closed-caption monitoring and end-to-end quality tracing from a single dashboard.
ORION is designed to meet the ever-changing demands of media delivery, with the latest evolution curated for all cloud deployment and intelligent root-cause analysis needs, enabling streamlined media operations so that video service providers can deliver the highest-quality video.
KEY UPDATES INCLUDE:
• Improved monitoring density via the latest processors.
• Added support for enhanced live stream monitoring, including live service playback to expedite issue resolution and content quality verification.
• Enhanced video quality checks to detect blank frames and chroma errors.
ORION leverages high-performance CPUs to deliver higher monitoring density. By optimizing the use of server resources, ORION enables the monitoring of a greater number of streams on a single server, significantly reducing the hardware footprint while enhancing operational efficiency. This innovation translates into cost savings and streamlined operations for media companies, allowing them to manage a larger volume of content with fewer resources.
Moreover, ORION has made substantial advancements in root-cause analysis (RCA), to enable proactive and swift resolution of issues. Whether it’s video freeze, macroblocking, audio discrepancies or other quality-related challenges, ORION’s sophisticated analysis tools ensure rapid identification and resolution, thereby minimizing downtime and enhancing the overall streaming experience.
ORION specifically offers the industry’s first all-frame decoding and deep-dive analysis for high-priority channels, along with audio language detection for live services to facilitate content classification and personalization. Additionally, ORION now enables media companies to embrace the ST 2110 protocol through a powerful ORION-2110 software probe. Interra Systems continues to develop a non-reference, machine-learning-based quality scoring that will revolutionize media quality and delivery efficiency.
ORION-OTT NOW HAS NEW FEATURES THAT SUPPORT:
• Checks for low latency to improve the viewer experience in live streaming
• Overhauled errors and reporting for a more streamlined RCA
• A rich XLS report for group-level KPIs to provide valuable insights into overall performance
• Improved combinational checks for accurate detection of freeze/black frames with or without audio silence
• End-to-end ad insertion monitoring, including a comparison of externally supplied ad-scheduling information with DAI markers received within content
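The combinational freeze/black-with-silence check can be illustrated with a small classifier (the logic below is a sketch of the idea, not Interra’s implementation; real probes apply thresholds over time windows):

```python
def classify_segment(video_frozen: bool, black_frame: bool,
                     audio_silent: bool) -> str:
    """Grade a video fault by whether audio is also silent, in the
    spirit of the combinational checks described above."""
    if not (video_frozen or black_frame):
        return "ok"
    fault = "freeze" if video_frozen else "black"
    return f"{fault}+silence" if audio_silent else f"{fault} (audio present)"

assert classify_segment(True, False, True) == "freeze+silence"
assert classify_segment(False, True, False) == "black (audio present)"
assert classify_segment(False, False, True) == "ok"  # silence alone is not a video fault
```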
WHY ORION DESERVES TO WIN
As a pioneering force in furthering the cause of Quality of Experience (QoE), ORION consistently sets new benchmarks in content monitoring. Its commitment to enhancing the viewer experience through comprehensive monitoring and troubleshooting capabilities has made it an indispensable tool for media companies striving to deliver flawless streaming services. ORION’s innovations have been pivotal in delivering high-quality content to audiences, fostering a seamless and enjoyable viewing experience that is critical in today’s ultracompetitive streaming landscape.
Magnifi Magnifi
In the fast-evolving global sports AI market, Magnifi has emerged as a trailblazer. With the market projected to grow from $2.1 billion in 2020 to $16.7 billion by 2030, Magnifi’s AI-powered innovations are leading the charge, revolutionizing sports content creation and engagement. Our platform’s ability to intelligently capture and process content across over 70 sports, and distribute it with unmatched efficiency, has set new industry standards, positioning it as a strong contender for the IBC Best of Show award.
INNOVATION AND CUSTOMIZATION
Magnifi’s journey is characterized by relentless innovation, deeply rooted in an understanding of the unique challenges faced by the sports broadcasting industry. Our cutting-edge technology, which integrates vision analysis, natural language processing (NLP), and optical character recognition (OCR), ensures unparalleled accuracy and comprehensiveness, even in data-scarce environments.
A key feature of Magnifi is the editorial control it offers clients, allowing them to customize and fine tune outputs across social channels while maintaining high standards of quality and accuracy.
AI-ENABLED TRANSFORMATIONS
Magnifi has fundamentally transformed the traditional production model through innovative AI applications. Our AI-enabled ball-tracking technology has eliminated the need for manual content analysis and resizing for different platforms, streamlining the video-content creation process.
This year, Magnifi introduces a suite of cutting-edge features designed to enhance user experience, streamline workflows and enable content creators to deliver high-quality, engaging sports content with greater efficiency and precision:
• Geogating: This feature allows clients to restrict content access by geography, ensuring seamless compliance with copyright and local laws.
• Tenant Management: Streamlined authorization ensures that the right content reaches the right stakeholders effortlessly.
• In-Built Editor: Our new editor enables intuitive predictive
editing, with features like overlays, transitions, templates and more.
• Automated Subtitles: Generate subtitles in over 80 languages, expanding global reach and accessibility.
REAL-WORLD IMPACT
Magnifi’s AI solutions have delivered outstanding results for its clients. For instance, the Vietnam Pro Basketball League (VBA) faced challenges with limited resources and the need for accessible content. Magnifi provided a tailored solution that included AI-powered highlights, dynamic graphic overlays and social media publishing. The results were impressive, with a 773% increase in video production and a 76% rise in Facebook viewership.
Another success story involves a leading sports-content provider in India that aimed to enhance viewer engagement for Euro 2024. Magnifi’s technology auto-identified key moments, clipped them into bite-sized formats and optimized them for various digital platforms. The content was then translated into local languages, ensuring coverage of key markets. The results were remarkable:
• 50% increase in users.
• 64% rise in total views.
• 350% increase in live views compared to Euro 2020.
• 560% growth in users aged 25-44 during the opening weekend.
CONCLUSION
Magnifi is more than just a pioneer in AI sports technology; it is a transformative force setting new standards of excellence in the industry. Our innovative solutions and proven realworld impact make it a worthy nominee for the IBC Best of Show award in the TV Tech category.
Matrox Video
Matrox Avio 2 IP KVM
Matrox Video launched a tech preview of Avio 2, the world’s first open standards-based, IPMX/ST 2110 IP KVM extender, at the 2024 IBC Show. The newest addition to Matrox Video’s successful IP KVM product line, the Avio 2 HDMI IP KVM extender provides unparalleled video quality and performance, with support for up to 4K60 4:4:4 resolution, delivering real-time remote performance without compromise.
Avio 2 marks a significant advancement in IP KVM technology by embracing open standards to increase quality, interoperability and flexibility. Delivering audio, video and USB control over IP in an industry-standard format unlocks new possibilities for workflows in broadcast studios, OB vans, high-quality media production and live events. System integrators, consultants and end users will be able to optimize their workflows, reduce cost and complexity and future-proof installations.
Avio 2 integrates with existing infrastructure and scales efficiently for installations of any size. Enabling secure, real-time KVM-over-IP extension and switching over 1 GbE and supporting true uncompressed 4K 4:4:4 over 10 GbE networks, the Avio 2 IP KVM extender is suitable for the most demanding applications, such as the handling of detailed content that requires high-color fidelity and image quality.
A uniquely versatile IP KVM technology with multiple codec options, including support for JPEG XS, Avio 2 raises the bar for performance and enhances both productivity and operational efficiency in mission-critical environments. As part of the IPMX standard, Avio 2 supports NMOS, an open API for device discovery and connection management, facilitating signal routing and control via third-party systems in broadcast and live-event workflows.
Maxon
ZBrush for iPad
In the evolving landscape of film and TV production, there is continuous demand for high-quality 3D modeling and digital sculpting tools. Maxon’s Academy Award-winning ZBrush has established itself as the premier tool for digital sculpting, widely adopted by VFX artists across the industry. Its robust feature set allows artists to create intricate and detailed 3D models that bring characters and environments to life with unprecedented realism. ZBrush’s powerful capabilities cater to a variety of workflows, enabling seamless integration with other software and tools used in production and making it an essential asset for VFX artists aiming to deliver high-quality visual effects. For these and many more reasons, ZBrush for iPad is a highly anticipated product launch from Maxon.
By merging the power and capabilities ZBrush is revered for with the portability and intuitive nature of the iPad, this game-changing product allows artists to break free from their desktops and be creative wherever they feel inspired, whether they create high-quality 3D models on set, while traveling or simply outside the confines of their studio. The traditional constraints of desktop systems will be a thing of the past. This mobility translates to unparalleled productivity and empowers artists to push the boundaries of creativity.
One of the standout features of ZBrush for iPad is its highly intuitive interface, designed to accommodate both seasoned professionals and beginners. The touch-based controls leverage the iPad’s unique capabilities, making the sculpting process feel natural and engaging. The customizable user interface expands on the desktop version’s QuickMenu, allowing artists to arrange tools for a personalized experience. Users of the Apple Pencil and Pencil Pro can utilize the double-tap and squeeze functions, tailoring them to perform specific actions.
ZBrush for iPad sets itself apart from other 3D modeling apps through its advanced sculpting tools and exceptional precision. The app offers over 200 of the same proprietary digital sculpting brushes found in its desktop counterpart, enabling artists to achieve intricate details with ease. Key features such as ZRemesher, Sculptris Pro and DynaMesh are included to ensure that artists can focus on their creative vision without being hindered by technical limitations. ZRemesher generates evenly distributed polygon meshes, preserving essential surface details, while Sculptris Pro dynamically adds and reduces triangle tessellation with each brush stroke. DynaMesh allows for continuous retopologizing, enabling artists to stretch and add volume to their digital clay effortlessly.
One of the most impressive aspects of ZBrush for iPad is its ability to handle high polygon counts (dependent on iPad memory/version). For instance, the latest M4 iPad with at least 1 terabyte of storage can reach up to 92 million polygons per mesh, ensuring that even the most complex models retain their detail and integrity. ZBrush for iPad seamlessly integrates with the desktop version, with GoZ enabling one-click file transfers, allowing artists to share ZTool (ZTL) and ZProject (ZPR) files between devices effortlessly and transition between platforms without missing a beat.
MediaKind
Fleets and Flows
CHALLENGE
Managing and securing video distribution for large-scale live events such as internationally renowned sports competitions and leagues involves handling video feeds from multiple venues, managing a network of edge devices and ensuring high-quality, secure delivery across various network paths. Traditional systems often lack the comprehensive control, real-time monitoring and flexibility needed for successful live sports production, leading to quality issues, security risks and increased operational costs.
SOLUTION — A FRESH APPROACH TO LIVE PRODUCTION AND ITS ECONOMICS
Recognizing the above limitations, MediaKind has developed its Fleets and Flows solution to offer a more adaptable and scalable media production and delivery framework, offering unparalleled flexibility and control.
The solution includes:
• MK.IO Beam: Leveraging containerized software on commercial off-the-shelf (COTS) servers, Beam replaces single-purpose hardware with a flexible and reprogrammable solution for video edge processing. Each unit shares a common clock reference and media timestamping approach, enabling asynchronous processing and delivery over jittery IP networks while facilitating hybrid cloud/ground workflows.
• MK.IO Fleet Management: Beam devices are managed through MK.IO’s unified cloud control plane, allowing operators to easily upgrade, configure and monitor their entire production network.
• MK.IO Flow Technology: This intuitive API simplifies feed management, enabling operators to define feeds with a few clicks while the system automatically handles intermediate routing, configuration and processing.
• MK.IO AI-Assisted Operations: Combined with Flow, this
natural language interface assists operators in setting up, configuring, monitoring, and dismantling complex workflows without requiring advanced system knowledge.
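An intent-style feed definition of the kind described for Flow might look like this (a hypothetical shape for illustration, not the actual MK.IO Flow API; the point is that the operator states only endpoints and the orchestrator derives routing and processing):

```python
def define_feed(source: str, destination: str,
                profile: str = "1080p50") -> dict:
    """Hypothetical intent-based feed definition: the operator names a
    source and a destination; the orchestrator expands that intent into
    the intermediate acquisition, processing and routing steps."""
    plan = [
        f"acquire {source}",
        f"transcode to {profile}",
        f"route {source} -> {destination}",
        f"deliver to {destination}",
    ]
    return {"source": source, "destination": destination, "plan": plan}

feed = define_feed("venue-3/cam-1", "cloud-prod/mixer-a")
assert feed["plan"][0] == "acquire venue-3/cam-1"
assert len(feed["plan"]) == 4
```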
MediaKind’s innovation goes beyond addressing the technical limitations of existing systems by focusing on economic challenges:
• Using a reprogrammable platform and a cloud control plane reduces the need for fixed infrastructure investment and eliminates the cost of developing bespoke control interfaces.
• With AI assistance, operators can automate routine tasks and easily access expert support.
• The solution is cost-efficient, as users only pay for the minutes they use.
OUTCOME
Implementing MediaKind’s Fleets and Flows technology enhances live sports events by improving efficiency, video quality and scalability:
• Operational Efficiency: Automation reduces manual intervention and human error, streamlining the management of complex, large-scale events.
• Video Quality and Reliability: Centralized control and real-time monitoring ensure high-quality, reliable video delivery over dynamic network paths, maintaining audience engagement.
• Scalability and Flexibility: Supports commercial off-the-shelf (COTS) hardware deployment, allowing quick adaptation to changing requirements for both temporary and permanent setups.
MultiDyne MDoG Series of IP Gateways
The MultiDyne MDoG Series offers two modular openGear solutions that help customers build stronger bridges between the SDI and IP worlds. The MDoG Series includes SMPTE ST 2110 network interface and low-latency JPEG-XS codec modules to efficiently manage uncompressed and compressed signals as they move between legacy systems and IP networks, with full redundancy to optimize performance. Both products offer multichannel and bidirectional conversion between SDI and ST 2110 as signals move on and off networks.
The MDoG Series was developed to fulfill critical needs for broadcasters and media businesses as they navigate living in two disparate worlds. MultiDyne brings harmony to the situation by unifying these two worlds through seamless interconnection. MultiDyne’s thoughtful engineering packs many pertinent features required for seamlessly moving SDI legacy signals to and from ST 2110 IP networks onto space-saving openGear cards.
MDoG-6060 ST 2110 IP Gateways are available in multiple I/O configurations for 3G-SDI (3x3, 6x2 and 2x6) and 12G-SDI (1x1, 2x0 and 0x2) systems. Each configuration offers two 25 Gb/s network interfaces to support redundant transport streams, strengthened through SMPTE ST 2022-7 compliance. The ST 2022-7 standard provides hitless merge functionality: identical packets are sent over both network paths, and the receiver rebuilds the signal packet by packet from whichever path delivers each packet intact. This ensures that streams are repaired along their journeys, and arrive reassembled and intact upon reaching their destinations.
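The hitless-merge behaviour described above can be sketched in a few lines. This is an illustrative sketch only: the function name and the dict-based packet model are invented for the example, and real ST 2022-7 gateways perform this selection in hardware on timed RTP packets.

```python
def hitless_merge(path_a, path_b):
    """Merge two redundant packet streams keyed by sequence number.

    path_a / path_b: dicts mapping sequence number -> payload (None = lost).
    Returns the reconstructed stream as a list of payloads in order.
    """
    merged = []
    for seq in sorted(set(path_a) | set(path_b)):
        # Prefer whichever path delivered this packet; identical content
        # on both paths makes the choice transparent to downstream gear.
        packet = path_a.get(seq) or path_b.get(seq)
        merged.append(packet)  # None only if both paths lost the packet
    return merged

# Example: each path drops a different packet, yet the merge is complete.
a = {1: "p1", 2: None, 3: "p3", 4: "p4"}
b = {1: "p1", 2: "p2", 3: None, 4: "p4"}
print(hitless_merge(a, b))  # ['p1', 'p2', 'p3', 'p4']
```

The key property is that a stream only breaks when both paths lose the same packet, which is why diverse network routing matters for ST 2022-7 deployments.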
Each channel within the MDoG-6060 ST 2110 Gateway processes one video, 16 audio and one ANC data flow, and SDI signals are frame-synced within the module. MultiDyne has also incorporated NMOS workflows for in-band control and configuration, while allowing openGear’s DashBoard application direct access to remote monitoring functions and firmware upgrades.
JPEG-XS encoder and decoder modules support TR-08-compliant, low-latency compression or decompression of up to two channels of 12G-SDI, with the added benefit of ST 2022-7 redundancy over dual 10 Gb/s network interfaces.
Offering a module to support JPEG-XS compression is highly valuable as broadcast and production teams increasingly navigate hybrid operations. SDI signals are uncompressed, and the ST 2110 suite of standards is structured to support uncompressed workflows. While this provides the same broadcast-quality video resolutions that broadcasters have long valued in the SDI world, uncompressed transport over IP networks is simply not a viable option for everyone today. Throughput is limited; for example, a broadcaster working with a 25G ST 2110 pipe has bandwidth for two uncompressed transport streams.
JPEG-XS compression delivers roughly 20 times the efficiency over that pipe, using a fraction of the bandwidth with no noticeable latency or loss of visual quality. The optimized bandwidth usage translates to much lower costs for broadcasters, while content creators benefit from the codec’s low-latency encoding and decoding capabilities for their live productions. This solution essentially allows broadcasters to deliver the same experience as with SDI.
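The bandwidth arithmetic behind this claim can be checked back-of-the-envelope. The figures below are illustrative assumptions, not MultiDyne specifications: roughly 12 Gb/s for one uncompressed UHD (12G-SDI-class) ST 2110 stream, and the ~20:1 ratio cited for JPEG-XS.

```python
PIPE_GBPS = 25.0          # 25G ST 2110 network pipe
UNCOMPRESSED_GBPS = 12.0  # approx. one uncompressed UHD stream (assumption)
JPEG_XS_RATIO = 20        # ~20:1 compression cited for JPEG-XS

uncompressed_streams = int(PIPE_GBPS // UNCOMPRESSED_GBPS)
jpeg_xs_gbps = UNCOMPRESSED_GBPS / JPEG_XS_RATIO
jpeg_xs_streams = int(PIPE_GBPS // jpeg_xs_gbps)

print(uncompressed_streams)  # 2 uncompressed streams fit the pipe
print(jpeg_xs_streams)       # ~40 JPEG-XS streams in the same bandwidth
```

Under these assumptions the same 25G pipe that carries two uncompressed streams carries around forty JPEG-XS streams, which is the order-of-magnitude cost argument the text is making.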
The MDoG-6061 JPEG-XS codec module addresses the need for high-density signal transport in modern workflows where uncompressed transport isn’t yet a reality.
MwareTV
TVMS App Builder
Streaming television is a popular route to new revenues, subscriber retention and enhanced consumer understanding. The expectations of such a service, though, are very high: Consumers have long experience of excellent and varied content, high quality delivery and intuitive ease of use.
MwareTV has developed a platform which provides a one-stop offering, delivering against all the key challenges and encompassing everything needed to establish and maintain a service.
A particularly demanding issue is the consumer user interface. On the back end, this has to initiate a lot of complex activities, validating the subscriber and request, selecting the right formats and resolutions to deliver, and identifying appropriate advertising and marketing to surround the stream.
At the front end, it needs to provide simple and intuitive operation, so consumers are not frustrated or need support. This is particularly challenging, as subscribers will move freely between smart TVs, computers, tablets and phones from all the leading vendors. Within the limits of each device type, the user interface must be identical if the subscriber is not going to be confused and frustrated.
On top of this, the user interface must exactly match the branding guidelines of the organisation providing the service. It must look and feel like part of the overall offering, whether that is a telco or a housing association. Lose the branding and you lose consumer loyalty.
All this complexity makes developing and maintaining multiplatform user interfaces a demanding and time-consuming task. That is why the MwareTV platform incorporates a unique and remarkable no-code App Builder.
This remarkable tool transforms user interface design into a simple drag-and-drop operation. Zero coding skills are required. Zero.
To create a user interface for any platform, you simply assemble the consumer functionality you need. Make the buttons, menus and lists look the way you want. Brand it in corporate colours and add graphics to match your identity.
The underlying intelligence generates all the required links and calls to transform actions in the user interface to streaming the right material at the right time. Design the perfect user interface and App Builder creates compact, accurate, reliable, highly efficient code automatically. App Builder is a core part of the MwareTV Television Management System: there is no need for additional integration effort.
App Builder 2.0 was introduced at IBC2024. It adds support for all the latest trends, such as serving shorts and other content refreshed at an extremely high rate, and FAST channels with parameters for tailored advertising management.
It operates beyond conventional streaming television, integrating educational games for children, podcasts and video courses. You can design user interfaces which draw on rich metadata from public sources, as well as your own asset management, to add to the experience and keep consumers engaged with the service.
With App Builder 2.0, your service looks the way you want, delivers to the expectations of subscribers and is online and earning revenues as fast as possible. Technically and commercially, it is transformative.
NEP Group Mediabank
Mediabank is a cloud-based media asset-management (MAM) solution that offers comprehensive features engineered to meet the advanced workflow needs of enterprise clients:
• Intuitive Interface: Mediabank’s intuitive archive cataloging system, with its familiar folder view, makes organizing and retrieving archived content straightforward and efficient. This feature is particularly beneficial for the large media libraries of enterprise clients.
• Live Ingest: Mediabank’s browser-based application allows users to upload content seamlessly, assign unique content IDs, and automatically link assets with metadata, streamlining content management and distribution in accordance with digital rights guidelines. Mediabank can record live feeds, synchronize multiple angles, add metadata and execute automated workflows, like auto-clips, in real time.
• Collaboration Tools: Mediabank’s intuitive, real-time collaboration tools enable teams to share updates and comments quickly and clearly, facilitating an efficient approval process that saves time and money.
• Robust Security: Mediabank is ISO 27001-certified and adheres to rigorous security protocols, protecting against unauthorized access, data breaches and other security threats.
While Mediabank has been trusted by many of the world’s largest live and near-live productions since 2008, our development team recently completed a major refresh that further modernizes the platform. Unveiled this summer at a major
football tournament in Europe, these updates mark a significant milestone in the platform’s evolution.
This update wasn’t just about adding new functionalities; it was a complete overhaul of the underlying code, making the platform more agile and responsive to the ever-changing needs of its clients. The refresh of Mediabank’s code base has future-proofed the platform, ensuring it can swiftly adapt to new demands and technological advancements. This agility is crucial in the fast-paced broadcast industry, where the ability to respond quickly to changes can be a game-changer.
Another standout aspect of this launch was the introduction of an entirely redesigned user interface. Developed with user experience at its core, the interface is more intuitive and allows users to quickly find the right content more easily and efficiently without the need for extensive training. The streamlined navigation and improved search capabilities have drastically enhanced user satisfaction and productivity.
These enhancements had an immediate impact on the football tournament’s success and Mediabank played a pivotal role in shattering the previous record for content downloads (110,262 files/3,486 hours). The success of these updates has also had tangible financial benefits for the tournament organizers. Seeing the improved access to a greater volume of content, users have expressed a willingness to invest in higher-priced packages for future competitions. The perceived value of these packages has risen, driven by the enhanced features and improved user experience provided by Mediabank.
These latest developments have not only elevated the platform’s capabilities but also significantly boosted the value to our clients and platform users. The refreshed codebase, new user interface and resultant user satisfaction have set new benchmarks in the industry, solidifying Mediabank’s position as a leader in media asset management and making it a deserving candidate for a Best of Show Award at IBC2024.
Net Insight Nimbra Connect iT
BRIEF SUMMARY:
Nimbra Connect iT revolutionizes live internet transport workflows with its seamless deployment and intuitive user experience. It ensures effortless operation without heavy management overheads. Providing a scalable solution from a simple entry point, Connect iT can be used anywhere from a small, single-venue live event to global events with a large number of streams. Featuring a transparent business model, it offers cost-effective workflow integration, enabling content creators and broadcasters to extend their reach and discover new revenue opportunities.
DETAILED SUMMARY:
Nimbra Connect iT is a groundbreaking platform designed for primary contribution and distribution transport workflows of up to 100 concurrent streams, while simplifying media workflow integration, operations, maintenance and scalability. Its standout features are resilience and observability, which allow cost-effective internet media transport to be used for premium content tiers.
Deployment is straightforward, building from small virtual software modules, allowing users to start with just two modules and manage content delivery. This modularity enables quick deployment and expansion, offering unmatched ease of entry into the market with advanced features. The integration layer, powered by an Open API, ensures Nimbra Connect iT not only deploys easily but also harmonizes with various systems and tools, facilitating streamlined and efficient workflows.
Maintenance is minimal, promoting trouble-free operation without constant oversight. This is crucial for entities aiming to minimize operational distractions and focus more on content creation and delivery. The solution’s architecture is inherently scalable, designed to grow with the business, enhancing network capabilities without the complexities of scaling up.
Nimbra Connect iT is future-focused, providing tools and features to help users manage content and expand their audience reach. It opens new revenue avenues by enabling access to ubiquitous connectivity on demand, targeting wider audiences, and exploring new market opportunities without the traditionally associated high costs.
Operative Adeline AI
Media companies are rushing to capture value from the popularity of CTV and FAST channels, particularly as audiences move from linear to streaming, creating unprecedented complexity for sales, analytics and operations teams. Operative’s Adeline uses the power of AI to take the manual work out of revenue management for these critical teams and deliver new insights to the company, opening the door for strategic growth in CTV and beyond.
To attract advertisers and satisfy their demand to reach audiences, media companies are testing a variety of new ad types from traditional 30-second and 15-second spots on live sports events to interactive ads allowing for audience targeting, much like a digital display ad. These new ad products add new inventory, audience data and technology to an already overloaded ad revenue business. Sellers are under pressure to deliver for advertisers who want it all, from premium brand opportunities at scale to highly targeted audiences all at competitive prices, resulting in proposals thousands of lines long. Media companies want to deliver compelling solutions, but also need to maximize their yield, requiring access to real-time data and detailed models. Additionally, operations and support teams are dealing with the fact that every new ad product for CTV creates hundreds of hours more work building
proposals, setting up and delivering campaigns and pulling reports.
With these challenges in mind, Operative developed Adeline, an AI solution that streamlines the complexity of revenue management at critical points in the process. From proposal building to operations to reporting, Adeline dramatically reduces tedious or manual work, making it much easier to deliver strategic solutions to advertisers and get valuable insights for the business.
Adeline’s first module tackles the complexity of media planning. Adeline instantly builds fully optimized plans and proposals. The user simply describes the details of the RFP, and Adeline delivers a detailed, optimized campaign plan that adheres to all requirements, following the pricing and product availability in the system. Adeline will also identify and select inventory when creating a plan, maximizing yield. What used to take hours or days can be accomplished almost instantly. Adeline sets the stage for so much more: rather than spending hours looking up pricing and inventory or copying and pasting, teams can spend that time crafting proposals better tailored to an advertiser’s RFP and more profitable for the company.
Adeline provides a GPT-enabled assistant that deeply understands Operative’s tech and a client’s unique instances, speeding up processes across the business. Using chat to access data, product catalog and processes, users can ask questions to quickly and easily navigate the platform and improve workflow. Adeline provides business intelligence and can be used as an analytics tool to help users quickly get information and data-driven answers about everything from deal status, product pricing, inventory availability and more.
With Adeline, sales teams can sell faster and be more responsive to buyers, creating proposals quickly and easily while automating tasks. Operations teams accelerate the speed of work with generative and data-driven intelligence. Companies can supercharge campaigns and customer data with predictive analytics that deliver impactful outcomes for advertisers.
Proton Camera Innovations
PROTON FLEX
Proton Camera Innovations — a German innovator in the field of miniaturised cameras — submits for consideration its new PROTON FLEX, a unique miniature camera that uses a flexible design to maximise performance in any live production that requires heat and space management.
With an overall profile of just 28 mm by 28 mm, the PROTON FLEX separates the lens from the main body using a flexible flat-ribbon cable. By separating the lens and sensor on one side from the video processing chip and power supply on the other, the PROTON FLEX maintains a discreet profile that can be flexibly integrated in a range of operational contexts. For instance, in motor sports and helmet-mounted applications, the divided lens/video-processing construction allows for better weight distribution, whilst for applications where operational temperatures are a concern, the PROTON FLEX can withstand outside temperatures of up to 75 degrees. As a result, the mini cam provides a level of operational flexibility which significantly increases the creative potential of the camera, offering broadcast operators options limited only by their imagination.
All of this comes with no compromise to visual clarity and sharpness. The lens offers a 97-degree field of view, whilst the 12-bit sensor and camera body deliver exceptional image quality. Incorporating two separate PCBs, the PROTON FLEX leverages the power of a VEGA microprocessor to deliver crisp, full-HD 1080p images.
What makes the PROTON FLEX a prime candidate for the Best of Show award is not just the technological particulars of the camera, but the design philosophy which underpins it. Expanding upon the flagship PROTON CAM — the smallest broadcast camera currently available on the market — the PROTON FLEX is the result of a European R&D process entirely controlled by Proton. This allows for an uncompromised approach to design — particularly in relation to the development of the chip set and hardware integrations — and a constant process of incremental improvement through ongoing software updates.
Building upon this proprietary base, Proton Camera
Innovations have developed an approach in which they rework the physical design of each new camera in order to adapt it to a specific need. Thus, whilst the original PROTON CAM is ideal for situations in which space is the ultimate consideration, the PROTON FLEX focuses on applications where weight distribution, reduced space and temperatures are key concerns.
It is thus submitted that the PROTON FLEX maintains award-winning potential because its innovative split lens/video-processing design provides a level of flexibility ideally suited to an extensive range of applications and allows users much higher levels of creative potential than other comparable minicams, granting cinematographers the opportunity to craft unusual, exciting and visually progressive framing and shot types. Crucially though, this flexibility does not compromise technological capability, and the PROTON FLEX delivers exceptional imagery and a high degree of intuitive usability.
QuickLink QuickLink StudioEdge
QuickLink, the leading global provider of multicamera video production and remote contribution solutions, launches QuickLink StudioEdge. With StudioEdge, broadcasters and production teams can seamlessly introduce remote guests from every major video conferencing platform, including Microsoft Teams, Skype and Zoom, into productions optimized with ground-breaking AI technology. Through broadcast-grade outputs, including SDI, NDI and ST 2110, StudioEdge enables seamless integration into video production systems, such as Ross, Grass Valley, Vizrt, Avid and many more.
Utilizing built-in, industry-best QuickLink StudioCall technology, StudioEdge allows users to create real-time group conversations, panel discussions and live interviews, as it can manage any combination of remote guests from around the world with direct communication and chat support for guidance. The system offers a browser-based remote-control interface via QuickLink’s cloud platform from any global location. QuickLink StudioEdge technology takes remote guest integration to unprecedented heights by utilizing QuickLink’s renowned remote-control expertise over production elements.
StudioEdge simplifies the workflow of introducing high-quality callers to create engaging productions while cutting down on costs. Production teams no longer need several solutions, computers and converters, as StudioEdge takes the pain out of integrating disparate, competing technologies, and harmoniously unites them into a simple, single, elegant and
easy-to-use production management system. For over 21 years, QuickLink has been an industry leader in remote guest solutions with QuickLink TX (Skype TX) and StudioCall being integrated into virtually every production suite globally. StudioEdge unites this expertise to create the Swiss Army knife of remote guest solutions.
StudioEdge supports four channels of broadcast-quality discrete audio and video from Microsoft Teams, Skype, Zoom and QuickLink StudioCall remote guests. Seamlessly integrated into a production suite via IP and hardware inputs/outputs, this powerful solution supports direct, off-air communication between operator and remote guest(s), offering complete control over your production.
In addition, QuickLink recently launched ST 2110 support for StudioEdge, as well as its StudioPro solutions, by adopting Matrox DSX LE5 ST 2110 network interface cards. QuickLink’s ST 2110 compatibility ensures that StudioEdge hardware and software systems can work seamlessly with equipment from various vendors without compatibility issues, allowing uncompressed video, audio and ancillary data to be transported over standard IP networks. With ST 2110, production resources can be distributed across different locations, facilitating remote production and collaboration, and allowing for flexible and scalable production workflows that can be easily adjusted to meet changing needs.
Shure SLX-D Portable Digital Wireless Systems
Shure’s new SLX-D Portable Digital Wireless Systems deliver the superior digital audio, reliable RF performance and intuitive setup features of SLX-D in new, durable form factors designed for the film, electronic newsgathering (ENG), broadcast and video industries.
Shure’s expanded SLX-D Digital Wireless family includes SLXD5 Portable Digital Wireless Receiver and SLXD3 Plug-On Digital Wireless Transmitter.
SLX-D Portables are built for quick, out-of-the-box installation, with intuitive setup features so users can start recording immediately. With SLX-D Portables, end users will experience the scalability, high-performance wireless and convenient power management of SLX-D in a new and convenient form factor.
The SLXD5 Portable Digital Wireless Receiver is an SLX-D wireless receiver in a flexible, miniaturized design. The SLXD5 Portable Single-Channel Digital Wireless Receiver can be
installed on-camera through the provided cold-shoe mount as well as in an audio bag, enabling full-featured use in any location. It automatically scans for the best frequency and with IR Sync, users can easily pair receivers to transmitters in seconds for instant single-channel configuration. For higher channel counts, SLXD5 includes MultiMic Mode, which facilitates easy deployment and monitoring of multiple sound sources from a single receiver.
The SLXD3 Plug-On Digital Wireless Transmitter transforms any XLR source into a wireless transmitter — including dynamic and condenser microphones. The SLXD3 provides phantom power and is ideal for wireless transmission from boom-mounted shotgun mics. Paired with the Shure SM63L Dynamic Microphone or the iconic Shure SM58, the SLXD3 makes for the perfect handheld interview microphone.
Both SLXD5 and SLXD3 are available as individual components for use with any compatible SLX-D digital wireless system.
Additionally, both SLXD3 and SLXD5 feature high-luminosity OLED screens where users can monitor battery life, audio and RF metering, and frequency tuning.
Regardless of where professionals need to be for the job, the new portable systems guarantee customers stress-free audio capture and industry-leading digital wireless that will perform in the toughest locations.
Sound Devices LLC A20-SuperNexus
Sound Devices Astral wireless technology is a game-changer for the audio industry, introducing never-before-seen features that advance the art and science of sound. The A20-SuperNexus receiver, the new flagship product in the Astral line, continues that tradition, packing unprecedented features and up to 32 audio channels into a single 19-inch rack unit — something no other comparable product on the market today can do.
A 16-channel receiver, the A20-SuperNexus can be expanded to 24 or 32 channels via software licenses — no additional hardware required. No other receiver has that channel density in a single rack unit.
The A20-SuperNexus receiver also has HexVersity (2-, 3-, 4-, 5- or 6-antenna diversity) and built-in antenna distribution with six antenna inputs, allowing for antenna configurations like Diversity, 4Versity and HexVersity or Antenna Matrixing, along with simultaneous audio outputs, including analogue, AES3, MADI and Dante.
With the widest 169-1,525 MHz global tuning range courtesy
of SpectraBand, the A20-SuperNexus receiver is a convenient choice for operation anywhere in the world. The A20-SuperNexus receiver comes with other features inherent in Astral-family products, such as Sound Devices’ patented wide dynamic range audio input GainForward; Real Time Spectrum Analysis; automated transmitter assignment at the receiver via AutoAssign; and remarkably long-range remote control via Sound Devices’ proprietary NexLink architecture. In addition, the A20-SuperNexus receiver’s RF Mirror Mode for full redundant operation delivers peace of mind.
The A20-SuperNexus receiver features native Dante, MADI, AES and Analog outputs, and when Sound Devices’ companion product, the A20-Opto expansion box, is connected to the A20-SuperNexus receiver, additional Optocore audio inputs and outputs and full redundant power supplies are added.
With Sound Devices A20-SuperNexus receiver, the future of RF is now.
Spherex SpherexAI
At IBC2024, Spherex launched its new flagship product, SpherexAI. The first-of-its-kind platform provides the most comprehensive solution for generating age ratings and ensuring video content complies with local regulations and cultural norms for any global market.
Preparing content for global distribution is complex, requiring significant time and resources. SpherexAI automates this process, providing a cost-effective, efficient solution that analyzes content for global compliance in minutes. SpherexAI delivers unparalleled accuracy, depth and speed in assessing film, television, social video and advertising content for worldwide distribution.
SpherexAI Age Ratings provides compliant age ratings and advisories for video content in 200-plus countries and territories. The system evaluates content against regional regulations, platform standards and cultural norms, ensuring global compliance while mitigating legal and brand risks. SpherexAI generates detailed reports and Edit Decision Lists (EDLs) on content elements such as violence, language and sexual content, offering edit suggestions to avoid censorship and lower age ratings, maximizing potential audience reach. Spherex delivers its ratings directly to regulators for inclusion in national classification databases and manages all related inquiries. As the only commercial provider with worldwide
regulatory approval, Spherex age ratings are accepted by official boards across 200-plus countries and territories. Its regulatory acceptance eliminates the need for multiple submissions, saving critical time, money and resources while ensuring brand safety.
SpherexAI Video Compliance revolutionizes content analysis for worldwide distribution. It meticulously evaluates film, television, social video and advertising, ensuring compliance with local regulations and cultural norms. Analyzing more than 1,000 attributes, SpherexAI provides unparalleled depth in content assessment. Its geolocation-sensitive technology interprets visual, audio, dialogue, and on-screen text within specific regional contexts, precisely identifying and time-stamping contentious material. This comprehensive approach offers a nuanced understanding beyond simple detection, which is crucial for navigating the complexities of global content distribution. By automating the review process and providing culturally sensitive insights, SpherexAI streamlines compliance workflows, safeguards brand integrity and maximizes global audience reach.
INDUSTRY PARTNERSHIPS
Innovative media and entertainment companies, including Cineverse, All3Media International and VA Media, are utilizing SpherexAI to distribute their content worldwide with confidence, knowing their content adheres to regulatory rules and compliance standards while optimizing audience engagement and monetization.
By choosing SpherexAI, clients gain a strategic partner in global content distribution. Its unique combination of geographic coverage, cultural intelligence and regulator relationships empowers clients to confidently expand their global reach while minimizing risks and maximizing audience potential.
Strada Strada
One year ago, brothers Michael and Peter Cioni launched Strada at IBC2023 with little more than a slide deck. This year, Strada was an official IBC exhibitor and demonstrated the industry’s first AI-powered search and collaboration platform. Recently the team at Strada have:
• Cultivated a beta waitlist of over 2,500 international customers.
• Recently launched a highly successful private beta.
• Assembled a team of 13 across design, engineering and marketing.
• Produced over 30 episodes of their award-winning YouTube series.
With Strada, creative teams can collaborate in the cloud and search for specific moments within their audiovisual libraries in a completely intuitive way: by faces, locations, objects, words and even emotions. Strada is taking AI to a place few in the industry are going, and the response from creatives has been overwhelming.
Built for all types of creative teams, Strada connects to third-party clouds, analyzes thousands of assets simultaneously, then transcribes, translates and tags every moment. All of this metadata unlocks a whole new level of search fidelity, turning slow and laborious searches into instant results. As a versatile cloud platform, Strada benefits active productions, both big and small, as well as audiovisual media libraries.
With a specific focus on interoperability and workflow, Strada can be connected with nonlinear editing platforms, including Adobe Premiere, Apple Final Cut Pro, Blackmagic DaVinci Resolve and Avid Media Composer, so editors in cutting rooms can also take full advantage of the creative work done by collaborators in Strada.
In mid-June, Strada was released in private beta and has been met with exceptional feedback from hundreds of creatives who are putting the new tool to use. George Edmondson, director at Emmy Award-winning Seed Creative, said, “Strada is so fast and versatile, it saves us multiple days on every project, which frees up our team to focus on more creative tasks.”
Telos Alliance
Linear Acoustic AERO-Series DTV Audio Processors
AERO.20, AERO.200 and AERO.2400 are the brand-new DTV hardware audio processors from Linear Acoustic. They represent the latest and most advanced offerings in the long-running AERO lineup and are the direct replacements for the trusted AERO.10, AERO.100 and AERO.2000 products.
Like their predecessors, AERO.20, AERO.200 and AERO.2400 provide broadcasters with many unique benefits, including integrated loudness control, upmixing, Nielsen and Verance Aspect watermark encoding, Dolby AC-3 encoding, SAP audio, downmixing and automated EAS and text-to-speech audio insertion. However, these products introduce several new and important features, including dual, independent 3G SDI I/O paths to support 1080p signals, user-selectable metering and logging options for EBU R 128 or ATSC A/85, and an updated Windows 10 IoT operating system to provide additional security features.
• AERO.20 is a PCM-only processor for applications where Dolby coding and audience measurement watermarking are not required; it provides a single instance of AEROMAX loudness control and UPMAX 2- to 5.1-channel upmixing.
Two SDI I/O paths with up to 16 audio pairs, plus four pairs of AES3 I/O, are available.
• AERO.200 is similar to AERO.20, but offers either one or two instances of AEROMAX loudness control and UPMAX 2- to 5.1-channel upmixing, each user-selectable as AMX5.1 (5.1+2), AMX 2.0 (2+2+2) or AMX5x2 (2+2+2+2+2) configurations. AERO.200 also offers optional Dolby Digital Plus transcoding as well as Nielsen or Verance Aspect audience-measurement watermarking. AERO.200 can make use of both 3G SDI I/O paths with up to 16 audio pairs, plus four pairs of AES3 I/O, enabling a variety of combinations and configurations — such as one AEROMAX processing engine on the first SDI input and a second AEROMAX engine on the second SDI input — or to insert local affiliate program audio from a different SDI input than the network audio feed.
• AERO.2400, a 2RU device with a large front-panel colour LCD, controls and headphone output, delivers all of the advanced features found in AERO.200 with even more I/O versatility, providing dual 3G SDI I/O paths with up to 16 audio pairs, plus eight pairs of AES3 I/O.
Telos Alliance
Telos Infinity Virtual Commentator Panel for Infinity VIP Virtual Intercom System
The Telos Infinity Virtual Commentator Panel (VCP) is a new, unique extension to the popular Telos Infinity VIP IP-based virtual intercom platform. VCP enables remote contributors to access commentary or announcer stations from anywhere in the world via the VIP intercom platform on any PC, tablet or smartphone with a modern HTML5 browser, or via the Telos Infinity VIP app for iOS and Android devices.
VCP replicates the physical commentator panels and announcer stations familiar to voice artists, commentators and radio and television reporters, making VCP instantly usable with virtually no learning curve. Features include the ability to mute the on-air microphone feed when using comms or speaking “off air,” and to create a custom monitor mix of IFB, program, mix-minus and aux sources — each with individual gain controls — all while still using dedicated VIP intercom keys.
FEATURES INCLUDE:
• Full-time on-air output path with separate on-air and intercom mic channels
• Automatic muting of on-air channel when comms channels are in use
• Programmable inputs with individual rotary gain controls for IFB, program, mix-minus and aux sources for easy creation of custom monitor mixes
• Simple add-on to existing Infinity VIP installations
• Seamless integration with Elgato Stream Deck + through a dedicated Stream Deck plug-in, for applications where a hardware panel is desired
Telos Infinity Virtual Commentator Panel is available as a licensed option on Telos Infinity VIP systems running v3.0 software or later, in 4-, 8- and 12-button configurations.
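The automatic on-air muting in the feature list can be pictured as a simple routing rule: the on-air channel is open only while no comms key is held. The sketch below is a conceptual illustration with invented names, not Telos code.

```python
class CommentatorPanel:
    """Toy model of a panel with separate on-air and intercom paths."""

    def __init__(self):
        self.active_comms_keys = set()  # intercom keys currently held

    def press_comms_key(self, key):
        self.active_comms_keys.add(key)

    def release_comms_key(self, key):
        self.active_comms_keys.discard(key)

    def on_air_mic_open(self):
        # The on-air channel is muted whenever any comms channel is in
        # use, so "off air" talkback never leaks into the program feed.
        return len(self.active_comms_keys) == 0


panel = CommentatorPanel()
assert panel.on_air_mic_open()          # idle: mic feeds the on-air path
panel.press_comms_key("producer")
assert not panel.on_air_mic_open()      # talking to producer: on-air muted
panel.release_comms_key("producer")
assert panel.on_air_mic_open()          # back on air automatically
```

The same rule generalizes to any number of keys: the on-air feed reopens only when the last comms key is released.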
Vizrt
TriCaster Mini S
In the rapidly evolving landscape of live video production, accessibility and quality often seem at odds. TriCaster Mini S bridges this gap by offering a professional-grade solution that democratizes high-quality production. It combines the power and prestige of the TriCaster brand with a flexible, software-based approach that can be deployed on hardware that suits any budget. This blend of capability and flexibility makes TriCaster Mini S a unique offering, unmatched in the market.
TriCaster Mini S is designed for a diverse audience — streamers, podcasters, corporate communicators, educators and more. What these users have in common is the need for a reliable, professional production tool that doesn’t require a significant upfront investment or steep learning curve. TriCaster Mini S meets this need by offering a subscription-based model with an unparalleled customer experience that includes support, updates and a powerful graphics system. This ensures that users, whether they’re just starting out or are seasoned professionals, can deliver the high-quality content audiences expect, no matter the streaming platform.
What sets TriCaster Mini S apart is its user-centric design. Unlike traditional hardware-based solutions that can be cost-prohibitive, Mini S is a purely software-based platform that can be deployed on certified hardware or even existing systems. This flexibility means users aren’t locked into
expensive hardware, making it easier to scale production capabilities as needs evolve.
Furthermore, TriCaster Mini S includes features typically reserved for high-end systems — such as 4Kp60 quality, eight live inputs and advanced switching capabilities — all available at an accessible subscription price.
This positions Mini S as a powerful tool for those who need high-quality production but don’t have the resources to invest in a full TriCaster system. The seamless integration with social-media platforms and the ability to produce interactive, engaging content further enhance its value.
TriCaster Mini S isn’t just another production tool: It’s a revolution in making professional video production accessible to all. Its software-based design breaks down financial barriers, helping more creators access the tools they need to produce content that rivals productions with higher budgets and resources.
By including top-tier features and industry-leading support as part of the subscription, TriCaster Mini S ensures users aren’t just equipped to meet today’s challenges but are also prepared for the future.
The TriCaster Mini S redefines the balance between quality, accessibility and affordability. It’s a product that acknowledges the diverse needs of modern content creators and offers a solution that’s both powerful and practical.
In a crowded market, the TriCaster Mini S stands out for delivering professional results without the professional price tag. More than just a product, it’s a strategic tool designed with the user in mind, empowering them to achieve top-tier production quality regardless of budget or experience.
Offering a flexible, affordable, and feature-rich solution, the TriCaster Mini S leads the next generation of live production tools. By focusing on the needs of creatives, it enables them to push their work to new heights, delivering the highest-quality results with a powerful and adaptable solution — all without compromising on cost.
Wheatstone Corp.
Strata Mixing Console With Remote Virtual Mixing
WHEATSTONE MIXES IT UP WITH VIRTUAL AND FIXED CONSOLE COMBO FOR TV NEWS
At IBC2024, Wheatstone paired its fixed Strata console surface with the portability of its Remote Strata virtual mixer for today’s highly collaborative television news team. With full interactivity between tactile mixing in the studio and virtual mixing on a touchscreen, including real-time fader tracking between the two, the combination offers an independent-yet-shared user experience for teams in split locations.
Strata is equally at home in newsrooms, remote vans or sports venues as a compact IP-audio console with 32 physical faders that can be layered for 64 channels, as well as features like automixing, multitouch navigation for adjusting EQ curves, filtering and more.
Remote Strata is a full-featured virtual mixer for the laptop or touchscreen that is similar in feel and function to the Strata fixed console, with familiar buttons, knobs and navigation. As a software replication of the Strata console, it provides full remote control over mixing functions from a laptop.
Both Stratas provide direct connectivity into major production automation systems with all mixing and processing done by a powerful audio mix engine, the Blade 4TVE. The system is AES67/SMPTE ST 2110-compatible as part of the WheatNet IP audio network and interfaces with existing intercoms or wireless microphones.
An optional 4RU StageBox One is available for centralizing and adding I/O anywhere in the studio facility, effectively replacing hardware and/or having to run microphone feeds over long distances.
Specialized WheatNet IP I/O Blades are also available for managing MADI I/O or for de-embedding audio from HD-SDI streams.
Witbe
Mobile Automation
Witbe’s new Mobile Automation technology allows video-service providers to directly test and monitor their video streaming apps running on mobile devices from anywhere in the world. Developed with the company’s commitment to accessibility and a simple installation process in mind, Witbe’s Mobile Automation setup enables providers to plug in real mobile devices and begin automatically testing any video app on them. This innovative plug-and-play technology empowers companies to accurately measure and improve their Quality of Experience (QoE) on the same devices their viewers use, allowing them to deliver optimal video quality and stand out in today’s highly competitive video-streaming market.
Witbe’s Mobile Automation technology is the only product on the market capable of accurately measuring video quality and traffic usage for any video app or stream, including DRM-protected content, directly on unmodified real devices without relying on reference-based testing. It can analyze video quality as a human tester would, evaluating picture and audio by the same standards used every day. With the results from these automated tests, video service providers can adjust and improve the QoE their viewers receive. It has already been instrumental in improving quality and network efficiency for Telefónica, Vodafone and Meta.
The technology is housed within a sleek, industrial box that includes a Witbox and two mobile devices. It can automatically test any smartphone, whether running on Android or iOS operating systems, and is future-proofed with enough space for extra-large models. The zero-friction installation process consists of simply plugging the Mobile Automation setup into power and a network. No external cabling is required, and the entire system can be running in minutes. These innovations empower video-service providers and mobile-network operators to continuously test their services in the field without having to dispatch costly resources in different markets.
Witbe’s Mobile Automation has paved the way for the company’s new Quality of Experience and Video Traffic Optimization technology, designed to strike an ideal balance between streaming video quality and bandwidth usage. With
data consumption significantly rising and set to expand in the future, mobile-network operators need to effectively manage limited bandwidth resources right now. At the same time, video-service providers need to stream their videos at a quality that satisfies viewers without being throttled by networks or consuming too many resources.
Witbe’s technology helps teams resolve this issue by providing an accurate measurement of a video app’s data usage across video resolutions, app builds, devices and networks, both on its own and compared directly with competitors. With this valuable information, teams can optimize their app’s video settings and bandwidth usage to stay competitive, reduce energy consumption and offer their viewers the best QoE possible. It has already proven instrumental in delivering tangible improvements between operators and providers, as seen in the Telefónica/Meta and Vodafone/Meta results.
Witbe’s Mobile Automation setup and Quality of Experience and Video Traffic Optimization technology empower video-service providers to navigate network throttling with ease while maintaining a strong mobile app performance. In a crowded video market, that’s a must.
Wohler Technologies
MAVRIC
Wohler Technologies introduces MAVRIC, a cutting-edge, cloud-based remote monitoring solution that builds on the legacy of the company’s renowned in-rack monitoring products. MAVRIC empowers operators with the ability to monitor, manage and maintain the highest-quality studio feeds from anywhere, utilizing an innovative iPhone/Android app or web browser interface.
This advanced solution allows production teams to seamlessly monitor alerts and alarms, watch and listen to audio and video services, and communicate with one another to resolve issues swiftly and effectively.
MAVRIC is designed with the goal of ensuring production teams can maintain optimal quality and performance across all studio feeds.
Wohler Technologies is synonymous with reliability, offering agnostic I/O and format-monitoring solutions that support an extensive range of standards, including ST 2110, ATMOS, Dante, Ravenna, AES67, analog, MADI and NDI. MAVRIC continues this tradition, providing comprehensive support for every I/O and format, making it an indispensable tool for modern production environments.
Xperi/TiVo
TiVo Broadband
Over the span of 25 years, TiVo has turned challenges into triumphs through strategic innovation. A pivotal moment in this journey was the team’s response to the burgeoning threat of cord-cutting, a dynamic shift in consumer behavior. Viewing cord-cutting as an opportunity rather than a setback, TiVo strategically realigned its focus, resulting in the creation of TiVo Broadband. This cutting-edge video solution, tailored for broadband-only consumers, showcases TiVo’s forward-thinking approach. TiVo Broadband boasts a turnkey platform with a visually rich interface, offering seamless access to a myriad of streaming content, effectively addressing the needs of cord-cutting audiences.
THIS SOLUTION OFFERS:
• A unique opportunity to own the home screen and generate new revenue streams through targeted advertising and enhanced user engagement.
• A comprehensive entertainment solution that addresses a key pain point for operators: Reducing broadband subscriber churn.
With TiVo Broadband, broadband-only customers receive an enhanced video streaming experience with personalized super-aggregation that seamlessly integrates content from TiVo+ — a free content network with over 160 channels and more than 100,000 free movies and TV shows, including local, live television and free ad-supported television (FAST) channels like those offered by Pluto TV — and favorites like
Netflix, Amazon Prime, Disney+ and more.
In addition, this new solution addresses two major challenges for service providers: boosting revenue and customer loyalty and reducing subscriber churn.
Leveraging TiVo’s acclaimed universal content search, discovery and recommendations interface, our end-to-end customizable solution offers flexibility and functionality that competitors simply can’t replicate. With TiVo Broadband, service providers can now personalize their service’s user interface, highlight their exclusive channel lineup and seamlessly integrate their IPTV offerings.
Pay-TV providers can utilize TiVo Broadband to own the home screen by delivering a solution that allows consumers to seamlessly transition back and forth between TiVo Broadband and IPTV through one home screen experience. Service providers can also help consumers resubscribe and prevent users from cutting the cord altogether.
Cord-revivers, or consumers who resubscribe to their pay-TV service, make up almost 30% of pay-TV customers. They come back because they couldn’t find all the content they wanted in one place and are looking for that bundle experience. By giving them this flexibility with TiVo Broadband, pay-TV providers can easily capture an even higher percentage of this audience.
In fact, more would opt in and stay with a pay-TV service if they didn’t have to sign a long-term contract. If pay-TV providers look to meet these customers where they are, they’ll open more doors to users coming to their service and prevent subscribers from cutting the cord altogether.
By giving users this flexibility, providers will keep more consumers happy and unlock a new market they may have not previously reached. Pay-TV providers will be able to stay even more relevant in the streaming wars. They’ll provide immense value to consumers and benefit their bottom line as a business by creating a new monetization axis on the home screen in this next frontier of television.
Zixi
Software-Defined Video Platform (SDVP)
The Zixi Software-Defined Video Platform (SDVP) offers a cost-effective, flexible and modular solution for live video delivery, empowering media organizations to efficiently manage broadcast-quality live video across various networks, protocols, clouds and edge devices. With unparalleled performance, ultra-low latency, carrier-grade reliability and comprehensive security, the SDVP is trusted by leading media brands like Paramount+, Fubo, Apple TV and others, along with over 400 integrated OEM and service-provider partners. The SDVP integrates five primary components to enable resilient delivery of broadcast-quality live video workflows:
1. Zixi Edge Compute (ZEC) and Zixi Protocol provide enhanced end-to-end delivery and visibility, leveraging the award-winning Zixi protocol with support for over 18 industry protocols and containers.
2. Zixi Broadcaster serves as a universal video gateway, enabling live video transport over any IP network while managing protocols, transcoding, format conversion and advanced transport analytics.
3. ZEN Master is the unified cloud-video orchestration and telemetry control plane, orchestrating infrastructure, managing scale and providing real-time monitoring through optimized dashboards, enhancing operational efficiency.
4. The Intelligent Data Platform (IDP) leverages advanced analytics and machine learning algorithms to deliver multi-object correlation analysis and automated incident responses.
5. Zixi Live Transcode integrates high-performance live transcoding workflows into the delivery process, automatically adjusting processing to maintain quality and latency priorities for high-profile customers.
TECHNOLOGICAL ADVANCEMENTS:
1. High-Performance Networking: Leveraging DPDK to achieve remarkable throughput efficiency improvements while utilizing only a fraction of the processing power. This allows a Zixi Broadcaster to deliver three times the amount of Zixi traffic versus previous versions, reducing the associated compute costs by two-thirds. Open-source alternatives require three times the compute, and therefore nine times the compute cost, versus Zixi.
2. Dynamic Latency: The SDVP incorporates Dynamic Latency functionality, enabling the optimization of live IP video in real time based on prevailing network conditions. This adaptive feature automatically adjusts latency to attain the lowest possible levels while ensuring dependable stream delivery.
3. Error Concealment: Zixi introduces Error Concealment, a mechanism that replaces erroneous video frames with skip frames while maintaining audio synchronization. This enhancement significantly improves downstream decoding and playback of video streams, particularly beneficial for users deploying IRDs when faced with packet-loss issues.
4. Timeline Normalization: The SDVP’s Timeline Normalization feature rectifies discrepancies in the transport stream introduced during upstream encoding and processing. This optimization is particularly valuable for sensitive IRDs, vastly improving stream ingest and downstream decoding and playback.
5. Smooth Live Midroll Insertion: Enabling seamless insertion of advertisements, fixed-length slates or blackouts directly into live-stream output, this feature simplifies the process for users seeking to incorporate ads or promos in the cloud during live event streaming.
6. Dynamic HTML Overlays: With Dynamic HTML Overlays, broadcasters can effortlessly overlay graphics like scoreboards or news tickers onto live video streams in the cloud, eliminating the necessity for additional on-premises hardware. This feature facilitates a smooth transition to the cloud, providing broadcasters with flexibility and simplicity in overlaying graphics without the complexities associated with extensive production switching and editing software.
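As an illustration of the Error Concealment idea in point 3, replacing damaged video frames with skip frames while leaving timestamps (and therefore audio sync) untouched can be sketched in a few lines. This is a conceptual sketch, not Zixi code, and the frame representation is invented for the example.

```python
def conceal_errors(frames):
    """Replace frames flagged as corrupt with 'skip' frames that hold the
    last good picture, so the decoder never sees broken data and the
    separately carried audio stays in sync. Conceptual sketch only."""
    out = []
    for f in frames:
        if f["corrupt"]:
            # A skip frame tells the decoder to repeat the previous
            # picture for this frame's duration instead of decoding
            # damaged data; the timestamp (pts) is preserved.
            out.append({"pts": f["pts"], "type": "skip", "corrupt": False})
        else:
            out.append(f)
    return out


stream = [
    {"pts": 0, "type": "I", "corrupt": False},
    {"pts": 40, "type": "P", "corrupt": True},   # hit by packet loss
    {"pts": 80, "type": "P", "corrupt": False},
]
healed = conceal_errors(stream)
assert [f["type"] for f in healed] == ["I", "skip", "P"]
assert [f["pts"] for f in healed] == [0, 40, 80]  # timeline preserved
```

Because the timeline is untouched, a downstream IRD sees a continuous, decodable stream rather than a gap.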
AEQ SOLARIS
AEQ's new Solaris is a high-density multi-channel audio codec, specially designed for multiple STL links, broadcast networks and remote contributions. It's available with eight bidirectional stereo channels and can be upgraded in groups of eight to reach a maximum of 64 channels — all in a single 1U 19-inch rack.
Audio I/O is provided via IP using Dante/AES67, with the possibility of adding redundancy via a second Ethernet port where required. Five network ports allow the separation of control traffic, encoded audio (WAN) and local AES67/Dante audio I/O, as well as Dante network redundancy.
Audio encoding algorithms include state-of-the-art OPUS in various flavours and uncompressed PCM audio (16-bit), plus, optionally and under license, several MPEG-4 AAC modes and apt-X. Legacy G.722 and G.711 are available to ensure compatibility with any VoIP communications.
Exclusive connection tools. In addition to SIP (either server-based or serverless) and RTP, AEQ audiocodecs give you the advantage of the Smart RTP communications protocol, which simplifies your connections, while DDNS (dynamic domain name service) allows connections without the need to know destination IP addresses, even if they are dynamic.
Control via web browser and other applications. SOLARIS is controlled through a web interface with which station staff can remotely operate the unit. A set of diagnostic tools is included, such as Syslog, SNMP, CPU load and memory usage, etc.
Also, in order to achieve unified control with other AEQ audiocodecs, it can be controlled from the Phoenix Control software.
It also incorporates:
• Phone-book or Contact List: Calls can be made manually, using the internally stored callbook or from the list of last calls per channel.
• Presets: Allows the operator to save configuration details for each channel and quickly apply changes to any equipment configuration parameter, or a group of them, at any moment.
• Other tools: Configuration of installation values such as IP ports settings, local AoIP AES67 / Dante channels, maintenance, diagnostic, user management, etc.
DEVA Broadcast
DB7002 — Professional FM & DAB/DAB+ Monitoring Receiver
DB7002 is a revolutionary solution that combines in a single, high-precision tool the most important functionalities of two separate types of product. To achieve that, our engineers have equipped this model with two independent tuners — one for FM and one for DAB/DAB+, allowing for the device to perform both FM and DAB/DAB+ monitoring in parallel. It provides detailed and accurate measurement of all of the most important signal parameters and also boasts two loggers with 24 channels apiece for FM and DAB/DAB+. The recorded information is stored in log files on an SD card for easy future analysis and can easily be downloaded via a standard FTP client.
As a product that incorporates DAB/DAB+ monitoring capabilities, DB7002 fully meets the ETSI EN 300 401 DAB standard and offers support for Program Associated Data (PAD), as well as all standard bitrates and VBR, and automatically displays live metadata.
As an FM monitoring tool, DB7002 also provides powerful signal processing capabilities through sophisticated DSP
algorithms and offers a Loudness Meter, which allows for measurements to be shown as defined by ITU BS.1770-4 and EBU R128 recommendations.
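The loudness measurement referred to here (ITU BS.1770-4 / EBU R128) is an open standard whose core is mean-square energy in overlapping 400 ms blocks, gated and offset by -0.691. The sketch below is a deliberately reduced, mono illustration: it omits the K-weighting pre-filter and the -10 LU relative gate that a compliant meter must apply.

```python
import math

def block_loudness(block):
    """Loudness of one 400 ms block in LUFS (mono, K-weighting omitted)."""
    ms = sum(s * s for s in block) / len(block)  # mean square
    return -0.691 + 10 * math.log10(ms) if ms > 0 else float("-inf")

def integrated_loudness(samples, rate=48000):
    """Reduced BS.1770-style integrated loudness: 400 ms blocks with 75%
    overlap and the -70 LUFS absolute gate only. The K-weighting filter
    and the relative gate of BS.1770-4 are omitted for brevity."""
    step, size = rate // 10, rate * 2 // 5   # 100 ms hop, 400 ms window
    blocks = [samples[i:i + size]
              for i in range(0, len(samples) - size + 1, step)]
    gated = [b for b in blocks if block_loudness(b) > -70.0]  # absolute gate
    if not gated:
        return float("-inf")
    mean_ms = sum(sum(s * s for s in b) / len(b) for b in gated) / len(gated)
    return -0.691 + 10 * math.log10(mean_ms)

# One second of a full-scale 997 Hz sine measures near -3.7 LUFS
# (mean square 0.5 -> -3.01 dB, plus the -0.691 offset), ignoring
# the K-weighting a real meter would apply.
tone = [math.sin(2 * math.pi * 997 * n / 48000) for n in range(48000)]
print(round(integrated_loudness(tone), 1))  # -3.7
```

A compliant implementation adds the two-stage K-weighting filter per channel and a second gating pass relative to the ungated average, but the block/gate structure is the same.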
To make sure the transmission is not interrupted, there are local alarm options and online notifications via rear-panel alarm GPOs, email and SNMP in case there is audio loss or a change in the signal.
DB7002 can also be managed without effort. It is easy to program via the front-panel menu or remotely through your PC, tablet or smartphone via a standard web browser; iOS and Android devices are also supported. The high-resolution OLED graphical display and the ultra-bright LED bar graph indicators allow for easy reading of the main signal parameters. A user-friendly front-panel menu in addition to the set of four soft buttons provides easy navigation and quick access to each function.
DB7002 is definitely a must-have. Its two independent tuners set it apart from any other product, while its effectiveness and precision make it the perfect monitoring tool.
ENCO
SPECai Ad Creation Platform
SPECai is a game-changing ad creation platform that allows broadcast sales teams to create compelling, localized spec ads on demand within seconds in front of clients. SPECai's responsive workflow immediately delivers multiple script options for each spot and provides account managers with all the essential creative tools to build out professional-sounding spec ads, including an array of male and female voice options, voice tones and an integrated music bed library.
SPECai leverages generative AI models to create audio content and synthetic voice engines to produce natural-sounding voices for new spots. SPECai's workflow offers a straightforward path that is quick to grasp and easy to use, with an uncluttered and uniform front-end interface that is accessible on any web browser or mobile device.
The user, typically an account executive within a media organization, first answers a few questions to determine spot length (15, 30 or 60 seconds), writing style (casual, humorous, uplifting, serious, explanatory) and sponsor details (advertiser name, subject, creative notes, call to action), using a web
browser or mobile device. SPECai instantly generates three scripts to choose from, which users can manually edit or regenerate. The user then selects their preferred AI voice and can listen and adjust phonetics or tone as desired. Lastly, they search and select from multiple music beds before downloading the final down-mixed audio file. The process can be completed in seconds.
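The flow just described (brief, three scripts, voice selection, music bed, mixdown) can be modelled as a small pipeline. Every name below is invented for illustration; this is not ENCO's API, and the AI services are stubbed out as plain callables.

```python
from dataclasses import dataclass

@dataclass
class SpotBrief:
    """The questions an account executive answers up front."""
    length_sec: int        # 15, 30 or 60
    style: str             # "casual", "humorous", "uplifting", ...
    advertiser: str
    notes: str
    call_to_action: str

def build_spot(brief, generate_scripts, synthesize, music_beds):
    """Illustrative pipeline: pick a script, voice it, pair a music bed.
    The three callables stand in for the AI services the text describes."""
    scripts = generate_scripts(brief)      # three options to choose from
    chosen = scripts[0]                    # user picks and may edit one
    voice_track = synthesize(chosen, voice="warm-female")
    bed = music_beds(brief.style)[0]       # user searches the bed library
    return {"script": chosen, "voice": voice_track, "bed": bed}

# Stubbed services show the shape of the flow end to end:
brief = SpotBrief(30, "uplifting", "Maple Cafe", "new patio", "Visit this weekend")
spot = build_spot(
    brief,
    generate_scripts=lambda b: [f"{b.advertiser}: {b.call_to_action}!"] * 3,
    synthesize=lambda text, voice: f"<audio:{voice}:{len(text)} chars>",
    music_beds=lambda style: [f"{style}-bed-01"],
)
assert spot["bed"] == "uplifting-bed-01"
```

The point of the shape is that every stage is fast and local to the sales call, which is what lets the client approve the spot in the moment.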
At IBC, ENCO, in collaboration with SPECai partners Benztown and Compass Media Networks, introduced a first-of-its-kind voice cloning feature that gives users even more customization, diversity and choice in creating high-quality, high-impact spec spots for clients. The feature allows users to easily clone voices they have the authorization to use in creating a spec spot. SPECai replicates the voice of the client, on-air talent or other presenter upon analyzing a user-uploaded audio sample, which users can then customize to suit the intended tone or emotion.
SPECai's quick-producing workflow means that clients can approve ads in the moment, eliminating the time-consuming creative, editing and approval processes associated with traditional ad production processes. SPECai flips the script of that time-worn model to deliver a highly efficient workflow that delivers immediate results.
Since the SPECai application lives in the cloud and is easily accessed through a standard web browser, account managers can bring SPECai's creative powers directly to the client, produce multiple options for consideration, and even actively involve the client in the creative process. SPECai simply allows radio broadcasters to perform their jobs faster and in a much simpler manner, while offering greater potential for success and customer satisfaction.
Jutel Oy RadioMan Sports Radio and Lamppu
Jutel introduces an innovative new Sports Radio solution with RadioMan6 and ClipperAI cloud-based services. This solution simplifies the creation of sports-event and team-focused radio stations for FM, web broadcasts and team websites. It's ideal for individuals, teams or media service providers managing operations for entire leagues.
Built on the cloud-based RadioMan6 system, Sports Radio allows simultaneous use from multiple locations via browser-based interfaces with audio devices. The station can operate continuously or be activated specifically on game days, as demonstrated by Ilves Radio in Finnish ice hockey, which provides content to fans on those days.
The Sports Radio solution also fosters collaboration between teams, enabling them to share real-time programs, audio clips and publications, such as recordings of exciting moments, from other teams to enhance their broadcasts. The system can be deployed as a cloud-based solution or in-house setup with IP-based studio complexes while still supporting mobile and remote operations.
The Sports Radio solution is compatible with laptops, tablets and the RadioMan Lamppu mobile connection. The lightest audio setup can be a headset with a microphone and a USB-based audio device. Each user has access to a comprehensive, full-range suite of tools for radio production, from the browser-based multitrack ClipperAI audio editor to program planning, music selection, media publishing and On-Air broadcasting.
RadioMan Sports Radio supports multiple production points operating simultaneously, with On-Air controls allowing seamless communication and participation. The RadioMan Lamppu mobile app is a recent addition, enabling journalists to contribute live to broadcasts, such as conducting live interviews with athletes. For example, a journalist can play music through the RadioMan On-Air application and then move to the sidelines to interview athletes using the Lamppu app. When the single-button broadcast function in the Lamppu app is activated, the music automatically fades out, seamlessly transitioning to the interview. After the interview, the music resumes automatically.
Sports Radio integrates with the ClipperAI mobile recording and editing environment, allowing journalists to capture and edit interviews directly in the field. These recordings can be edited on the spot using the ClipperAI app on their phone or with the browser-based ClipperAI multitrack editor. Edited audio files are saved into the RadioMan database and are ready for immediate broadcast. The system's publishing features also enable instant publication of audio files and related text on team websites.
ClipperAI includes AI-based features that simultaneously convert audio into text, streamlining browsing and editing within the ClipperAI multitrack editor.
As you navigate the text, the cursor on the audio track follows the cursor's position in the text, ensuring seamless synchronization. Additionally, the audio editor supports automatic speech generation, enabling the addition of relevant comments to the audio story through speech synthesis.
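The cursor tracking described above reduces to a lookup from a character position in the transcript to a per-word timestamp of the kind speech-to-text engines return. The sketch below is an illustration with an invented data layout, not Jutel's implementation.

```python
def audio_position(transcript_words, char_index):
    """Map a cursor position in the transcript text to a time (seconds)
    in the audio track, using per-word (word, start, end) timestamps.
    Illustrative sketch only."""
    pos = 0
    for word, start, end in transcript_words:
        if pos <= char_index < pos + len(word):
            return start        # cursor is inside this word
        pos += len(word) + 1    # +1 for the separating space
    # Past the end of the text: snap to the end of the last word.
    return transcript_words[-1][2] if transcript_words else 0.0


words = [("goal", 0.0, 0.4), ("by", 0.5, 0.6), ("Koivu", 0.7, 1.2)]
# Transcript text: "goal by Koivu". Placing the cursor on the "K"
# (character index 8) jumps the audio playhead to 0.7 s.
assert audio_position(words, 8) == 0.7
assert audio_position(words, 0) == 0.0
assert audio_position(words, 99) == 1.2  # past the end: end of audio
```

The reverse mapping (audio time to text position) works the same way over the start/end fields, which is what keeps the two cursors in lockstep while editing.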
Broadcasting and audio mixing are managed by the RadioMan Media Node, which can be hosted in the cloud or a physical studio. The latest Media Node version includes an integrated audio processor that provides audio compression and level adjustment, ensuring high-quality sound for both FM and streaming broadcasts.
Orban
ORBAN OPTIMOD 5750 HD
Cost-effective independent processing for HD2 channels is now available with the introduction of the OPTIMOD 5750 HD from Orban Labs. Based on the popular OPTIMOD 5750 processor, this new 1RU hardware unit offers processing for FM analogue as well as DAB+, streaming, HD1 and HD2. FM/HD1 channels can be blended or independent, and HD2 can also be managed independently. Four processing structures are available in the OPTIMOD 5750: Five-Band; Low-Latency Five-Band; Ultra-Low-Latency Five-Band; and Two-Band. The product also provides intelligent two-band window-gated AGC; RDS/RBDS generation; remote control/monitoring via HTML5 web browser; SNMP V2; diversity delay; silence detection; Orban's exclusive Less-More control; multiple factory presets and more.
RFE Broadcast
XC-Series
XC-Series is RFE's most recent automatic changeover unit for N+1 transmitter system management. It can be factory-configured in structures from 2+1 (XC-02) up to 6+1 (XC-06) transmitters; the system monitors the transmitters and, if one of the main transmitters faults, replaces it with the reserve one according to priority rules. An example of a 5+1 configuration (XC-05) is shown in the accompanying picture.
One of the most innovative traits of the RFE XC-Series is its full compatibility with the most common RF coaxial switches, making the XC-Series suitable for use with non-RFE-branded systems as well.
This automatic changeover unit features Linux-based software and a large 7-inch colour LCD touchscreen for intuitive operation. A notable feature of the touchscreen menu is Quick-Menu-Quick-Settings (QMQS), which provides direct access from the main page to the most frequently used settings, with no time wasted searching for functions in submenus.
The main page also shows the N+1 system synoptic, letting users see the system's working condition at a glance and manage the priority policies for reserve-transmitter response.
Like all RFE products, the XC-Series includes a remote management system via web server, facilitating remote control as well as software updates, which can also be performed via USB port.
The SNMP v2c option is included in this changeover unit, featuring improvements in performance, flexibility and security.
Concerning the hardware, the XC-Series is compact: all versions are 4RU on a standard 19-inch chassis, making it easier for users who want to upgrade a system, and the unit weighs only 3 kg.
The interior layout follows the same concept behind all RFE products: simplicity and state-of-the-art design. Better than ever, this new automatic changeover also carries an onboard micro UPS to protect the unit from even the smallest power interruptions or voltage spikes.
RFE — Making Broadcast Smarter.
Shure
MoveMic Wireless Microphone System
Shure's MoveMic Wireless Microphone System represents the pinnacle of mobile audio capture technology, delivering the world's smallest and best-sounding dual-channel direct-to-phone wireless lavalier solution. Through a completely bespoke design, MoveMic shatters limitations by encapsulating Shure's legacy of sound excellence into a microphone no larger than a USB drive. For the first time, users can effortlessly capture dual channels of high-fidelity, broadcast-quality audio with minimal background noise directly onto their smartphones. With content creators, videographers and mobile journalists at the heart of its design, MoveMic makes broadcast-quality audio accessible to anyone, anytime and anywhere.
The genesis of MoveMic was rooted in the desire to deliver broadcast-quality audio through a seamless, direct-to-phone wireless solution. In the early stages of product development, we placed dozens of 3D-printed model microphones in the hands of mobile journalists and videographers to determine the most preferred size and shape. The feedback was unanimous: they craved a lavalier microphone with the tiniest footprint, free from the bulk of traditional receiver packs and tangled wires.
Determined to meet this challenge, Shure's engineering team leveraged the brand's almost 100 years of experience and set out to integrate cutting-edge audio capture technology and dual-channel recording capabilities into a device measuring 46 mm x 22 mm. After five years of rigorous design and engineering, Shure achieved the extraordinary. MoveMic's patent-pending wireless software and bespoke acoustic design deliver high-fidelity, broadcast-quality voice recordings with minimal background noise, effective up to 100 feet from the receiver.
Life happens fast, and MoveMic's simplicity and versatility are designed to keep up. MoveMic's direct-to-phone capability and automatic device reconnection give users the power of instant, professional-grade audio recording in the palm of their hand. With an optional MoveMic Receiver that extends device compatibility and the free MOTIV Audio and Video apps offering advanced audio settings and live-streaming capabilities, Shure provides users unparalleled control over their content creation. Paired with MoveMic's custom acoustic design and an IPX4 rating, users are guaranteed uncompromised audio quality, even in the most dynamic and demanding environments.
Moreover, consistent with Shure's values, MoveMic was designed with the real-world needs of its users in mind. Offering direct-to-phone connectivity in both single- and dual-channel configurations, users have the freedom to tell their stories in the way they want to. This flexibility ensures that every voice, every story, is given the platform it deserves. Designed to be practically invisible, MoveMic offers unprecedented concealment and portability, eliminating the distractions of historically bulky wireless microphones. With MoveMic, battery life is not a concern; each microphone offers eight hours of recording on a single charge, plus two additional charges in the carrying case, giving creators an entire day of premium audio capture in their pocket.
In a world where the clarity of the message is as crucial as the story itself, MoveMic by Shure stands as a testament to a century of audio innovation.
Telos Alliance
Axia StudioCore and StudioEdge
Telos Alliance debuted two new in-studio AoIP products at IBC2024: the Axia StudioCore integrated console engine, and StudioEdge I/O device.
StudioCore is the new integrated engine for Axia iQ, Radius, RAQ and DESQ mixing consoles, replacing the long-running QOR.32 engine. It combines a 24-channel mixing engine, built-in Ethernet switch, internal power supply and CAN bus card with multiple channels of digital, analogue and Livewire+ AES67 I/O.
StudioEdge is a high-density I/O device designed to complement the Axia xNode family of products. It can be used as an all-in-one I/O solution in control rooms of any size, as a compact endpoint in Quasar SR- and XR-equipped studios, or as an ingest station or routing and monitoring solution in TOCs and machine rooms.
StudioCore and StudioEdge share a common hardware platform designed for maximum flexibility and I/O capacity.
FEATURES INCLUDE:
• Built-in 5-port Ethernet switch with PoE
• 5-inch color IPS LCD touchscreen display
• Internal power supply with optional second internal PSU available
• Four selectable mic/line inputs, eight line inputs, eight line outputs, three digital inputs/outputs configurable as AES/EBU, USB Audio, or S/PDIF, two amplified headphone outputs with independent DACs and four GPI/O ports.
• Built-in audio file player via USB data port
• Eight-output monitor matrix system
• Livewire+ AES67 stream capacity of 32 inputs and 32 outputs
StudioCore and StudioEdge will be available beginning in Q4 of 2024.
Telos Alliance
Omnia Forza FM Audio Processing Software for FM+HD
Omnia Forza FM, the latest member of the Forza family of software processors, made its European debut at IBC2024. Like Omnia Forza HDS for HD, DAB and streaming audio, Forza FM is delivered as a software container that can be run on an on-premises COTS server or on a cloud-hosted platform such as AWS, bringing Omnia-grade five-band FM audio processing to the virtual realm. Forza FM is also available pre-installed on the Telos Alliance AP-3000 audio platform for turnkey deployment of single or multiple software instances.
On the front end, Forza FM provides five bands of audio processing, with all-new wideband and multiband AGCs and limiters optimized to address the unique challenges of FM. On the back end, the Frank Foti-designed Silvio clipper from Omnia.11 and an integrated stereo generator ensure the clean, dial-dominating sound Omnia is celebrated for. Outputs include composite over IP and µMPX for the analog FM signal, which also includes a diversity delay, plus a peak-limited L/R output for HD-1 or DAB. Nielsen and Kantar options provide integrated audience measurement watermarking.
Omnia Forza FM boasts a unique single-page HTML5 user interface that is intuitive and straightforward, yet comprehensive. Its "smart controls" adjust multiple parameters with a single control, enabling audio processing novices to easily deliver stellar on-air sound to their listeners, while providing processing experts with the individual fine-tuning controls they expect.
Forza FM is offered with both subscription and buyout options. Subscriptions include TelosCare PLUS, which provides software and feature updates at no additional cost; the buyout license includes one year of TelosCare support.
Telos Alliance
Telos VX Duo Broadcast VoIP System
Broadcasters around the world rely on Telos VX VoIP phone systems to ensure that on-air calls sound their best. VX Duo, the latest addition to the VX product family, brings broadcasters the smooth user experience and fantastic audio they have come to expect, at a price point that puts broadcast-quality VoIP within the reach of any budget.
VX Duo is a compact, silent device designed for in-studio use. It can be placed on any studio or control room surface. Or, for mounting in TOCs or equipment rooms, three units fit neatly side-by-side on a single rackmount shelf.
VX Duo comes with two channels/hybrids and is expandable to eight channels/hybrids in two-channel increments via expansion licenses. In base form, it's the perfect two-line phone system for a single studio; with additional channels, it's an excellent solution for multiple studio facilities, multiple stations in a single facility, or split between control rooms and production studios.
BENEFITS OF VX DUO INCLUDE:
• Easy installation and configuration, with simple, straightforward connection to high-quality cloud VoIP providers.
• Dual Ethernet ports: one for direct connection to Livewire+ AES67 studio infrastructure; a second for connection to VoIP services.
• Simple user control via familiar, full-featured Telos VSet phones, integrated console call controllers, and VX-compatible call screening software.
• Integrates with powerful Axia Pathfinder Core Pro broadcast controller software to facilitate HTML5-based control panels using in-studio PCs or tablets.
Telos VX Duo debuted at IBC2024 in Amsterdam and began shipping in Q3 2024.
Telos Alliance
Telos Zephyr Connect High-Density Codec Gateway
Telos Zephyr Connect is a new software-based multi-codec gateway that enables broadcasters to transport multiple channels of stereo, mono and dual-mono audio across IP networks. Zephyr Connect provides the functionality and features of the Telos iPort hardware platform, but with increased flexibility and scalability. In base form, Zephyr Connect provides two bidirectional codec channels; it is expandable in one-channel increments up to 64 channels per instance, with each codec independently configurable.
Zephyr Connect is ideal for transmission of multiple linear or MPEG channels over VPNs, satellite, Ethernet radio and telco/ISP services such as SD-WAN, MPLS or legacy data links. It can be used as a network distribution system, a multichannel link to remote studios, or a studio-to-transmitter link. Pair Zephyr Connect with an appropriate server for streaming audio, or pass audio and GPIO with a QoS-enabled IP link between studios on a Livewire network using Telos Alliance xNodes. An optional Content Delay feature enables delayed playout of select received audio channels plus synchronized GPIO and ancillary data.
FEATURES INCLUDE:
• Up to four redundant IP stream destinations per encoder
• Unicast UDP and TCP or UDP multicast stream types, independently configurable per WAN stream
• Genuine Fraunhofer IIS encoding algorithms, including AAC, AAC-HE, AAC-HEv2, AAC-LD, MP3 and MP2 in a variety of bitrates
• Optional Enhanced aptX (E-aptX) encoding
Zephyr Connect can be deployed on-premises on a COTS (commercial off-the-shelf) server or cloud-hosted. It debuted at IBC2024 in Amsterdam and began shipping in Q3 2024.
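The redundant-destination feature listed above amounts to dispatching the same encoded payload to several receivers, so the loss of one path does not interrupt the feed. A minimal sketch of the idea follows; the addresses are placeholders and this is not Zephyr Connect's actual transport code.

```python
# Illustrative sketch of redundant stream destinations: each payload
# is sent to every configured receiver over UDP. Addresses below are
# hypothetical TEST-NET placeholders.
import socket

DESTINATIONS = [
    ("192.0.2.10", 9000),  # primary (placeholder address)
    ("192.0.2.11", 9000),  # backup (placeholder address)
]

def send_redundant(payload: bytes, destinations=DESTINATIONS):
    """Send the same datagram to every destination; return copy count."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for dest in destinations:
            sock.sendto(payload, dest)
    finally:
        sock.close()
    return len(destinations)
```

A receiver that sees duplicate or reordered packets would deduplicate by sequence number; that bookkeeping is omitted here for brevity.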
Wheatstone Corp.
WheatNet Blade 4 AoIP Unit
Blade 4, Wheatstone's fourth-generation WheatNet IP I/O unit, continues to build on the intelligent AoIP network with the addition of new transport protocol Reliable Internet Stream Transport (RIST).
Unique to Blade 4 is its complete AoIP toolset of audio processing, codecs, mixing, routing, control and operating system in one rack unit. Included are optional dual OPUS audio codecs for streaming between studio facilities, home studios or transmitter sites, and an updated CPU with GPU graphics acceleration for running customized scripts, apps and virtual interfaces directly on the Blade 4 itself.
Now with the addition of Reliable Internet Stream Transport (RIST), an open-source transport protocol developed for reliable transmission of video and audio in real time, Blade 4 continues to build on the intelligent network for combining studio facilities, creating new workflows, and eliminating costly studio hardware.
RIST adds enhanced network security and reliability, as well as lower-latency, higher-quality audio streaming across the public internet, where links are less reliable and distance adds more delay. RIST is based on established protocols widely adopted by the broadcast industry. It uses the interoperability profiles found in VSF TR-06-1 and VSF TR-06-2, the technical recommendations of the Video Services Forum (VSF), which give it important features such as link bonding, forward error correction, seamless switching and many of the specifications found in SMPTE 2022-1.
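The forward error correction mentioned above works, in essence, by sending parity packets alongside the media packets so a receiver can rebuild a lost packet without retransmission. The toy single-parity example below illustrates only the core XOR idea behind SMPTE 2022-1-style schemes; it is not the actual RIST wire format, which uses a richer row/column arrangement.

```python
# Toy XOR-parity FEC: one parity packet protects a group of
# equal-length media packets, so any single loss in the group can be
# rebuilt. Illustrative only; not the RIST/SMPTE 2022-1 wire format.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    """XOR all packets in the group into one parity packet."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return parity

def recover(received, parity):
    """received: the group with exactly one entry set to None (lost)."""
    lost_index = received.index(None)
    rebuilt = parity
    for i, p in enumerate(received):
        if i != lost_index:
            rebuilt = xor_bytes(rebuilt, p)
    return rebuilt

group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(group)
damaged = [b"pkt0", None, b"pkt2", b"pkt3"]  # packet 1 lost in transit
print(recover(damaged, parity))              # b'pkt1'
```

The cost is one extra packet per group; real deployments tune group size (and add interleaving) to match expected loss patterns.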
This fourth-generation Blade is fully AES67-compliant for interoperability with a wide range of AES67 devices and supports SMPTE ST 2110, including NMOS discovery.
Blades are the I/O units that make up the WheatNet IP audio network of 200+ interconnected elements to choose from, all engineered, manufactured and supported by Wheatstone.
Wheatstone Corp.
Automation Plug-In for Entry-Level DMX AoIP System
INTRODUCED AT IBC2024, ENTRY-LEVEL AUDIOARTS AOIP NOW WITH AUTOMATION CONTROL PLUG-IN
Wheatstone's entry-level AoIP console system is taking plug-and-play one step further with a new control plug-in for existing playback and automation systems.
The DMX console system, marketed under the company's Audioarts value brand, includes a console surface and mix engine with local I/O and a five-port Ethernet switch for just about any studio configuration, from a standalone on-air desk to a two-studio air/production configuration, up to a large, multi-studio network via WheatNet IP audio.
Now with the addition of the WheatNet IP automation control protocol (ACI), the DMX console system is plug-in-ready for full control of major playout and automation systems as well as WheatNet IP elements and other third-party partner brands.
ACI is Wheatstone's proprietary control protocol for its intelligent WheatNet IP studio network, in use by large studio operations around the globe. With it, DMX studios now have full playout and automation control directly from the DMX console and can add touchscreen panels using WheatNet IP ScreenBuilder.
"ACI adds more scalability for those budget studios that are companions to a larger studio down the hall, or for those one or two studio operations that could benefit from the flexibility of ScreenBuilder," said Jay Tyler, Wheatstone's director of sales.
Available in an 8- or 16-fader frame, the DMX console surface has full EQ and dynamics functions with OLED display on every channel.
Made by Wheatstone under its Audioarts Engineering brand, the DMX console system is engineered, manufactured and supported by Wheatstone.
WorldCast Systems
APTmpX Software
WorldCast Systems proudly announces the introduction of the APTmpX Software, an affordable and efficient enhancement to its renowned APT range of codecs. Built on the award-winning APTmpX algorithm for MPX/composite transmission, the APTmpX Software represents a significant advancement in the broadcasting industry. The software is designed for seamless integration with leading third-party software sound processors, providing broadcasters with a versatile and cost-effective solution.
PRODUCT OVERVIEW
The APTmpX Software is a vital tool for modern broadcasters seeking a high-performance MPX solution that is both simple to implement and economically accessible. Designed for effortless installation on Windows platforms, APTmpX Software ingests MPX over AES67 and provides seamless integration into existing studio infrastructures. The software delivers superior signal quality with the highest transparency at bitrates of 300/400/600 kbps, ensuring crystal-clear signal transmission.
One of the standout features of the APTmpX Software is WorldCast's exclusive SureStream technology. This innovation guarantees reliable and uninterrupted MPX transmission through packet redundancy and the lowest latency, ensuring that broadcasters can maintain high-quality broadcasts without interruption.
COMPATIBILITY AND VERSATILITY
The APTmpX Software is compatible with APT hardware codecs, APT's virtual codec and Ecreso FM AiO Series transmitters ranging from 100 watts to 2 kW. This compatibility provides broadcasters with a versatile solution that can adapt to a wide range of broadcasting needs. By incorporating the APTmpX Software, broadcasters can maintain the highest signal quality across the entire MPX over IP broadcast chain, ensuring a consistent and superior listening experience for their audience.
MARKET IMPACT AND INNOVATION
"APTmpX Software represents a key change in the market by simplifying and reducing the cost of FM broadcast chains, while benefiting from APT's highest standards of quality," said Gregory Mercier, director of product marketing at WorldCast Systems. This statement highlights the transformative impact of the APTmpX Software on the broadcasting industry. By offering a high-quality, cost-effective MPX compression and transport solution, WorldCast Systems is setting new standards for operational efficiency and on-air quality.
WorldCast Systems
Ecreso FM AiO Series 2 kW
WorldCast Systems proudly submits the new generation Ecreso FM AiO Series 2 kW transmitter for the Best of Show Award, recognizing its exceptional blend of cutting-edge technology, outstanding performance and cost-effectiveness.
INNOVATION AND ADVANCED SOFTWARE FEATURES
The Ecreso FM AiO Series exemplifies innovation in broadcast technology. Now available in a powerful 2 kW option, this compact 3U device caters to the medium-power needs of broadcasters without compromising on performance. The integration of advanced features such as the APT IP Decoder for audio/MPXoIP, full RDS encoder, five-band sound processor, stereo encoder and UECP capabilities, streamlines operations, reduces the need for additional equipment, and simplifies broadcasters' setups. This results in significant cost savings and a seamless broadcasting experience.
EFFICIENCY AND ECO-FRIENDLINESS
One of the standout attributes of the Ecreso FM AiO Series is its remarkable efficiency and eco-friendliness. The transmitter operates at up to 76% efficiency in standard operation, and when the patented SmartFM technology is activated, broadcasters can further reduce their energy consumption by up to 40%. These savings are achieved without compromising audio quality or coverage. The Ecreso FM AiO Series 2 kW is the most cost-competitive and environmentally friendly transmitter in this power range on the market today.
ROBUSTNESS, REDUNDANCY AND EASE OF MAINTENANCE
The Ecreso FM AiO Series offers high robustness and ease of maintenance. The new 2 kW model features dual hot-swappable power supply units (PSUs), two fans and two RF pallets, ensuring maximum on-air time by maintaining operation even in the event of component failures. The intuitive GUI, planar RF design and state-of-the-art fans, removable from the front panel, ensure long-lasting durability and simplified maintenance procedures, essential factors in achieving a low total cost of ownership.
ENHANCED FOR SFN APPLICATIONS
WorldCast Systems has released software version v3.3.0 for the 2 kW model, along with the entire AiO Series range (100 W–2 kW). This version enables Single Frequency Network (SFN) applications through the integration of a built-in GPS receiver for internal clock references, plus 10 MHz and 1PPS outputs for peripheral equipment. The APT IP Decoder integrates APT SynchroStream technology, ensuring audio content synchronization over the network. It also includes a new backup mode, Hot Standby Mode, for automatic stream switching in case of connection loss, ensuring continuous broadcast.
The AiO Series 2 kW supports the latest SNMP v3 revision, enhancing the security and management capabilities of the FM transmitter fleet. "With these enhancements, the AiO Series becomes the most advanced FM transmitter for SFN applications, offering unparalleled precision, reliability and security," said David Houzé, Ecreso product manager at WorldCast Systems.
MARKET IMPACT AND RECOGNITION
By offering unparalleled audio quality, reliability and cost-effectiveness, the Ecreso FM AiO Series 2 kW contributes significantly to broadcasters' success. Its unique blend of features and benefits sets a new industry standard and exemplifies a truly outstanding product, which we believe makes it a prime candidate for the Best of Show Award.
Xperi
AIM Player
AIM Player is an audio platform product for radio stations from award-winning outfit All in Media, a part of Xperi, available on a licensed or partnership basis for the radio industry. It is designed to bring to the fore a unique model based on true partnership, a unique approach for an app developer. Moreover, for the first time, broadcasters can license a solution that supports all the key platforms, including mobile apps on iOS and Android, CarPlay and Android Auto for in-car mirroring, TV platforms such as Android TV and Apple TV, the Android Automotive app and smart speaker integrations.
The end-user will get to enjoy a free-to-access radio and audio app offering live radio and podcasts. The platform also supports direct integration with DTS AutoStage, the world's leading connected car hybrid radio and media platform, which gives broadcasters direct access to the award-winning DTS AutoStage broadcaster portal (analytics).
The launch partner is Nation Radio with the Nation Player app, a free-to-access radio and audio app offering live radio stations and podcasts.
AIM Player includes a series of components that can be arranged and customized to construct a premium audio product. This allows audio brands to save the time and resources usually required to launch a digital product and compete in the busy audio market.
KEY FEATURES
• Regular updates, including improvements and new features
• Live radio support using AIM's renowned streaming engine, including broadcast and digital-only stations
• Full audio on-demand support, including podcasts and catch-up
• Video playback including widescreen and horizontal videos
• API-driven layout — keep the app fresh without submitting app updates
• User registration and login using email and popular identity providers, with various gating options
• Make it yours — we can develop custom components for your needs, or use our pre-built options
• Content-focused design, giving you the flexibility needed to let your content shine
• User-driven personalization: let listeners make the app their own
• Algorithmic suggestions to power discovery and prolong engagement
• Data reporting via third-party CDPs and analytics tools, helping you understand your listeners better
• Monetization possibilities via display and instream ads or in-app subscriptions
• Engagement through tools like push notifications, competitions and curation
• CarPlay and Android Auto support as standard, letting listeners connect on the move
• Chromecast and AirPlay support included
Xperi/DTS
DTS AutoStage New Features — Gaming
DTS AutoStage is an AI-powered, global media entertainment platform that brings a content-first media experience across radio, audio, video and now gaming to digital cockpits in cars.
DTS AutoStage combines machine learning, data science and human curation to deliver personalized recommendations for end users. Metadata catalogs, a scalable cloud infrastructure, decades of media UX expertise and automotive infotainment knowledge are brought together to produce the first integrated, aggregated video system specifically designed for automotive.
DTS AutoStage operates based on viewer behaviors and natural voice selection to provide users with the curated content they desire. It enables a video and audio content consumption experience that is enriched with metadata and functionality, while making it easy for consumers to discover what they are in the mood for within thousands of entertainment options through algorithms primed to their preferences.
Respondents in a recent consumer survey deployed by DTS, especially the younger cohort, expressed a strong desire for in-vehicle entertainment that is much more than a mirror of their smartphone. Gaming is a new and very promising way broadcast radio can stay connected to consumer interests and preferences, and the DTS AutoStage platform is designed to help broadcasters leverage this opportunity so they can generate increased engagement, extend reach and keep radio deeply relevant in today's vast panoply of in-vehicle entertainment.
Three categories of gaming have emerged for integration in new cars — Driving Games, Passenger/RSE Games and Console Games. Many of the games offered in these categories are backed by well-established brands and have been developed by the world's leading game designers. At Xperi, we are expanding our DTS AutoStage technology and plan to incorporate digital games into our platform, where we are focused on audio driving games — games primarily designed around content using an audio interface that is safe for drivers to play — because we believe these are the ones poised to gain the greatest traction.
And these games represent significant opportunities for broadcasters, with boundless possibilities for creating new game content that can run alongside broadcast radio shows, just as enriched metadata is already enhancing, reinforcing and making broadcast content more immersive. For example, a "Name That Tune" or "Guess That Lyric" game could run alongside musical programming, or a slate of games incorporating local, regional and national trivia and sports quizzes could be generated. The possibilities for today's creative programmers are endless, and the benefits could be enormous.
For consumers, DTS AutoStage enables discovery of new or related content without fiddling with controls or alternating between sources such as radio (analogue, digital, satellite) and podcast or music streaming services. It allows for a more expansive, immersive listening and viewing experience, individualized to each consumer's needs, wants and habits. All this varied entertainment content, across the globe, is continuously refreshed with over-the-air updates.
Zixi
Intelligent Data Platform (IDP)
Zixi's Intelligent Data Platform (IDP) employs advanced analytics, machine learning (ML) and artificial intelligence (AI) to decipher the extensive stream data traversing intricate media supply chains. This helps human operators direct their attention to transmission issues and gain insight into potential problem areas before failures become apparent. IDP introduces five novel video-specific advanced machine-learning classifiers and anomaly detection models to provide actionable operational insights.
Accessible via Zixi's ZEN Master control plane, the Intelligent Data Platform (IDP) empowers media companies to craft more intelligent media workflows through the utilization of AI and ML-enabled toolsets. These tools facilitate sophisticated event correlation, data aggregation and deep learning, thereby enhancing broadcast workflows by minimizing risks in the broadcast environment. Years of research in data architecture, advanced analytics and event correlation have culminated in IDP, allowing users to effectively oversee, manage and comprehensively understand their inputs and outputs. By offering unparalleled visibility into vast data sets, IDP provides customized intelligence and data visualizations, enabling users to proactively address potential issues before they escalate.
Powered by Zixi's Software-Defined Video Platform (SDVP), IDP gains unique access to network and video quality telemetry, leveraging this data to generate proactive insights and identify patterns that may elude human observation.
TECHNOLOGICAL ADVANCEMENTS
The Intelligent Data Platform (IDP) comprises a robust data bus that aggregates over 9 billion data points daily from an extensive array of sources within the Zixi Enabled Network. These sources include 400 devices and systems embedded with Zixi technology, such as encoders, cloud services, and networks, as well as proprietary data sources like the Zixi protocol and Zixi Broadcaster, which are widely utilized by leading media organizations globally. This rich telemetry data feeds into five continuously updated machine-learning models within the IDP, where events are correlated and patterns are discerned.
Zixi's innovative incident-prediction feature harnesses these machine-learning models and AI capabilities to forecast with an impressive 95% accuracy whether an incident is likely to occur. This empowers operators to proactively intervene before faults manifest, enhancing operational efficiency and averting potential disruptions.
Furthermore, Zixi has introduced new AI/ML-driven models that are computationally efficient and enable real-time measurement of perceptual video quality without the need for reference frames. These models accurately calculate estimated PSNR and VMAF standards-based video quality scores, facilitating real-time alerts for various video impairments that may disrupt the viewer experience.
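For context, the full-reference PSNR that such no-reference models estimate is conventionally derived from the mean squared error between a reference frame and a degraded frame. The sketch below shows that standard definition on toy pixel data; it is not Zixi's estimator, which produces the score without access to the reference.

```python
# Conventional full-reference PSNR between an 8-bit reference frame
# and a degraded frame: PSNR = 10 * log10(MAX^2 / MSE). Zixi's models
# estimate this score without the reference frame.
import math

def psnr(reference, degraded, max_val=255):
    """reference/degraded: equal-length sequences of 8-bit pixel values."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # frames are identical
    return 10 * math.log10((max_val ** 2) / mse)

ref = [52, 55, 61, 66, 70, 61, 64, 73]   # toy reference pixels
deg = [54, 55, 60, 66, 72, 60, 64, 70]   # toy degraded pixels
print(round(psnr(ref, deg), 2))          # 44.37
```

Higher values mean less distortion; scores above roughly 40 dB are generally considered visually transparent for 8-bit video.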
Machine-learning algorithms embedded within the IDP distinguish between "normal" or "healthy" data and "at-risk" data, alerting users to potential issues on the horizon. The data is communicated to ZEN Master, where it is depicted as indicators and graphs over time.
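The healthy-versus-at-risk distinction can be illustrated, in greatly simplified form, as a deviation test on a telemetry series. Zixi's production classifiers are of course far richer; the sketch below only shows the basic idea of flagging points that stray from the learned norm, with hypothetical sample data.

```python
# Minimal sketch of "healthy" vs "at-risk" telemetry classification
# using a z-score threshold (e.g. on packet-loss counts per interval).
# Illustrative only; not Zixi's actual models.
import statistics

def flag_at_risk(samples, threshold=3.0):
    """Return indices of samples deviating more than `threshold`
    standard deviations from the series mean."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # perfectly flat series: nothing anomalous
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]

telemetry = [2, 3, 2, 2, 3, 2, 3, 2, 40, 2, 3, 2]  # spike at index 8
print(flag_at_risk(telemetry))                      # [8]
```

A real system would compute such statistics over rolling windows and combine many signals before raising an alert, rather than thresholding a single series.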
By leveraging billions of data samples daily from numerous media companies, Zixi's machine-learning models undergo constant training for accuracy and consistency. As more data is aggregated across the platform, these models continue to evolve and improve over time, ensuring the platform's reliability and efficacy in addressing complex media delivery challenges.
ADTECHNO Inc.
Dante AV Ultra Encoder and Decoder
ADTECHNO introduces the Dante AV Ultra Encoder (DAV-01ST) and Decoder (DAV-01SR), devices at the forefront of AV over IP technology. Developed in line with Audinate's Dante AV Ultra standard, these devices redefine audio and video distribution across a 1 GbE network, offering cutting-edge solutions for professional AV environments.
The DAV-01ST and DAV-01SR enable the seamless transmission of eight-channel audio, 4Kp60 4:4:4 video, USB, IR and RS-422 serial command signals over a standard Ethernet network. Their integration with Dante Controller software allows AV professionals to manage and route signals with precision, offering a highly flexible and efficient workflow.
The DAV-01ST Encoder features 12G-SDI and HDMI input ports, converting these signals into Dante AV Ultra. With loop-out ports for each input, the device supports signal verification on preview displays and easy integration with other AV transmission equipment. This feature ensures a reliable and efficient workflow in environments such as live events, corporate settings, and broadcast studios.
The DAV-01SR Decoder complements this functionality with dual 12G-SDI and HDMI output ports per channel. This enables simultaneous conversion of Dante signals into both formats, allowing video output to multiple displays or AV devices at once. This versatility makes it an ideal solution for venues that demand reliable multi-output capabilities, such as stadiums, theaters and places of worship.
Both devices are equipped with a 2.0-inch IPS display, enabling video previews and audio level monitoring. Users can easily adjust Dante settings like sampling rate and latency through an intuitive rotary knob. The inclusion of a headphone jack adds the convenience of real-time audio monitoring, making these devices essential tools for professionals working in critical environments.
Powered by the ProAV-optimized Colibri codec, the DAV-01ST and DAV-01SR deliver visually lossless 4K60 video with imperceptible latency.
This codec is specifically designed to handle the complexity of 4K graphical content, ensuring that video and audio are transmitted with the highest quality and reliability. These devices are tailored for use in demanding environments such as live events, boardrooms, and high-profile broadcasts.
Feature highlights include the ability to transport 4K HDR UHD video sources with subframe latency, advanced signal management for routing audio, video, USB and control signals independently, and perfect audio-video synchronization through a single network clock. The devices also offer auto-downscaling and HDCP revision conversion on HDMI outputs, ensuring compatibility with a wide range of display devices.
Additional features include PoE+ (30W IEEE802.3at) or DC (12V) power options, 1U half-rack mountability and compatibility with Dante Domain Manager (DDM). These devices are designed to integrate seamlessly into existing AV infrastructures, making them adaptable and easy to use.
Wim Roose, Audinate's senior product manager, remarks, "ADTECHNO's Dante AV Ultra Encoder and Decoder signify a significant advancement in professional AV technology. Their commitment to high-quality, user-centric solutions ensures the next-generation audiovisual experience."
Amazon Web Services
AWS: Live Cloud Production
Live Cloud Production (LCP) on AWS is revolutionizing live event production, offering a framework for live event venues and production teams to virtualize how they capture, produce, and distribute live event coverage inside and outside of the venue. With it, they can encode camera sources on-site at event venues and route them into a cloud-based production control room, where they can leverage their preferred pay-as-you-go services and AWS Partner technologies to produce and deliver live event coverage. They can seamlessly perform video and audio switching, integrate graphics, and manage replays from anywhere while maintaining the same functionality as traditional on-premises setups. The approach enhances workflow flexibility and reduces on-site infrastructure and travel requirements, with the resilience, reliability, and engineering rigor that live productions require.
Furthermore, LCP on AWS enhances sustainability, decentralizes operations, and allows production teams to collaborate from virtually anywhere. The result is an agile, innovative, and efficient workflow that sports leagues like the National Hockey League (NHL) are already taking advantage of. On March 22, 2024, the NHL executed the first fully cloud-based live professional sports broadcast in North America on AWS for a matchup between the Washington Capitals and the Carolina Hurricanes. Deviating from traditional broadcast norms, the LCP workflow enabled the NHL to create, manage, and distribute the live broadcast at 1080p resolution with a largely remote team.
Historically, productions like these relied heavily on massive production trucks to manage and distribute game feeds
from venues to fixed control rooms filled with dedicated workstations and equipment. With evolutions like LCP on AWS, this model is poised to change, as the approach supports remote collaboration across various locations, reducing the associated travel costs and carbon emissions of on-the-ground setups. NHL's LCP application involved a single on-site technician at Capital One Arena in Washington, DC. The primary broadcast was produced remotely from NHL Network studios in Secaucus, NJ, and a secondary stats-enhanced alternate broadcast, "NHL EDGE Unlocked," was managed from NHL headquarters in Manhattan. More broadly, LCP on AWS facilitates quick access to archive footage and enables the rapid creation of highlights, improving workflow efficiency and unlocking new opportunities to create even more fan-tailored content to drive deeper audience connections. As David Lehanski, NHL EVP of Business Development and Innovation, put it, "Cloud production is going to be at the forefront of fan development around the world for the NHL in the very near future."
The environmental benefits of LCP on AWS are equally compelling. By reducing on-site personnel and equipment, live production teams and venues can lower their carbon footprint. For large-scale events that require multiple feeds, such as major concerts, corporate launches or sports playoffs, LCP on AWS offers an opportunity to minimize energy consumption and logistical challenges. Given all the advantages outlined above and its transformative impact on live event production, LCP on AWS is a framework well-deserving of Installation Best of IBC recognition.
BZBGEAR
BG-IPGEAR-Ultra
Meet BG-IPGEAR-Ultra, our advanced 4K Ultra HD AV over IP kit. What makes IPGEAR Ultra truly unique is its combination of support for seamless switching, multiviewing, video walls, and USB 2.0/KVM control.
Key features include HDMI 2.0b and HDCP 2.2 compliance, support for video resolutions up to 4K@60Hz (4:4:4), and a video bandwidth of 18 Gbps. Audio formats support LPCM 2.0CH at 48 kHz. The integrated design of the encoder and decoder supports both fiber and copper connections, with a window roaming function that allows a decoder unit to process up to 16 signals for arbitrary windowing, roaming, overlaying and splicing. This AVoIP system boasts a wealth of other capabilities, including KVM seat management, universal H.264/H.265 protocol support, high-def background images, point-to-point signal extension, signal distribution and multicast mode, all over a 1G network switch.
It also features integrated central control over RS-232, control via our BZBGEAR control app (available for free on Windows, Mac, iOS and Android), user rights management, KVM functionality and standard PoE support. The encoder supports HDMI local loop-out and audio embedding and de-embedding functions, along with comprehensive visual interaction modes, scene preview and environmental visualization control.
BG-IPGEAR-ULTRA
The IPGEAR Ultra transceiver is a versatile AVoIP solution offering exceptional audio and video quality with ultra-low latency. It supports both fiber and copper connections, automatically prioritizing copper for seamless switching. Capable of extending audio and video signals over a standard 1G network switch, it reaches distances up to 328 feet/100 metres. The transceiver enhances its functionality with seamless switching, video wall capabilities, multiview options, and KVM seat management. Its flexible design allows it to function as either an encoder or a decoder, simplifying installation and inventory management. No
more worrying about the correct number of transmitters and receivers needed!
When configured as an encoder, IPGEAR Ultra supports one HDMI 2.0 input and one HDMI loop-out, along with analog audio embedding and de-embedding. In decoder mode, it provides one HDMI 2.0 output with analog audio de-embedding. Also, the encoder supports an H.264 video preview stream, making it a comprehensive solution for various AV needs. Built on a Linux platform, it offers flexible control methods and integrates advanced audio and video processing, networking, visualization, centralized control and full network distribution technologies.
BG-IPGEAR-ULTRA-C
IPGEAR Ultra C complements the transceiver by providing a sophisticated video distribution and control system designed to manage the IPGEAR Ultra AVoIP system. It integrates servers, encoders and decoders, allowing users to control the system through a web-based interface on a PC. This facilitates easy management of video sources, splitscreen modes, and display configurations. The system also supports KVM functionality for controlling connected PCs and offers mobile control via our BZBGEAR control app, making it ideal for commercial displays, broadcasting, corporate settings and public venues.
Cobalt Digital
SAPPHIRE BBG-2110-4H/S
JPEG-XS is a lightweight video compression scheme that combines extremely low latency (on the order of a few lines of video) with good bandwidth savings, when compared to baseband video. Carriage of JPEG-XS over IP networks is defined in SMPTE ST 2110-22 and VSF TR-08:2022.
Such JPEG-XS streams can be combined with ST 2110-30 audio and ST 2110-40 ancillary data.
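A JPEG-XS stream of this kind is typically announced with an SDP description, with the RTP payload format defined in RFC 9134 (media subtype jxsv). The following minimal example is purely illustrative: the addresses, payload type and format parameters are placeholder values, not taken from Cobalt documentation.

```
v=0
o=- 123456 1 IN IP4 192.0.2.10
s=JPEG XS example stream
t=0 0
m=video 20000 RTP/AVP 112
c=IN IP4 239.0.2.1/64
a=rtpmap:112 jxsv/90000
a=fmtp:112 sampling=YCbCr-4:2:2; width=1920; height=1080; depth=10; exactframerate=50; colorimetry=BT709; TP=2110TPN
a=ts-refclk:ptp=IEEE1588-2008:traceable
a=mediaclk:direct=0
```

Combining such a stream with ST 2110-30 audio and ST 2110-40 ancillary data is then a matter of adding further media sections that share the same PTP reference clock.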
The SAPPHIRE BBG-2110-H/S, BBG-2110-2H and BBG-2110-4H/S mini-converters address the need to display received Baseband or JPEG-XS content on HDMI monitors in a simple and cost-effective way. They are designed to be mounted behind the monitor. The BBG-2110-H/S is a single-channel unit with simultaneous SDI and HDMI outputs, the BBG-2110-2H is a dual-channel unit with HDMI outputs, and the BBG-2110-4H/S is a quad-channel unit with simultaneous SDI and HDMI outputs. The units have the following features:
• Dual SFP cages with support for 10G and 25G Ethernet ports.
• Additional 1 Gb copper Ethernet port for out-of-band management.
• Support for SMPTE ST 2022-7 Seamless Switching up to Class-C operation for WAN environments.
• Support for NMOS IS-04/IS-05 control and management, both in-band and out-of-band.
• In-band and out-of-band PTP support.
• Fan-less option for quiet environments and higher reliability (H/S and 2H models only).
• Support for uncompressed ST 2110-20 video.
• Support for asynchronous (non-PTP-locked) signals for IPMX compliance.
• Make-before-break switching: When changing channels, the unit will acquire the new channel and lock to it before releasing the old channel, to provide a seamless experience. This also includes audio ramp down/ramp up to avoid a click.
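The make-before-break logic in that last bullet can be sketched in a few lines. The class and method names below are invented for illustration, assuming a receiver that joins the new stream before leaving the old one and ramps audio gain across the swap; this is not Cobalt firmware.

```python
import time

class Receiver:
    """Toy model of make-before-break channel switching with audio ramps."""

    def __init__(self):
        self.active = None      # channel currently feeding the output
        self.audio_gain = 1.0   # 1.0 = full level, 0.0 = muted

    def _ramp(self, start, end, steps=10, step_time=0.001):
        # Short linear gain ramp so the transition produces no audible click.
        for i in range(1, steps + 1):
            self.audio_gain = start + (end - start) * i / steps
            time.sleep(step_time)

    def acquire(self, channel):
        # Stand-in for joining the new multicast group and locking decode.
        return {"name": channel, "locked": True}

    def switch(self, new_channel):
        # 1. Acquire and lock the NEW channel while the old one still plays.
        candidate = self.acquire(new_channel)
        if not candidate["locked"]:
            return self.active   # keep the old channel if the new one fails
        # 2. Ramp audio down, swap channels, ramp back up: no click.
        self._ramp(1.0, 0.0)
        self.active = candidate
        self._ramp(0.0, 1.0)
        return self.active

rx = Receiver()
rx.active = rx.acquire("channel-1")
rx.switch("channel-2")
print(rx.active["name"], rx.audio_gain)  # channel-2 1.0
```

The key design point is ordering: the new channel is fully locked before the old one is released, so the viewer never sees an unlocked picture.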
The SAPPHIRE BBG units are the ideal choice for directly displaying incoming JPEG-XS content on HDMI monitors, including content originating from a WAN connection. The units can be mounted directly behind the monitor, will not take up any rack space, and are incredibly quiet, which makes them ideal for control rooms. JPEG-XS is the ideal technology for contributing remote production streams. It provides extremely low latency and high quality, at a fraction of the bandwidth of a baseband stream. In a remote production environment, such streams need to be routed to monitors for viewing. The BBG-2110 units from Cobalt provide this functionality — they are small, rugged units that can be installed behind the monitor and provide direct HDMI outputs, obviating the need to provide additional conversion. The units support ST 2022-7 seamless switching if desired and are designed for Class-C (WAN) operation. Finally, they do not have internal fans, making the units quiet and reliable.
When doing remote production, it is necessary to receive and display JPEG-XS streams coming over a WAN connection, typically using high-end consumer-grade monitors. These monitors are typically in a production environment, where space and power are at a premium, and noise must be kept at a minimum. The BBG-2110 units satisfy all these needs — they are mounted behind the monitor, thus saving space; they are low-power, and they are fan-less, which means absolutely no noise and improved reliability.
Glensound
Glensound European Parliamentary Broadcast System
The Glensound European Parliamentary Broadcast System represents a significant advancement in integrated audio solutions for parliamentary and committee chambers. Known for delivering reliable solutions of this type since 1995, Glensound has established itself as a trusted partner for national parliaments. The European builds on that experience, offering a solution designed to meet the specific needs of parliamentary environments.
The European allows for multiple I/O interfaces, not only for mic and speaker connections, but also for card readers for identification of MPs or for integration of voting panels. It allows for all network connections using Dante, with a central DSP core for advanced functions and control, audio distribution for broadcast and recording, software control, and integration and control with other AV requirements.
The European addresses the common challenges of historic buildings: large open spaces with multiple mic sources and multiple speakers. Audio delay is measured from each individual microphone to every individual speaker. Delay settings then switch dynamically as any mic is identified as the priority, maintaining a balanced sound through the chamber, free from unwanted delay that can colour the sound.
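The path-by-path delay measurement described above rests on simple acoustics: sound travels at roughly 343 m/s, so each mic-to-speaker path has its own travel time, and the reinforcement delay must follow whichever microphone currently has priority. A sketch of the idea, with invented distances and names purely for illustration (this is not Glensound's DSP code):

```python
SPEED_OF_SOUND = 343.0  # metres per second at roughly 20 degrees C

def path_delay_ms(distance_m: float) -> float:
    """Acoustic travel time from a mic to a speaker, in milliseconds."""
    return distance_m / SPEED_OF_SOUND * 1000.0

# Hypothetical distances (m) from each microphone to each loudspeaker.
distances = {
    ("mic_A", "spk_1"): 6.9,
    ("mic_A", "spk_2"): 13.7,
    ("mic_B", "spk_1"): 20.6,
    ("mic_B", "spk_2"): 3.4,
}

def delays_for_priority(mic: str) -> dict:
    """Delay applied to each speaker when `mic` takes priority, aligning
    the reinforced sound with the direct acoustic wavefront."""
    return {spk: round(path_delay_ms(d), 1)
            for (m, spk), d in distances.items() if m == mic}

print(delays_for_priority("mic_A"))  # {'spk_1': 20.1, 'spk_2': 39.9}
```

When priority passes to a different microphone, the whole delay map is swapped, which is exactly the dynamic switching the European performs.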
Glensound designed a frequency shifter as part of the European that actively prevents feedback and further increases intelligibility of the system.
By managing the audio environment and integrating audio and video control seamlessly, the European utilises a networked audio infrastructure that ensures both high performance and full redundancy.
What sets the European apart is its comprehensive approach. Rather than relying on a collection of off-the-shelf components, the European Parliamentary Broadcast System is a fully integrated solution currently featuring up to 20 specialised hardware devices. Each part of the system is designed to work together, providing a cohesive and reliable experience that expands with every project to meet requirements directly.
Hardware control panels are a key feature of the European. These panels allow for precise audio management, voting and streamlined parliamentary procedures. They are adaptable to bespoke furniture and are fine-tuned to address any acoustic
issues, improving speech intelligibility and reducing delays. Whether it is a committee room or a control panel for 150 microphones in a main chamber, a panel will be designed to suit.
The software control, developed by Squared Paper, adds another layer of functionality. Known for its collaboration on parliamentary systems, Squared Paper has integrated features for security, log-in and real-time management of audio and video feeds. The central rack manages multiple audio feeds and network switching, while the central DSP (Digital Signal Processor) adjusts audio pitch and equalises sound.
The European emphasises reliability. It includes networked audio and redundancy features that keep the system running smoothly. This reliability is crucial in parliamentary settings, where continuous communication and logging are essential.
At IBC2024, the Glensound European Parliamentary Broadcast System made its international debut, demonstrating its capacity to address complex parliamentary and broadcast needs with reliability and precision. This system stands out for its seamless integration, tailored customisation, and consistent performance, setting a new standard for audio solutions in parliamentary environments.
Vizrt
Viz Connect Tetra
Live production has historically required multiple video converters and software platforms for differing workflow needs, and remote production typically requires additional complexity and conversion, especially with cloud integration. Viz Connect Tetra addresses these common issues by uniting everything needed for remote live contribution into one device.
Tetra is an ultra-compact live production workstation that enables multi-channel 4K video and audio connectivity with a simple internet or network connection, anywhere in the world. It helps meet the ever-changing demands of multi-camera, on-location, live productions with four flexible channels supporting up to 4K without compromise. Viz Connect Tetra's I/O channels can be used for up to 12G NDI to SDI, 12G SDI to NDI or even NDI to NDI conversion, and the unit is as much at home in a server-room rack as on a desk, on location, wherever a connection is needed.
Tetra's in-built white balance and color correction tools mean these all-important quality adjustments can be made directly at the point of conversion. It brings never-before-seen flexibility in audio, whether via NDI, SDI, or ASIO and WDM virtual sound cards. Each of the four I/O channels supports 16x16 audio routing for patching on the go, making it easy to respond to any live production requirement. To put it simply: Tetra truly enables easier video and audio conversion, anywhere in the world.
Video productions also rely on collaboration. With simpler access to the cloud via NDI 6, Tetra features access to NDI Bridge (suiting almost any connectivity constraints), which means it can send and receive NDI for remote production.
The compact converter allows users to seamlessly enable
production capabilities with cloud and remote productions — without additional hardware.
Tetra helps remote content creators to securely connect, expand their sources, and create a private peer-to-peer content network. It integrates video and audio feeds from different locations into a single NDI or SDI live production environment via an NDI Bridge Host in the data center. With NDI Bridge Join mode, Tetra systems can be securely linked to NDI Bridge Hosts anywhere worldwide, enabling bidirectional sharing of content and contribution between any location with internet or WAN access and facilitating the collaborative process in productions. Tetra bridges the gap between HTML5 graphics tools, traditional SDI infrastructure and the world of NDI IP technology.
With seamless connectivity to multiple post-production tools, Tetra enables access to flexible, cloud-based and remote production environments. This extends to NDI-enabled post-production systems: editors, and even producers and directors, no longer need to be in the same room.
Tetra's compact footprint, 12G 4K UHD output and the inherently secure encrypted workflow provided by NDI Bridge, with the data-center host in control of connections, make it the perfect desktop companion for post-production professionals to land their program output on broadcast-grade displays wherever they are, even when operating via remote desktop systems miles away.
Tetra rolls all you need for remote live contribution into one device, revolutionizing workflow, scalability and stability for live productions. It simplifies remote, local and hybrid content creation in a portable workstation.
Wasabi Technologies
Wasabi AiR
Wasabi AiR is a new class of intelligent media storage in the cloud purpose-built for the Media and Entertainment (M&E) industry that is redefining how media archives can be used. As the industry's first AI-enabled searchable active archive, Wasabi AiR combines advanced AI and machine learning features, such as metadata auto-tagging and multilingual transcription, with Wasabi's high-performance, predictable hot cloud storage so users can transform the way they produce and monetize content. There is no additional charge for the use of the AI.
Most major media companies are sitting on a mountain of gold — their content archive. Over the years, these organizations have racked up thousands upon thousands of hours of content, but often this content cannot be used, since it would take hours of manual labor to search through. Wasabi AiR's recognition technology combs through unstructured data like video segments, images or documents to identify people, audio, emotion, logos, landmarks, objects and text with market-leading accuracy. Data ingested into the Wasabi Cloud is automatically tagged with advanced AI and machine-learning metadata so users can quickly search for any content in their storage and bring it to market as quickly and efficiently as possible, and with relevance. Rather than spend hours manually searching for the desired video clip or asset, creatives have more time to actually be creative with content at their fingertips.
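The tag-then-search workflow described above can be illustrated with a toy index. The data structure and function below are invented for illustration and are not Wasabi's actual API: each asset carries AI-generated tags with timestamps, so a text query jumps straight to the right moment in the right clip.

```python
# Hypothetical searchable archive: assets with AI-generated, timestamped tags.
archive = [
    {"asset": "game_0314.mxf", "tags": [
        {"label": "goal celebration", "t": 512.0},
        {"label": "team logo", "t": 1044.5},
    ]},
    {"asset": "presser_0315.mp4", "tags": [
        {"label": "press conference", "t": 0.0},
        {"label": "team logo", "t": 88.2},
    ]},
]

def search(query: str):
    """Return (asset, timestamp) pairs whose tags match the query text."""
    q = query.lower()
    return [(a["asset"], tag["t"])
            for a in archive for tag in a["tags"]
            if q in tag["label"].lower()]

print(search("logo"))  # [('game_0314.mxf', 1044.5), ('presser_0315.mp4', 88.2)]
```

The value of the approach is that the expensive work (tagging) happens once at ingest, while every subsequent search is a cheap lookup over structured metadata.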
Wasabi AiR is perfect for use cases throughout the M&E industry. It enables post-production editors to quickly find and assemble video for news packages, highlight reels, social media content and more in real time. In the marketing and sponsorship space, Wasabi AiR can find logos with ease for sponsorship attribution and ROI analysis. Finally, it can tailor content to specifically meet the requirements of geo-diverse audiences.
M&E organizations are operating under tighter deadlines and narrower profit margins and are looking for ways to speed production workflows while controlling costs. Wasabi AiR's low, capacity-based pricing allows users the freedom to work with their content without being penalized with unpredictable fees for AI analysis, egress or API requests, at a price up to 80% less than competitors. Additionally, Wasabi AiR has the highest AI accuracy rates in the industry, and because the machine learning is owned and controlled by the user, accounts and data are secure and protected from ransomware, disasters or accidental deletions.
Major names such as the Boston Red Sox and Liverpool Football Club are already using Wasabi AiR to speed up their digital processes and deliver content to audiences faster than ever. With Wasabi AiR, intelligence becomes table stakes for the future of object storage. It is the catalogue that allows the M&E industry to find exactly what it's looking for in seconds, and therefore breathe new life into their data. As an industry-first offering, Wasabi AiR is the only product available to media and entertainment companies that enables the ability to utilize and monetize content in a matter of seconds.