


The audiovisual industry is evolving at an increasingly rapid pace, bringing the risk of falling into a state of technological overexcitement. This can lead companies in the sector to take erratic paths and lose sight of their customers’ true priorities. Since falling behind is not an option, companies must find the right balance to avoid missteps on the path to continuous innovation.
One broadcaster that seems to have understood this well is Latina Televisión, one of Peru’s leading channels. Known for its strong technological focus, the network remains committed to its core mission: bringing families together in front of the screen. Recently, the company installed an advanced robotic camera system to enhance its content production capabilities, and further upgrades are already in the pipeline. In an exclusive interview for this issue, its CTO, Pavel Pacheco, shares insights into what’s coming next.
But Latina isn’t the only broadcaster worth highlighting this month. We also visited the facilities of Telemadrid, one of Spain’s leading regional channels, to hear firsthand from its technical
directors about the latest technological upgrade that has transformed its production infrastructure. The channel has now embraced an IP-based architecture for program production, positioning itself at the cutting edge of broadcast technology. This upgrade serves as a reminder that innovation isn’t always driven by the biggest players in the industry.
From major broadcaster upgrades to one of the market’s most defining trends—virtual production. This topic is a frequent focus in our pages, given its undeniable relevance. In this issue, we take a closer look at Dimension, a powerhouse studio specializing in virtual production.
Its co-founder and CTO, Callum Macmillan, walks us through their most cutting-edge implementations, including custom-built enhancements for Unreal Engine and their groundbreaking ‘Virtual Humans’ technology. He also shares his perspective on where the market is heading. We highly recommend this interview to anyone looking for key insights into the future of this exciting technology—it won’t disappoint.
Editor in chief
Javier de Martín editor@tmbroadcast.com
Key account manager
Patricia Pérez ppt@tmbroadcast.com
Creative Direction
Mercedes González mercedes.gonzalez@tmbroadcast.com
Chief Editor
Daniel Esparza press@tmbroadcast.com
Administration
Laura de Diego administration@tmbroadcast.com
Published in Spain ISSN: 2659-5966
TM Broadcast International #138 February 2025
TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43
News PAN SHOT
BROADCASTERS | LATINA TV
TV: tradition and innovation to reach every home
We interviewed its CTO, Pavel Pacheco, to explore the broadcaster’s latest technology milestones and upcoming investment plans.
Effective technology, ease of production
We are used to seeing major television networks at the forefront of innovation, leading the way in implementing the most ambitious technological upgrades.
The show closed a record-breaking edition with more than 85,000 visitors, demonstrating the growing interest in a thriving audiovisual market in the midst of a full-swing transformation.
VIRTUAL PRODUCTION | DIMENSION Dimension Studio: a window into the future of virtual production
What will virtual productions look like in the future? We certainly can’t predict it, but if there’s one company with the authority to guide us on the path this technology will take from now on, it’s Dimension.
THE MANUFACTURER’S VISION | KILOVIEW
Judy Zuo, Kiloview’s CTO:
“One of the main challenges companies face when implementing AV-over-IP solutions is network reliability”
Within the section we regularly dedicate to manufacturers to learn more about their market vision and solutions, this time we focus on Kiloview, a company specializing in providing cutting-edge AV-over-IP solutions.
Amid the expansion of AI, this camera arrives to knock down another of those concepts that, until now, were part of our “great immutable truths”.
ISE closes a record-breaking edition with 85,000 visitors, 15% more than in 2024
Integrated Systems Europe (ISE) closes its latest edition with record-breaking numbers. One of the world’s leading audiovisual events, held at Fira de Barcelona from February 4 to 7, brought together a total of 85,351 visitors from 168 countries, representing a 15.5% increase compared to last year, according to a statement released by the organization. In terms of daily figures, the event welcomed 49,981 attendees on the first day, reflecting a 16.1% growth compared to the opening day of ISE 2024. The second day continued the upward trend, reaching 59,038 visitors—an increase of 14.4% from the previous year. The final day, in turn, attracted 49,716 attendees. In total, 110,540 registrations were recorded, with 185,700 visits across the four days. Additionally, the trade show
featured the highest number of exhibitors to date, with 1,605, and the largest exhibition space, covering a net area of 92,000 m².
At the conclusion of the event, Mike Blackman, Managing Director of Integrated Systems Events, stated: “ISE 2025 has surpassed all expectations, setting new milestones in both attendance and innovation. The energy and engagement from our attendees and exhibitors have been truly remarkable. From the dynamic show floor and impactful exhibitor showcases to a content-rich programme, this year’s edition has delivered an unmatched experience. We are excited to build on this success and look forward to welcoming everyone back for another inspiring edition in 2026.”
One of the main attractions was the Esports Arena in Hall 8.1, where video game competitions took place, showcasing the latest esports technology. Equally captivating was the Robotics and Drone Arena, which featured live demonstrations in these rapidly evolving fields. The Discovery Zone served as a launchpad for emerging companies, while the AVIXA TV Studio added an interactive dimension with live broadcasts and expert interviews. The content program at ISE 2025 was well received, particularly the new Track sessions on industry megatrends, organized in collaboration with AVIXA, CEDIA, and ISE, according to the organizers. The two keynote conferences, featuring Brian Solis and Quayola, were standout moments of the event, attracting full-capacity crowds, leaving only standing room available.
“This week was a living example of connections restored. This is what drives our industry forward – professionals coming together to learn from each other and experience the latest innovations,” said AVIXA President & CEO David Labuskes, CTS, CAE, RCDD. “The AVIXA team was proud to put together a wide-ranging programme, from in-depth summits to interactive sessions at Xchange LIVE and broadcast AV on display at the AVIXA TV Studio. ISE goes by in days, but the impact is immeasurable.”
Daryl Friedman, Global President & CEO of CEDIA, said: “ISE is always a highlight of the year for CEDIA and myself. This year was no exception. One of my greatest takeaways was the opportunity to connect with our global membership at the CEDIA Annual Meeting, where we celebrated our collective achievements and discussed the future of our industry. It was inspiring to meet face-to-face with integrators and manufacturers from around the world, reinforcing the strength and reach of our community. And, of course, experiencing the latest advancements in smart home technology firsthand was truly exhilarating – seeing innovation come to life on the
show floor reaffirms why CEDIA is at the heart of this rapidly evolving industry.”
ISE 2025 was attended by Mr. Salvador Illa Roca, President of the Generalitat of Catalonia; Mr. Jaume Collboni Cuadrado, Mayor of Barcelona; Mr. David Quirós Brito, Mayor of L’Hospitalet de Llobregat; Mr. Jordi Hereu, Minister of Industry and Tourism; Mr. Jordi Garcia Brustenga, Director General of Industrial Strategy and SMEs, Ministry of Industry and Tourism; and Ms. Carmen Páez, Undersecretary of Culture. ISE 2026 takes place from 3-6 February 2026 at Fira de Barcelona, Gran Via.
Oman’s Ministry of Information, also known as Oman TV, has unveiled a project that transforms its central theater into a hybrid space, merging live event capabilities with full broadcast functionality.
The project reflects Oman TV’s commitment to innovation, addressing the dual demands of live performances and seamless integration with its 20 studios that support five national channels. This hybrid functionality positions the theater as a versatile facility, capable of linking with other studios for real-time signal sharing while supporting live broadcasts and in-house events.
The central theater posed unique challenges, as traditional auditorium designs rarely include broadcast-grade redundancy.
Oman TV envisioned a facility that would function as an additional broadcast studio, enabling seamless transmission of signals in and out of the building. To meet these demands without compromising performance or reliability, the system required a robust IP networking infrastructure.
Pixel Solutions, the Muscat-based systems integrator, collaborated closely with Lawo to tackle these challenges. “This was a remarkable project for us,” says Anas Alkurdi, CEO of Pixel Solutions. “Lawo’s solutions and expertise helped us design a system that exceeds
Oman TV’s exacting standards, while our team gained invaluable experience in broadcast audio systems.”
At the heart of the theater’s audio infrastructure is a Lawo mc²56 MkIII audio production console with 32 faders, paired with an A__UHD Core audio engine. Together, they provide an unparalleled level of flexibility and processing power, with the ability to handle more than 1,000 DSP channels. This setup enables engineers to manage the in-house audio mix for live events while also controlling signals for broadcast distribution. The console seamlessly handles over 100 audio inputs and outputs, ensuring smooth routing and reliable performance for both applications.
Complementing this setup are several additional Lawo solutions.
Two A__stage64 stageboxes ensure seamless audio signal connectivity across the venue and beyond, adhering to ST2110 standards for pristine sound and robust resilience. Lawo’s HOME management platform offers centralized and intuitive management of all connected devices, streamlining operations in the hybrid production environment.
In addition to the broadcast audio system, the central theater features a sophisticated AVL
installation designed to enhance the audience experience. A striking 11m x 5m LED video wall from LEDman dominates the stage backdrop, the largest of its kind in Oman. The lighting setup includes over 60 Prolights fixtures, expertly installed in a challenging space with undulating ceilings accessible only by hidden catwalks. Audio coverage is powered by Bose Professional ArenaMatch AM10 loudspeakers and ShowMatch SMS118 subwoofers, configured for precise sound performance in collaboration with Bose’s design team.
The venue also features commentary and translation booths equipped with Studio Technologies solutions, such as the Model 214A announcer’s console and Model 348 intercom station. These systems were smoothly integrated with the Lawo infrastructure using standard protocols, demonstrating the adaptability of the overall design.
EVS announces the release of XtraMotion 3.0, expanding its signature super slow-motion capabilities with two new replay effects: Deblur and Cinematic.
With this latest release, EVS takes its renowned slow-motion technology to another level, according to the company. The new Deblur effect eliminates motion blur from fast-moving cameras for crystal-clear visuals, while the Cinematic effect simulates a shallow depth of field for a film-like aesthetic. Offered under a single license, these effects give sports broadcasters more creative possibilities to enhance the visual impact of key moments during live action.
This latest release also sets a new standard in speed, reducing the processing time to less than three seconds for applying the super slow-motion effect, even when paired with the deblur feature, as the company claims.
Seamlessly integrated into EVS’s LiveCeption replay and highlights solution, the operational workflow for XtraMotion 3.0 remains intuitive and efficient. Operators can trigger the application of the chosen effect directly from the LSM-VIA remote controller, with a single click on a shortcut button.
By using generative AI to convert standard footage into super slow-motion, deblurred, or cinematic-style clips, XtraMotion reduces the reliance on specialized
cameras traditionally required to achieve such effects. This not only cuts production costs but also encourages creative experimentation, enabling crews to explore unique camera angles and innovative storytelling techniques.
“With its full suite of creative effects, XtraMotion 3.0 truly showcases how generative AI can be used in live production to engage audiences in a more immersive way,” said Jan Mokallai, VP Live Production Solutions at EVS. “We’re excited to offer a tool that delivers exceptional speed and quality while remaining cost-effective, enabling sports broadcasters of all sizes to create high-impact, professional-grade content that engages viewers on another level.”
WestCom Broadcast Services GmbH, known for producing German news show SAT.1 NRW, has made an investment in its production infrastructure with the support of Logic media solutions GmbH. Central to this project was implementing the NRCS system Octopus 11, complemented by the Aveco ASTRA Studio production automation. Additionally, the Aveco Redwood WHITE system serves as a playout server and a graphics solution. The first live production with the new setup was conducted on 6 December 2024.
The HD-based infrastructure at WestCom Broadcast Services GmbH required a modernisation of the NRCS system and production automation to optimise workflows and meet future demands with greater flexibility. Logic implemented the Octopus NRCS system, equipped with 25 user licences, to achieve this. This system offers SAT.1 NRW journalists an intuitive user interface and innovative tools such as the Octopus Journalists App, enabling mobile access to the system – a crucial advantage for real-time, on-site news production.
With the Aveco ASTRA Studio production automation and the Redwood WHITE system, WestCom Broadcast Services GmbH benefits from optimised workflows and enhanced efficiency in daily news production for SAT.1 NRW, as Logic claims. The ASTRA systems cover live production, playout control, and graphics.
Logic media solutions managed the system implementation, conducted comprehensive training sessions, and continues to provide long-term support with a service package. “With Octopus and Aveco, we are leveraging powerful and reliable solutions from our portfolio, perfectly tailored to the demands of modern news production. We are delighted to support WestCom Broadcast Services GmbH on this step towards the future,” says Thomas Heyer of Logic.
Jerome Steiner, Deputy CTO at WestCom Broadcast Services GmbH, adds: “Our new infrastructure represents a technological setup that is not only future-proof but also extremely flexible. The combination of Octopus and Aveco enables us to efficiently handle the daily challenges of news production while being well-prepared for future requirements.”
Production Crew, a division of Riyadh-based AFLAM Productions, has invested in 12 Ikegami UHK-X700 UNICAM-XE 4K-UHD camera systems for integration with a new 12G-SDI OB truck. Each system includes the camera head plus a 7-inch LCD viewfinder, a BSX-100 base station, an OCP-300 operation control panel and a T-430 tripod mounting plate, according to a statement from Ikegami Electronics.
Designed and built by Dubai-based systems integrator El Baba Smart Technology, the vehicle focuses on large-scale outdoor events in broadcast-quality 4K-UHD HDR.
“We have been using HDK-73 cameras in our studios for many years and are very pleased with their performance and reliability, as well as the technical support provided by Ikegami,” comments Shady Mokdad, executive director of AFLAM Productions and managing director of Production Crew. “Our aim in choosing UHK-X700 cameras is to bridge the quality gap between broadcast and digital film”.
Pre-purchase tests of the camera’s three 2/3-inch CMOS 4K sensors confirmed its optical and electronic performance.
On the electronics side, each UHK-X700 offers high dynamic range, a wide color gamut, 16-axis color correction and a power-efficient ASIC.
The cameras can operate both outdoors and in studio environments. The UHK-X700 also includes optical vignetting correction, remote back-focus adjustment, plus viewfinder-based focus assist, safe-area markers and filter settings.
“We are very pleased to be chosen as a technology partner in this high-profile project,” adds Asim Saeed, Ikegami business development manager. “4K-UHD continues to increase in popularity as the logical mastering format for productions with a potentially long commercial lifetime. The UHK-X700/BSX-100 combination chosen by AFLAM’s Production Crew team allows cameras to be securely linked up to 3 kilometres from their operating location, including power to the camera head. Simultaneous output in HD SDR and UHD HDR video formats is supported, including mixed sources. 4K-UHD video
is available as a 12G-SDI feed directly from the camera head, enabling the UHK-X700 to be integrated into a wireless system.”
UHK-X700 cameras are 3840 x 2160 RGB UHD-native and designed for use in pedestal- or tripod-mounted studio and OB applications, mobile electronic field production and newsgathering. Optional high frame-rate capture at up to 2x speed in 4K or up to 8x speed in HD can be performed via the BSX-100 base station to allow slow-motion display of stadium or stage events.
Ikegami’s OCP-300 controller supports Ikegami’s ICCP and Ethernet-based control. It includes a touchscreen LCD with rotary encoders plus a memory card slot for full camera setup and filing capability. It also supports PoE (Power over Ethernet), so separate power supplies are not necessary.
Recently, TV Tech Inc. used Clear-Com products to support the growing demands of an episodic weekly television show currently in production and airing on a streaming platform out of Boulder, Colorado. Tasked with building the video and audio infrastructure for the show, TV Tech Inc. used the Gen-IC virtual intercom and Arcadia Central Station to connect on-site and remote teams. As the production expanded, they needed to connect off-site stakeholders with the studio team. Off-site team members needed access to program audio and point-to-point communication with executive producers. The objective was to combine local and remote communications within the
show’s budget constraints, as Bubble Agency claims in a statement.
The show is filmed in a studio located in Boulder, Colorado, with a production team that includes camera operators, sound engineers, directors, and on-site producers. Additionally, executive producers and writers collaborate remotely, making it essential to have a communication system that integrates both local and off-site teams.
TV Tech Inc. selected Arcadia Central Station to manage the network, and an LQ-R Series unit for distributing local IVC connections and integrating Gen-IC. The result was a unified system between the studio and remote users.
The team transitioned part of the network from static IP to DHCP, ensuring compatibility between managed hardware with static IP addresses and dynamic devices relying on the house network’s DHCP settings. This system allows devices outside the local network to connect seamlessly.
“Thus far, the Gen-IC system has performed exceptionally well for both TV Tech and the production,” said Derek J. Kelly from TV Tech. With these results, TV Tech Inc. is exploring the full integration of Clear-Com’s Gen-IC and Agent-IC™ into their broader ecosystem. “This project highlights how Clear-Com’s plethora of products can be tailored to meet different needs based on the production,” added Kelly.
SPOTV, South Korea’s provider of linear and digital sports content, has chosen a turnkey, fully automated playout solution from Imagine Communications to provide reliable, fast-turnaround live sports transmission for regional channels in Southeast Asia from a world-class teleport and broadcast center in Cyberjaya, Malaysia. Based on Imagine’s widely deployed Versio™ integrated playout platform and ADC™ automation, the end-to-end, future-proof ecosystem simplifies operations and easily scales to meet evolving needs.
SPOTV broadcasts top-tier International and Asian sports content across the region, including Wimbledon, the U.S. Open, MotoGP™, ROSHN Saudi League and Major League Baseball among others. To ensure high-quality service and meet growing audience demands, SPOTV chose Imagine’s Versio platform, which is relied on worldwide by media companies from single channel to large, multichannel environments and master control operations.
“Imagine Communications understood our requirements and equipped us with a tightly integrated, highly reliable playout infrastructure that can grow as our needs change,” said Prakash Maniam, Head of Broadcast Operations and Engineering at SPOTV. “We know that many media companies around the
world have successfully deployed Imagine playout technology, and we are confident that our modern, new system will be able to support our channel playout requirements for years to come.”
SPOTV’s new fully redundant 6+2 channel origination workflow is built on Versio with embedded instances of ADC automation software, which integrates with more devices and business systems than any other automation product, helping to ensure a seamless workflow environment. Imagine’s industry-proven Nexio+™ AMP® high-performance video servers with automatic input format detection manage content ingest. Material is managed and workflow control is provided by Versio Content Portal, and Nexio Motion™ asset management software ensures the correct content is loaded onto the Versio instances in plenty of time for scheduled playout.
Versio integrated playout delivers tiered solutions to fit any budget and channel creation need. In a compact form factor, it offers the full spectrum of playout
capabilities — advanced graphic insertion, Key and Fill support, live master control switching, SCTE insertion, audio shuffling, loudness control, and more — enabling SPOTV to reduce the number of physical devices needed in their transmission chain. In addition, the platform’s single-user interface simplifies SPOTV’s operations and enables easy monitoring and maintenance.
“We are honored that SPOTV has joined the ranks of leading global media companies around the world who have made Versio their playout platform of choice,” says Mathias Eckert, SVP & GM EMEA and APAC for Imagine Communications.
“This implementation reflects SPOTV’s commitment to excellence in live sports broadcast in the region and will enable them to work more efficiently today and prepare for whatever their future holds.”
The new playout system at SPOTV was implemented in association with local Imagine partner and systems integrator JAA Systems (JAA.S).
Blackmagic Design has announced that World Chase Tag (WCT) is using an acquisition, control and delivery workflow for both broadcast and streaming, built on Blackmagic URSA Broadcast G2 cameras and ATEM 2 M/E Constellation HD live production switchers.
Founded in 2012, World Chase Tag (WCT) has grown from casual gatherings into a worldwide phenomenon. In a typical match, two teams compete in a best-of-16 chases format. Each 20-second chase pits a chaser against an evader, with the winner staying on as the evader in the subsequent chase.
Loic Ascarino, WCT’s global licensing manager, explains how this sport’s unique requirements led to broadcast events worldwide. “It’s a uniquely fast paced sport with a rapidly growing fan base. Our YouTube channel alone has more than one million subscribers. With traditional broadcast also a consideration, we needed a production workflow capable of supporting both streaming and broadcast,” he begins. “To tackle this, we developed a software solution that integrates seamlessly with a Blackmagic hardware backbone.”
WCT developed a production workflow built around Blackmagic Design hardware to address the need. “We aim to showcase athletes in an accessible, thrilling way while delivering a top tier live experience,” explains Ascarino.
Their biggest events utilize up to three URSA Broadcast G2 cameras paired with B4 box lenses and SMPTE fiber converters to provide power, video, and communication over a single cable. On the control room side, the production relies on an ATEM 2 M/E Constellation HD live production switcher, HyperDeck Studio 4K Pro broadcast decks for recording and the Blackmagic Web Presenter for encoding and streaming.
WCT’s production process is multifaceted, involving the management of multiple live streams in different languages, real-time creation of short-form content, and broadcast deliverables. The content is distributed across more than 35 countries, including the USA, France, the UK, and Australia, ensuring a global reach for the sport.
Content for television and streaming is produced in 1080p, while behind the scenes footage and social media content are all captured in 4K. “This approach balances quality with cost
efficiency, though moving to 4K live broadcasts remains a future goal,” notes Ascarino. Adopting DaVinci Resolve Studio has further streamlined post production, enabling rapid content turnaround across platforms.
Blackmagic Replay is the latest addition to WCT’s workflow, enhancing the team’s ability to create replays and fast-turnaround highlight packages for social media and live broadcasts. “It’s an accessible, cost effective solution tailored to our needs,” explains Ascarino.
At a recent event in Woodstock, Georgia, the replay setup included an ATEM Mini Extreme ISO live production switcher, Micro Converters HDMI to SDI 3G, three MacBook Pros running DaVinci Resolve Studio, and an UltraStudio 4K Mini capture and playback device. “Adding a DaVinci Replay Editor allowed our production team to trim clips, add stingers and broadcast replays in under 20 seconds,” concludes Ascarino. “It integrated seamlessly with our existing tools and exceeded our expectations.”
SMPTE, the home of media professionals, technologists, and engineers, has announced that Richard Welsh has been elected by its membership to serve as SMPTE president. Welsh formally began his term on Jan. 1 after serving as SMPTE executive vice president. His term will extend two years, to Dec. 31, 2026.
“I am honored to have been elected SMPTE president, and I am excited to work with the whole SMPTE family worldwide in supporting progress across our industry,” said Welsh. “Since the Society’s inception more
than 100 years ago, bringing the moving image to audiences worldwide has been at the heart of SMPTE’s mission. With video devices in viewers’ pockets and content available to them on demand, SMPTE’s ongoing commitment to delivering the best possible integrity, quality, and experience in media to global audiences is more vital than ever. I’m thrilled to have the opportunity as SMPTE president to deliver on our vision for the media technology industries of unlimited creativity and experiences for everyone.”
Welsh is currently the senior vice president of innovation at Deluxe. He has been on the SMPTE board for more than 10 years, most recently serving a two-year term as SMPTE executive vice president. He has also served as SMPTE’s vice president of education and as governor for EMEA, Central/South America. Welsh is also on the board of IBC and the chair/co-founder of the volumetric asset management company Volustor.
Welsh began his career at Dolby Laboratories as a film sound consultant, working his way up to director of digital cinema services in the 12 years he spent there. He later became the head of operations of Technicolor’s digital cinema, localization, and sound services department. More recently, Welsh became an associate lecturer at Southampton Solent University while simultaneously co-founding Sundog Media Toolkit Ltd.
“Rich is an innovative thinker with bold plans for the future of the Society,” said SMPTE executive director, Sally-Ann D’Amato. “His unwavering commitment to expanding SMPTE’s impact on broader and more diverse audiences, along with his dedication to cultivating early-career professionals, is truly inspiring. We have a great partnership, and I look forward to collaborating with him as we turn his vision for SMPTE into reality.”
NEP Europe, a division of NEP Group, has announced a new organizational structure for its northern, mid and southern Europe businesses, aimed at leveraging strengths, sharing group resources, and collaborating faster and more efficiently as part of the region’s “One Europe” strategy, as the company claims in a statement.
Effective 14 February, NEP Europe will organize into three groups, or “clusters”, of synergistic markets focused on providing innovative media solutions and local service to its customers:
– Nordics, representing NEP’s operations in Norway, Finland, Sweden, and Denmark.
– Mid and South, representing NEP’s operations in Belgium, Switzerland, Germany, Italy, and Spain.
– UK, Netherlands and Ireland, representing NEP’s operations in the named countries.
NEP Europe President Lise Heidal announced the leaders who will head each cluster:
– Marko Viitanen will lead the Nordics.
– Maarten Swinnen will lead Mid and South Europe.
– Arjan van Westerloo will continue to lead the UK, Netherlands and Ireland cluster.
The clusters will continue to be responsible for project delivery and business operations, customer engagement and growth, and, internally, employee development – all with a focus on collaboration, resource- and skills-sharing across Europe.
Lise Heidal said, “This redesign of our group into three clusters
is an important step toward building a stronger ‘One Europe’ team, enabling us to deliver the best of NEP to both our local and pan-European customers. With this new structure, we will move faster, enhance collaboration, and optimize the sharing of resources across the entire region.”
“I am thrilled to announce the promotion of two exceptional and seasoned country leaders, Marko Viitanen and Maarten Swinnen, who will head the new clusters, alongside Arjan van Westerloo, who is already heading up the UK, the Netherlands and Ireland. Each of these three leaders brings extensive market knowledge and expertise, ensuring we remain at the forefront of our industry and consistently meet our customers’ needs,” she added.
With a growing demand for broadcast-grade capabilities, Vizrt announces the addition of Midwich to its expanding ecosystem of partners.
The partnership is expected to mark a new era for both companies in the Southeast Asian media tech market, supplying productions with technology that includes the TriCaster switcher line, Viz Connect video conversion tools, and PTZ cameras.
The collaboration aims to meet the region’s demand for live production experiences, driven by its young population. According to Redseer, the live experience market in Southeast Asia is on track to nearly double in value by 2028, fuelled by the rising popularity of live content such as eSports, entertainment, and hybrid events.
Based in Singapore, Midwich serves the region as a distribution company, working with over 300 dealers and supplying content creators with Vizrt solutions – including the iconic TriCaster line.
“We recognize how powerfully the TriCaster offers the core of a production solution. At Midwich, we want to distribute the best-of-breed products and solutions to our customers, whichever point they’re at in their live production journey. The partnership with Vizrt means we strengthen our offering, as Vizrt’s suite of products and solutions deliver – from the accessibility of the Go products up to the Woah of the graphics!” says Dan Fletcher, Managing Director, Midwich APAC.
Vizrt’s portfolio of products and solutions aims to cover the needs of content creators, regardless of where they are on their production journey. For instance, the TriCaster Mini S is a software-based live production system, crafted as an entry-level solution for video storytelling.
TriCaster Vizion, on the other hand, is a live production system suited to broadcasters, sports networks, live events, corporations, educational institutions, and more.
“There is an undeniable appetite to expand live production possibilities in Southeast Asia. Vizrt’s versatile video solutions empower compelling and engaging content creation at every level, while also meeting ever-evolving live production needs. Our partners are essential in delivering the latest innovations to content creators of all sizes, ensuring our mutual growth and local support to the end-user,” says Paul Shutt, Head of Channel Sales APAC, Vizrt.
BitFire announced the appointment of Jim Akimchuk as president and CEO. Akimchuk previously served as the company’s CTO. According to a company statement, he and the BitFire team aim to make professional-grade production and delivery more accessible for broadcasters and production companies.
“BitFire was founded to solve a critical problem in broadcasting: bringing the reliability of dedicated fiber and satellite to IP video transmission,” said Akimchuk. “Over the past four years, we’ve built a transmission network that offers synchronized, low-latency live video with the service and support professionals deserve and expect. Now, we’re taking the next step and bringing that same level of broadcast-grade performance, reliability, and control to live production.”
The objective of BitFire is to introduce forward-looking services that simplify the launch of live production workflows, enhance real-time collaboration, and ensure that broadcasters and content creators can affordably scale their operations without compromising on quality or reliability.
“BitFire is here to redefine live production, and our goal is to make high-end production tools as easy to access as our transmission solutions,” added Akimchuk.
We interviewed its CTO, Pavel Pacheco, to explore the broadcaster’s latest technology milestones and upcoming investment plans
Text: Daniel Esparza
Latina is one of the main TV stations in Peru and stands out in the media landscape for its strong technological focus. To adapt to the new audiovisual ecosystem, the channel has evolved from a traditional broadcaster into a multiplatform content house for news, entertainment and fiction, even offering services to other television networks.
All this without neglecting its main goal: creating content that brings the whole family together at home, since linear television remains the main mode of consumption in Peru.
In this conversation with TM BROADCAST, Chief Technology Officer, Pavel Pacheco, explained in detail the company’s most recent technological milestone, which is the implementation of a robotic system for studio cameras in collaboration with Telefónica Servicios Audiovisuales (TSA), and gave us an exclusive preview of some of its investment plans for this year and next year. We also took the opportunity to review the station’s current state of technology.
First of all, I would like you to introduce this television to our readers. What is
Latina TV’s place in the media landscape of Peru and LATAM?
Latina is one of the most important media outlets in Peru. It has been reshaping its traditional broadcasting concept to become a multiplatform content production house for news, entertainment and fiction. The important partnerships Latina has attained in the region have made it possible to produce local fiction of international quality for our own screen, as well as reality-type formats for Chile’s leading channel from our production hub. The development of a big show format such as El Gran
Chef Famosos has positioned Latina as a medium committed to innovation and family integration. It is a product that the whole family watches together at home. Latina has also grown very significantly on digital platforms.
What is the recent evolution of the company from a technical perspective?
Latina has implemented the robotic camera system to enhance its capacity to generate content for the news department. A year and a half ago we implemented a major change in the news and MAM (Dalet), ingest
(Woody), editing (Adobe Premiere) and graphics (Vizrt) systems, redefining workflows to make content generation a more efficient and faster process. We also introduced an adaptation area so that the pieces produced serve not only linear television, but also digital platforms in their different formats. This change enabled us to evolve towards cross-platform content generation. Last year we launched the new advertising
programming and sales system that allows us to have greater integration with both the Dalet content production system and the broadcasting system. For digital OTT video consumption platforms, Latina has been implementing the necessary tools to be able to have a presence on the Web, Mobile App and Connected TV, being aware that we must be on all platforms with the best monetization options.
An important milestone will be 2026, when we have to renew one of the switchers, also thinking about efficiencies for remote production and SaaS models
Please, tell me a little more about the robotic system you have implemented for your studio cameras with the assistance of TSA. What was it about?
Latina acquired from Vinten a solution consisting of three FHR155 heads with IP control, electronic lens control, pan, tilt and zoom; three FP-188 robotic pedestals, two with a maximum elevation of 2.05 m. and one at 2.20 m. with movement in the x-y axes, pneumatic lift column, IP control, infrared sensors for alignment with the
We have already carried out several pilot projects for generation of content with AI, managing to broadcast around 30 short news pieces on linear TV generated from reports
target. Latina also opted for the Vega tracking system for two devices, which yielded positive and interesting results for camera directors.
What improvement needs did you tackle with this installation?
Latina sought to generate greater production capacity without increasing resources in the news and operations departments, while maintaining the dynamics of the screen. We saw that the tracking solution could even reduce the operation to a single person, so that the camera manager could handle the switcher and robotic cameras without requiring an additional operator; furthermore, the idea was to continue using them in combination with the augmented reality solution. It is also important for Latina that the solution can be switched to manual mode, since this facilitates the deployment of special events where more cameras are used on the news set.
What challenges have you faced during the process?
A change of this type always meets some resistance, but we had very good participation from the team of camera directors involved from the moment the solution was decided; therefore, the main challenge was ensuring that the solution fully met our needs. We held many meetings to check the functionality of each item, and Vinten made several visits to solve each issue that was identified. Finally, after a few updates, the robotics and tracking system work properly.
Finally, what benefits have you attained?
The staging of news programs, Sunday programs and sports has been streamlined and made uniform.
Setting up a new show is very simple and it is something that camera directors have managed to learn very quickly and consistently. The efficiency obtained has allowed us to reallocate resources for other content production needs.
I would like to know a little more about the main elements of your sets and facilities. Which manufacturers do you usually rely on? Have you made any recent investments?
For production we use Ross switchers in SDI and Vizrt Multiplay for handling the screens. For the 24/7 digital news signal we use TriCaster with SDI and NDI signals. As for the sets, no investment has been made recently; we have only been gradually substituting LED for incandescent lamps. All our camera chains are Sony.
On what stage of the process are you now regarding transition from SDI to IP?
We have not started the test stage. An important milestone will be 2026, when we have to renew one of the switchers, also thinking about efficiencies for remote production and SaaS models. Some of our Sony cameras already support working over SMPTE ST 2110 IP.
The growth of advertising investment in digital environments has led us to develop a strategy that will allow us to have better monetization options
What about the resolution of your broadcasts?
Generation and production of content for linear TV is in HD. In the case of fictions, 4K versions are available. We do the streaming signals in HD but we are running tests with 4K signals.
How are you embracing Artificial Intelligence (AI) and process automation within your workflows?
We have already carried out several pilots for generating content with AI, managing to broadcast around 30 short news stories on linear TV generated from journalistic reports. Latina is aware that AI will help us extract a greater volume of content from the historical archive material; we are running pilot tests for this and automating metadata generation. For the preparation of audience reports, we have implemented an RPA that helps lighten the load of operational tasks.
What use are you giving to immersive technologies, augmented reality...?
In 2018, Latina implemented augmented reality technology for the FIFA World Cup in Russia, managing to explore new advertising spaces and bring greater dynamism to our screen. Since then, we have continued to use this great component, which has made a difference in our broadcasts of special events. In addition, since 2024 we have been using virtual scenography for most of our digital-native programs, with physical and virtual elements for product placement.
Achieving important partnerships is what will allow us to be more efficient as advertising investment in open television decreases
What have been your most prominent recent innovations in the OTT environment?
Latina initiated a major change in its concept of multiplatform content generation three years ago. We are in the process of implementing platforms to have a greater presence in the OTT environment: we are launching the new Connected TV and Mobile App platforms in the coming weeks. We have already launched a FAST channel and are preparing two additional channels.
The growth of advertising investment in digital environments has motivated us to develop a strategy that will enable us to have better monetization options.
What are your plans for the future? Can you tell us about any project or renovation of interest?
In the near term, we have plans to revamp the coding and playout systems of the linear channel, but we are using this revamp to integrate signal distribution to digital platforms. Content is definitely king in our
industry, and Latina’s plans are to bring it to any device, any time, and across different platforms.
Finally, what direction do you think linear television will take in the future in Peru and LATAM?
Linear TV remains the main mode of consumption in Peru, and Latina content such as El Gran Chef Famosos has shown that families can still gather around a television to watch together, but now the same content is consumed in various formats and on different social
media and platforms. The same thing is happening throughout the region and has gathered momentum in some countries. Achieving important partnerships is what will allow us to be more efficient as advertising investment in open television decreases. Those of us who have traditionally made television know that our content now has to be digital too, even producing digital-native content; and that our local content is what will make us preferred over global content generators. In the case of news, you have to tell the story first, wherever people are, practically “online”.
We are used to seeing major television networks at the forefront of innovation, leading the way in implementing the most ambitious technological upgrades. This is hardly surprising, as they have the greatest resources and reach the largest audiences. In fact, our issues are filled with reports exploring these kinds of case studies from leading broadcasters.
However, from time to time, we come across remarkable examples from smaller broadcasters with exceptional projects worthy of being featured in these pages. One such case is Telemadrid, a Spanish broadcaster focused on the Madrid region, which holds a significant presence in the country's media landscape. Telemadrid has recently undertaken a major technological upgrade, implementing an IP-based infrastructure in its program production area. This is a pioneering project in Spain—one that only a handful of broadcasters worldwide have achieved.
We visited their facilities to gain first-hand insights into the reasons behind this implementation, the main technical challenges they faced during the process, and an overall assessment of the results. To guide us through this journey, we spoke with the key technical experts responsible for its design.
Text by Luis Sanz
Telemadrid has finished implementing the IP infrastructure for its program production area. Given the technology it has deployed and the way it has applied it, the Madrid regional broadcaster is probably the Spanish channel best equipped for studio program production.
For this, it has mainly relied on German manufacturer LAWO and on the integrator Telefónica Servicios Audiovisuales (TSA). The commissioning of the IP infrastructure has also been supported by CISCO’s network solutions.
José María Casaos, Director of Operations and Technology, Telemadrid, comments on the reasons for this important development:
"2024 has been the year of the implementation of Telemadrid's Technological Transformation and Renovation project, which includes many initiatives that not only have to do with technology, but also encompass cultural change and business processes, with the dual aim of, on the
one hand, adapting to the audience's new consumption habits, and on the other, helping to produce content of higher quality, and of course, including innovation and continuous improvement in all our processes.
One of the main projects has been the creation and implementation of a single, converged infrastructure using IP technology, with all its benefits in terms of improved quality and cost savings in productions, as well as greater redundancy and reliability of infrastructures, which will undoubtedly contribute to the sustainability of the public service.
Our decision to move to an IP technology not only stems from the need to undertake a technological renewal, but also brings new functionalities, capabilities and improvements that fit directly in the digital transformation strategy.
When we planned our technological renovation, it was clear to us that our main priority was to maintain the current service with more than 12 hours of live
broadcasting from Monday to Friday, which cannot be interrupted. Nowadays, program production needs not only require managing the audio and video of the sets that make up the main signal of the program, but also many external sources, graphics, virtual, augmented and extended realities, etc., which also feed LED screens on the set, as well as the different broadcasts in other distribution windows.
In this instance, we have moved towards SMPTE 2110 video technology protocols and the DANTE audio protocol (of which we already had previous experience at the Onda Madrid radio station), and, in parallel, we will continue to rely on other IP protocols that we currently use in all types of connections and video conferencing, such as NDI, SRT, etc.
The idea was to have two IP controls, which we have called A and B (unlike the previous ones, numbered 1 to 4) that could provide a service to any of the four television sets, but also to the entire production center where any equipment such as cameras or microphones can
be connected and, of course, anticipate future workflows with remote production capacity.
Beyond this improvement in our operations, this technology will allow us to add a new layer of innovation that will open the door to new features. For example, improved quality in our productions with new formats (HDR, 1080p, etc.), or the new workflows we are implementing based on the
Epic Games Unreal engine, which enable us to work with virtual reality, augmented reality and extended reality, all of which will grow with this transition to IP. This infrastructure allows us not only to manage the video and audio layer, but the entire data and control layer, which facilitates all processes."
Having covered the reasons, it is fitting to look in greater detail at the solutions Telemadrid has adopted to improve program production and greatly facilitate its execution.
For this purpose, we have had the information provided by the CTO José María Casaos himself, by Pablo de la Peña, Deputy Director of Engineering and Maintenance and by the super users of the IP production platform.
For the transition to IP and in the course of the latest technological renewal in Telemadrid, Sony cameras and mixers have been installed together with the HIVE studio program production and automation system, Yamaha digital audio consoles, the Avid Maestro graphics platform and its iNews writing tool (NRCS), or the monitoring system with NEC.
Telemadrid’s production area now has two controls, A and B; two main sets, 1 and 2; two auxiliary sets; 18 IP cameras for all sets; 18 CCUs (Camera Control Units); 18 RCPs (camera Remote Control Panels); two AEQ Systel systems with 16 possible communications; and 10 monitors in each control, with the possibility of presenting up to 64 signal screens on each of them.
What does the change from the SDI world to the IP world entail?
Let's look at a camera chain. Under the SDI structure, the fiber cable that connects the camera head with the control -where its CCU is located- contains all the signals that are handled in both directions: video, audio, intercom, tally, teleprompter, program return, control signal of the camera head from the CCU.
In an IP structure, all resources -video, intercom, tally, teleprompter, program return, camera head control signals, CCU, RCP (Remote Control Panel)- are associated with an IP address. Those resources do not belong to one particular camera only: they can be assigned to any other camera.
And the same happens with the rest of the resources in the production area: video mixers, video monitors, audio tables, graphics, video walls, Systel communication
system, HIVE, UMD, etc. All resources have an associated IP address and all are assignable.
Conclusion: in the SDI world everything is rigidly associated, whereas in the IP world everything is assignable according to the relevant needs; and that is where CISCO and LAWO come into play.
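The contrast can be sketched in a few lines of Python (all device names and multicast addresses below are invented for illustration, not Telemadrid's actual plan):

```python
# Toy model of the SDI vs IP contrast described above: under SDI the
# camera chain is fixed wiring, while under IP every resource is an
# addressable entry that can be re-routed at will.
resources = {
    "CAM-01/video": "239.1.1.1",   # hypothetical multicast addresses
    "CAM-01/tally": "239.1.1.2",
    "CAM-02/video": "239.1.2.1",
}

routes = {}  # receiver name -> (resource name, multicast address)

def route(receiver, resource):
    """Subscribe a receiver to any published resource."""
    routes[receiver] = (resource, resources[resource])

route("MONITOR-1", "CAM-01/video")
route("MONITOR-1", "CAM-02/video")  # reassigned without any rewiring
print(routes["MONITOR-1"][0])  # → CAM-02/video
```

The point of the sketch is the last two lines: a receiver is rebound to a different source by changing an entry in a table, not by moving a cable.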
CISCO supports all the network electronics for the IP infrastructure, which allows broadcast, transport and distribution of all multicast flows; and LAWO has the core of the IP installation, with its VSM orchestrator that manages the control of IP flows. CISCO carries out the transport of the packaged data and LAWO is in charge of "talking" to the different devices, to make the crossing points between transmitters and receivers.
In an IP structure (SMPTE 2110), the transport of the
media network is carried out with fiber and the data transport with copper, category 6 Ethernet cable in Telemadrid.
CISCO’s installed IP media infrastructure is fully redundant, structured in two identical networks, the blue network and the red network, to which all sources (cameras, video mixers...) are connected. The system is completed with the IP Fabric for Media solution, as well as with the Cisco Nexus Dashboard Fabric Controller, responsible for providing a real-time picture of the network and devices.
Within an IP structure, every resource has an IP address, and in a production area each resource has two: one for the media and one for the control of the equipment. Telemadrid set up an IP address range that was split: one part for the red IP network and another for the blue IP network (the media addresses), plus another part for the devices’ control IPs.
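Such a split can be expressed directly with Python's standard `ipaddress` module. Telemadrid's actual addressing plan is not public, so the block and the resulting sub-ranges below are made-up examples of the idea:

```python
import ipaddress

# Carve one private block into red media, blue media and control
# sub-ranges (illustrative values only).
block = ipaddress.ip_network("10.60.0.0/16")
red_media, blue_media, control, spare = block.subnets(prefixlen_diff=2)

print(red_media)    # → 10.60.0.0/18
print(blue_media)   # → 10.60.64.0/18
print(control)      # → 10.60.128.0/18
```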
In Telemadrid, TSA has set all the IP, media and control addresses for all devices. A piece of equipment is configured by entering its control IP address in a web browser, if that is how the device works, as with Sony equipment, for example. Other devices have a proprietary application that is installed to access and control them via their IP address.
A video signal has an associated IP address, as does an audio signal. These media IP addresses are managed through files called Session Description Protocol (SDP) files. An SDP file tells the receiver where a stream is to be found: to deliver a 2110 signal to a receiver, the SDP file is sent, the receiver tells the transmitter that it wants that signal, and the signal travelling over the media network is subscribed to the receiver.
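For readers unfamiliar with the format, a minimal SDP file for an ST 2110-20 video stream looks roughly like this (addresses, ports and names are illustrative, not taken from Telemadrid's installation):

```
v=0
o=- 1234567 1 IN IP4 192.168.10.5
s=CAM-01 program video
t=0 0
m=video 50000 RTP/AVP 96
c=IN IP4 239.1.1.1/64
a=rtpmap:96 raw/90000
a=fmtp:96 sampling=YCbCr-4:2:2; width=1920; height=1080; depth=10; exactframerate=25; colorimetry=BT709; PM=2110GPM; SSN=ST2110-20:2017
a=source-filter: incl IN IP4 239.1.1.1 192.168.10.5
a=ts-refclk:ptp=IEEE1588-2008:01-02-03-04-05-06-07-08:0
a=mediaclk:direct=0
```

The `c=` line carries the multicast address the receiver must join, the `m=` line the port and payload type, and the `a=fmtp` line the video format; the PTP reference on `a=ts-refclk` is what keeps all 2110 streams in the plant synchronized.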
LAWO communicates directly with the equipment; no application is required, and it usually does so
through the NMOS protocol for multicast flows. Each piece of equipment has a public NMOS endpoint where all its senders and receivers are published. LAWO takes the information from there, or from the SDP files published by the equipment for its media, and configures it. For devices whose protocol is not integrated into their own server, LAWO uses a specific server: the Gadget Server.
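The shape of such an NMOS exchange can be sketched as follows. This is a simplified, self-contained illustration, not LAWO's implementation: the sender listing mimics an IS-04 style response (in a real system the SDP text is fetched from each sender's `manifest_href`; here a tiny inline stand-in keeps the sketch runnable), and the patch mimics an IS-05 style connection request:

```python
import json

# Hypothetical listing of a device's published senders.
senders_json = """
[{"id": "a1b2", "label": "CAM-01 video",
  "manifest_href": "http://192.168.10.5/x-nmos/node/v1.3/senders/a1b2/sdp",
  "sdp": "v=0 (stand-in for the real SDP text)"}]
"""

def find_sender(senders, label):
    """Return the sender entry with the given label, or None."""
    return next((s for s in json.loads(senders) if s["label"] == label), None)

def build_patch(sender):
    """Connection request: hand the receiver the sender's SDP and
    activate the route immediately."""
    return {
        "sender_id": sender["id"],
        "transport_file": {"type": "application/sdp", "data": sender["sdp"]},
        "activation": {"mode": "activate_immediate"},
    }

cam1 = find_sender(senders_json, "CAM-01 video")
patch = build_patch(cam1)
print(patch["sender_id"])  # → a1b2
```

The orchestrator's job reduces to exactly this loop: discover what each device publishes, then tell a receiver which sender's stream to subscribe to.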
Within Telemadrid’s production area there are still devices that generate SDI signals which, in order to be controlled by LAWO, need to be converted to 2110 IP; the technicians in charge call this “IPrizing”. To achieve this, LAWO has, within the V_matrix product line, the C100: a processing module that can be configured to convert SDI to IP, IP to SDI or IP to IP signals, convert HD 1080i signals to 4K, and even operate as a module feeding signals to a multiviewer.
VSM (Virtual Studio Manager) is a broadcast controller and function orchestrator: a control and workflow solution for IP broadcast that controls all broadcast equipment within its scope through visual interfaces (panels) and can communicate with the transmission protocols most used in the broadcast industry.
VSM has two servers working in parallel; if one is lost the other automatically takes charge, so every time a change is made it has to be saved and transferred to the two servers, as they have to be identical for the system to work.
VSM facilitates dynamic allocation of resources, starting with the cameras themselves: the CCU, the camera head, the RCP, the teleprompter, the intercom, the tally... Everything is programmable, addressable and assignable, as are all the graphics resources produced by Avid Maestro, with 2110 IP carrying each signal over the network electronics.
An IP camera can be assembled from the IP addresses of any of those resources.
Within the VSM configuration, the concept of a virtual camera is born: a black box that is filled with whatever resources we want to assign to it from the pool available in the area. It is a shapeless camera that has all the functionalities you want it to have.
In the concept of virtual camera, for example, camera 1 in the Control will be seen on monitor 1 on the monitoring bridge, but it does not always have to be the same CCU;
There are eight electronic technical officers in Telemadrid who have had to become experts in the new technology -the Technical Studio Operators (TSOs).
it can be assigned any CCU to act as camera 1 in video, and behind that it is also assigned the teleprompter for the control to which it is allocated, the relevant production orders (so that the director, when speaking, addresses that specific camera), the video and audio return that the operator has below, the tally (so that it lights when the mixer cuts to it), the intercom, etc. In a word, camera 1 is virtual: it does not exist until the resources that make it work as a camera are assigned, depending on the needs defined for the production of the program. Controls A and B have 10 virtual cameras each.
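The virtual camera idea described above can be modeled in a few lines. This is a conceptual sketch only; the role and resource names (`CCU-13`, `TALLY-B1`...) are invented, and the real VSM configuration is of course far richer:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualCamera:
    """A named slot that only becomes a working camera once concrete
    resources are bound to it."""
    name: str
    resources: dict = field(default_factory=dict)

    def assign(self, role, resource_id):
        """Bind one resource (CCU, tally, prompter...) to this slot."""
        self.resources[role] = resource_id

    def is_operational(self):
        # Usable once the essential roles are filled.
        return {"ccu", "tally", "intercom"} <= self.resources.keys()

cam1 = VirtualCamera("CAM 1")
cam1.assign("ccu", "CCU-13")        # any CCU can back camera 1
cam1.assign("tally", "TALLY-B1")
cam1.assign("intercom", "IC-07")
cam1.assign("prompter", "CUE-2")
print(cam1.is_operational())  # → True
```

Until `assign` has been called, the camera "does not exist", exactly as the article puts it; swapping the CCU is a one-line rebind rather than a rewiring job.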
There are eight electronic technical officers in Telemadrid who have had to become experts in the new technology -the Technical Studio Operators (TSOs), who are responsible for the allocation of resources in each Control and set, their monitoring, and detection of issues and troubleshooting at first instance.
The TSOs have a management panel in each of the two Controls, from which they can access different configurations applied to each Control.
Image 6 shows the TSO panel for Control B. It shows the configuration options that can be accessed: control assignment, virtual camera assignment, under-monitor displays, IP signal monitoring, SDI signal monitoring, storage groups, PRESETS, video mixer control, target source and NEC monitors. Within each panel configuration, several available actions and controls appear.
Precisely Image 6 shows us the most important configuration of the panel: PRESETS. Within this setup we can see:
> Access to virtual camera presets.
> Access to monitoring presets, with three options: general, weekend monitoring and football monitoring, depending on the TV channel's programming.
> Monitoring configuration in production, sound and CCU Lighting.
> Assignment of cues (teleprompters) for the PTZ camera located in the newsroom, the set, and the vMix system used for video communications with the outside.
> Configuration of events, when in progress.
> Allocation of labels for events.
> Actions on the Systel 12 or 16 communication systems.
> And finally SET CROSSOVER, which deserves a more detailed explanation below.
The other essential figure in IP program production is the “superuser”. Telemadrid has two of them; they worked with TSA from the beginning of the system’s implementation, are the ones who really know all the inner workings, and are able to design and implement the VSM panels. In addition to working at the Control A and B consoles, they have another position in the superuser room.
Image 7 shows us a view of the General Superuser Panel, where the first-level configurations made with VSM are found.
In Controls A and B, apart from the TSOs and superusers, only the operators who send SDI signals to the video walls have a VSM panel, through which those walls are controlled by VSM.
This term, coined by Telemadrid technicians, refers to the representation of VSM’s large matrix of inputs and outputs, which resembles the game. It covers the entire set of resources available in the program production area, with their inputs and outputs, in multiple representations. Image 8 shows one of them, in which the crossing points between CCUs and RCPs are established.
As can be seen in the TSO panel (Image 6), there are two SET CROSSOVER buttons, 1-MD SET 1 and 2-TN2 SET 2. Each button transfers to Control B either all the resources allocated in Set 1 for the MD (Madrid Directo) daily feature program, or all the resources allocated in Set 2 for the TN2 (Telenoticias 2) daily news program.
The set crossover works like this: a program, Madrid Directo, is being made in Control B with Set 1. Immediately afterwards, the news program TN2, produced in Set 2, will be broadcast, and all the resources assigned to Control B that are in Set 1 must be changed, including the virtual cameras, which no longer use CCUs 1 to 5 but CCUs 13 to 18. With this button, all the resources of the second set are transferred to Control B: cameras, microphones, set monitoring, program returns, signals on the monitoring bridge, some things running over IP, others over SDI, others over DANTE, etc. Pressing a single button on the VSM changes everything.
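A minimal sketch of that one-button crossover: a preset maps every role in a control room to one set's resources, and switching presets rebinds everything at once. Preset, role and resource names are illustrative, not Telemadrid's actual configuration:

```python
# Each preset bundles all of one set's resources.
PRESETS = {
    "SET 1 (MD)":  {"cam_ccus": ["CCU-1", "CCU-2", "CCU-3", "CCU-4", "CCU-5"],
                    "mics": "SET1-MICS", "videowall": "WALL-1"},
    "SET 2 (TN2)": {"cam_ccus": ["CCU-13", "CCU-14", "CCU-15",
                                 "CCU-16", "CCU-17", "CCU-18"],
                    "mics": "SET2-MICS", "videowall": "WALL-2"},
}

def crossover(control, preset_name):
    """Rebind all of a control room's roles to one set's resources."""
    control.clear()
    control.update(PRESETS[preset_name])
    return control

control_b = {}
crossover(control_b, "SET 1 (MD)")   # Madrid Directo on air from Set 1
crossover(control_b, "SET 2 (TN2)")  # one action moves everything to Set 2
print(control_b["cam_ccus"][0])  # → CCU-13
```

Everything the article lists (cameras, microphones, returns, videowall) is just another key in the preset, which is why one button press suffices.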
Taking advantage of the commercials broadcast between programs, part of the staff rotates; some positions are “hot chairs”, as is the case with the
production staff. Editors have two spots in the control, one for each program. The editors of TN2 are already seated before the other program (MD, for instance) ends, testing the resources of Set 2 and even talking to the presenters. The same happens with the production staff, who are also seated before MD finishes and are able to test the live shots inserted in TN2, the Systel
This operational ease changes the paradigm of program production. It has never been more effective to work in a TV studio when it comes to producing live shows.
calls, the audio, the microphones... The sound operator is the same for MD and TN2, the video mixer is also the same, and the camera control and lighting remain unchanged; it is also necessary to switch the management of the videowall from one set to the other.
This operational ease changes the paradigm of program production. It has never been more effective to work in a TV studio when it comes to producing live shows.
For ingest, all incoming signals are SDI, from the UTAH matrix. They are recorded and the resulting files are sent to HIVE for later editing in the relevant server format. HIVE produces SDI signals that are converted to 2110 IP in order to enter the production controls.
Of special interest is the reception of external signals from 4G/5G backpacks, in SDI format, which enter the UTAH matrix (up to 50 can be received) and are relayed to the production controls, after passing through gateways for IP conversion. The UTAH matrix reserves 30 SDI outputs (2 x 15 for the two controls) with their corresponding gateways. Once the outdoor environment for a program being broadcast is determined, production control sends a request to central control to assign in the mixer the backpacks in use to specific inputs from outside.
On the other hand, the broadcast continuity control receives the SDI
signals from the central control through fiber optics.
At the beginning of the project, Telemadrid had a DANTE digital network for Intercom, shared with radio station Onda Madrid and supported by CISCO network electronics, with very good performance.
Therefore, it was decided to keep the DANTE network for the intercom and expand it to the rest of the audio in the production area: Yamaha digital consoles, Sennheiser microphones, audio monitoring and communications with the outside. To connect the DANTE network with the 2110 environment, LAWO Power Core modules have been installed to convert DANTE to 2110 IP.
The system for audio communications with the outside is the AEQ Systel, in 12- and 16-call versions, used to give instructions by phone to camera operators and to the editors doing live broadcasts on the street. Lines 1 to 8 are for editors; these telephone lines carry the N-1 return for the outside plus an intercom key so that both editing and production can address them. Calls 9 to 16 are for communication with camera operators.
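The line allocation just described can be captured in a small lookup helper (a hypothetical illustration, not AEQ's API):

```python
def systel_line_role(line):
    """Map an AEQ Systel call line to its role as allocated at Telemadrid:
    lines 1-8 for editors (carrying the N-1 return), 9-16 for cameras."""
    if 1 <= line <= 8:
        return "editor"
    if 9 <= line <= 16:
        return "camera"
    # The Systel versions in use handle 12 or 16 calls in total.
    raise ValueError("line outside the 1-16 range used here")
```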
These systems are not directly included in the IP network, since they belong to the DANTE domain, but VSM has an interface to the DANTE Domain Manager in order to change routes from one environment to the other.
In regard to augmented reality, VSM does not take any direct action: it is the MAESTRO-controlled AVID machines that generate the augmented reality. These renders are delivered to LAWO within an SDI video signal, which a gateway converts to 2110 for management and assignment.
José María Casaos offers us a final reflection on what this profound technical and business change he has launched entails for Telemadrid.
Telemadrid has embarked on its IP stage understanding it as a transition, since there is a great deal of equipment that needs to be managed in a simpler, more efficient way. Among these devices, for example, there is the LED screen in Studio 2 being used by the news programs of Madrid's regional TV, or the new LED screen that has been installed on the set in Studio 1 to provide services to all the other programs.
Telemadrid is currently working with a system that provides "much more ease" in the configuration of elements, as well as "flexibility" in each process. This leap to IP, therefore, has been accompanied by an organic transition process with many more advantages than drawbacks.
IP, beyond the mere business convenience at the time of Telemadrid's digital transformation, has brought tangible benefits both to viewers and to Telemadrid's own technical staff.
With regard to the visual quality of the images reaching the audience, the improvement has been made possible thanks to the extraordinary work of the lighting staff, camera control and other teams. Beyond working with this new network, we have renovated all the cameras and moved our lighting infrastructure to LED, a change that has enabled us to control color temperature and other technical parameters in detail, and also to create a new interplay between the production team, mixers, photography direction and camera control.
In this sense, a significant improvement has been seen in the workflows for graphics, virtual and
augmented reality, as well as all the processes to feed images and videos to the LED screens in the sets, which are nowadays a key element in the production of its programs.
IP has also been the ideal opportunity to redefine processes, achieving milestones such as an improvement in the capture of LED screens, new scenic movements and unprecedented decisions on direction and production. Thanks to this teamwork, we are already on an equal footing with nationwide broadcasters in terms of quality. Viewers can already perceive a substantial improvement in the quality of the broadcasts in all programs, and this is not only thanks to the IP infrastructure, but to all the elements that we have taken care of in detail. The operating benefits are especially noticeable.
Telemadrid has embarked on its IP stage understanding it as a transition, since there is a great deal of equipment that needs to be managed in a simpler, more efficient way.
Above all, the flexibility provided by IP, together with the VSM broadcast controller, makes it possible for each program to have the desired signal configuration; the equipment can be scaled to address especially complex productions, and each operator can select through the system the elements they may access for each production, as well as associate the production controls with any studio or even combine any sound and image source within the infrastructure. This was a basic concept of the design that we initially envisioned in the engineering team: IP should allow us to adapt to the design of each production with much greater flexibility.
Now almost everything is preproduction and live broadcasts. The 14 postproduction and graphics operators have all become accomplished graphic designers.
Running in parallel to the IP infrastructure commissioning process, other technology projects have been undertaken, such as:
> Implementation and commissioning of the project to replace the prompter system for the studios: 6 licenses for redundant operation and 15 17-inch monitors, operating both fixed and wirelessly in any of the studios.
> Improving the reliability of the technical intercom system by acquiring a backup matrix.
> Purchase of a backup lighting control console to increase the reliability of lighting systems.
> Installation of a large LED screen on Set 1, comprised of 6 screens: 1 fixed and 4 mobile (motorized), plus 1 on the floor that enables Extended Reality (XR) productions.
> The headend and multiple DTT services have been tendered and awarded for the dissemination of Telemadrid and Onda Madrid programs. These services have adapted to the new technological situation, marked by the disappearance of SD services, and RTVM has incorporated an HD HDR broadcast aimed at improving viewing quality for users with state-of-the-art TV sets.
> Acquisition of an automated audiovisual production system for Studio 3: three robotic cameras and a management system that detects which person is in the spotlight and chooses the most appropriate camera shots according to predefined criteria. This system allows content for the digital area to be generated in an easy, economical way.
IP has also been the ideal opportunity to redefine processes, achieving milestones such as an improvement in the capture of LED screens, new scenic movements and unprecedented decisions on direction and production.
> Implementation of a low-latency video and audio encoding and transmission system over the Internet (SRT technology) that allows return and teleprompter signals to be sent to outdoor production media such as OB vans, ENG crews and others.
> Renovation of the sound booth for external events, with a new installation based on SMPTE 2110 technology and HD.
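As a hedged illustration of the SRT item in the list above, a return or teleprompter feed could be pushed with a command along these lines, assuming an ffmpeg build with libsrt support; the host, port and latency values are placeholders, not Telemadrid's actual configuration:

```python
import subprocess

def build_srt_send_cmd(input_uri, host, port, latency_us=120_000):
    """Build an ffmpeg command that sends a low-latency SRT stream
    (e.g. a return or prompter feed) to a remote receiver in caller mode.
    ffmpeg's srt protocol expresses latency in microseconds."""
    return [
        "ffmpeg", "-re", "-i", input_uri,
        "-c", "copy", "-f", "mpegts",
        f"srt://{host}:{port}?mode=caller&latency={latency_us}",
    ]

# Example invocation (not executed here):
# subprocess.run(build_srt_send_cmd("prompter.ts", "ob-van.example", 9000))
```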
With this step, Telemadrid now faces a new era that will be more flexible for technicians and much more attractive for viewers. It will allow us not only to reach the audience with better quality and produce with simpler internal workflows, but it will also greatly facilitate our ability to innovate in all areas of television. We have made this investment, one that will allow us to operate better over the next decade while improving the day-to-day workflows for our current production.
The AV revolution has only just begun
The show closed a record edition, with more than 85,000 visitors, demonstrating the growing interest in a thriving audiovisual market amidst a process of transformation in full swing.
By Daniel Esparza, Chief Editor of TM Broadcast
From large venues to our own homes: the digital transformation of all types of spaces is a booming market, and the latest edition of Integrated Systems Europe (ISE) proved it. Europe's reference audiovisual trade show brought together the main players in the sector at Fira de Barcelona on 4-7 February and achieved a record edition in terms of number of visitors, number of booths and size of the exhibition area.
In total, ISE brought together 85,351 attendees, 15.5% more than last year, and had the participation of 1,605 exhibitors, 13% more than in 2024, who distributed their booths in 8 technological halls, for a total area of 92,000 square meters (up 20% as compared to the previous edition). To accommodate the 330 exhibitors attending for the first time, the show expanded the space and made available a new hall 8.1, where the Esports Arena, one of the great novelties of this edition, had an outstanding presence.
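As a quick sanity check on those growth figures, the implied 2024 numbers can be derived directly (small differences are due to rounding in the reported percentages):

```python
# 2025 figures as reported by ISE
attendees_2025, exhibitors_2025, area_2025 = 85_351, 1_605, 92_000

# Back out the implied 2024 baselines from the stated growth rates
attendees_2024 = round(attendees_2025 / 1.155)   # +15.5% year on year
exhibitors_2024 = round(exhibitors_2025 / 1.13)  # +13%
area_2024 = round(area_2025 / 1.20)              # +20%

print(attendees_2024, exhibitors_2024, area_2024)
```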
TM Broadcast magazine attended the event to cover the main developments and explore the most outstanding technological trends, both in audiovisual system integration and in content production, which could be said to be the trade show's two pillars, within a technological context that tends towards convergence and synergies. All this will be covered throughout this article, based on the content of the numerous conferences held during the trade show, as well as on conversations held with various exhibitors.
First of all, it was clear at the fair that the market, driven by an unstoppable technological evolution, is moving forward faster than ever. With the pandemic as a turning point in the digitization of spaces due to the demands that arose from physical isolation, the audiovisual market has followed a path that has been consolidating in recent years, with artificial
intelligence (AI) as the ultimate driver.
Every environment is liable to be digitized, and this affects leisure, the workplace and homes. Therefore, Smart Homes stand as a growing business opportunity for the future, as was noted during the fair.
Focusing on the AI revolution in the audiovisual industry, a recurring element in conversations -as is to be expected at a technology fair- there was a certain emphasis on differentiating its impact on the technical and the creative sides, although the limits between the two are rather blurry. Regarding the former -technical issues- there is consensus on the undoubted benefits AI brings to everything related to the automation and optimization of processes, saving time and costs. As for its implications for creative work, although its impact is also evident and it facilitates the work, there is more debate, and the issue raises more dilemmas about what its scope should be, especially with regard to content creation.
In a way, the explosion of AI and its relentless advance mean that limitations are in many cases no longer strictly technical but of another kind, having more to do with human decision-making than with the available technology. An example that came up in one of the conferences is the recent simulated Porsche commercial “The Pisanos”, by filmmaker László Gaál, which imitates a large film production shot in Italy, making-of included, in which all the scenes -even the cameras used- are unreal, generated by means of artificial intelligence. The result is almost indistinguishable from a real production and took just three weeks of work.
The market is thus witnessing an acceleration in technological development, which forces companies in all types of segments to significantly increase their efforts to keep up with new developments. It also forces them to separate what is a priority from ancillary elements so as not to fall into a state of technological overexcitement, as a few speakers highlighted. Likewise, all this makes it more difficult to draw up forecasts for the future, which made the speakers cautious when looking beyond the coming months.
The application of AI in sports deserves a separate mention, as it is an area that acts as a spearhead
of innovation in the sector. Apart from major sports, where the use of AI is more widespread, the fair highlighted the benefits of applying it to smaller teams or lower divisions of the big sports, as well as to other disciplines where it has not yet been implemented on a large scale and where it can make content easier to access or help reach new audiences, with the benefits that this entails for advertising.
Focusing on the AI revolution in the audiovisual industry, there was a certain emphasis on differentiating its impact on the technical and on the creative sides
In this sense, the speakers who presented this topic addressed the potential of AI beyond mere live broadcasts, since it opens the door, for example, to content that had not been generated until now, such as customized highlights of a certain player or the best plays of any sport. In the case of more niche sports such as handball, hockey or water polo, this possibility of presenting new content in an innovative way can extend reach to a new audience that at first may not be interested in watching a full match, but rather the most striking plays, for example.
Likewise, AI also allows the coaching staff to access a series of real-time statistics
on players and matches that had not been available until now, or to which only major teams or sports had access, thus contributing to the aforementioned democratization and individualization of content, two elements that were stressed throughout the event.
It can also be used as a tool that facilitates the recruitment of talent, for example in the case of a team seeking to stand out from the rest through the implementation of a series of AI-related technological innovations. Regarding the trends that will predictably shape the future in this area, everything points to any sports facility -large or small- having its own content creation equipment, according to the speakers.
One of the trends that has been consolidated with the latest market advances -although a subject of debate for years- is the convergence of the different technological areas. At a general level, there is first of all a certain convergence of all businesses -from social media to OTT platforms to music or food delivery apps- while all these segments compete for the user's digital time. In the words of one of the speakers, advances in AI, cloud and other technologies have led us to a kind of digital tsunami.
On the other hand, we can refer here to the IT/AV partnership, which is pressing in integrations such as smart workplaces, or to the growing symbiosis between the broadcast/AV Pro worlds.
The latter was an especially recurring topic for discussion during the fair. In recent years, broadcasting has built bridges of understanding regarding technology and terminology with the AV world, due to the growing level of professionalization of the latter and its search for higher levels of quality, so that various manufacturers that were traditionally focused on broadcasting have been opening business avenues
into other audiovisual areas in recent years.
It is also worth noting the increasingly common trend of television broadcasters delegating part of their content grid to external production companies that also work in sectors such as cinema or advertising, which has likewise contributed to uniting these worlds and to making the different segments less compartmentalized.
Beyond the broadcasting of concerts, conferences, etc., a buoyant example of this can be found in esports and the gaming industry, which had a prominent role at the fair this year, as mentioned. This sector falls strictly outside the traditional broadcast world, but demands qualities comparable to a traditional sports production, and it is therefore attracting the professional talent of engineers from the TV industry.
Broadcasting has built bridges of understanding regarding technology and terminology with the AV world, due to the growing level of professionalization of the latter
From screens… to off-screen content
At the fair, the prominence of screens as an outstanding attraction and the main driver of market evolution was clearly noticeable. It was enough to walk through the different halls to see it.
Based on the impressions collected, it is worth highlighting the change in the mindset of viewers, who increasingly demand that ‘something must happen’ when contemplating a screen. Screens thus stand as an object of interaction -beyond mere contemplation- with a tendency towards customization based on physical, gender or behavioral traits, to mention just three examples, a circumstance that also opens up new business avenues for advertising.
Every environment is liable to being digitized, and this affects leisure, the workplace… and homes. Therefore, Smart Homes stand as a growing business opportunity
And from the screens... to the content outside them, a trend that emerged, in this writer's opinion, as one of the most noteworthy of the event. If the single screen gave way to multi-device consumption, it is now increasingly common to ‘free’ content from the limits of the screen through immersive experiences or projection mapping, for example. A paradigmatic case is museums, a sector that especially prides itself on being a testing ground for this type of audiovisual innovation.
Here the trend, according to some speakers, is to find a balance between immersive and physical, so that viewers live a more immersive experience than in a traditional museum without neglecting the delight of contemplating a work of art without technological artifice. In this sense, the risks of neglecting the story in favor of technological deployment were also addressed, taking into account that in the new narratives the customization of content and interaction prevails beyond the traditional scheme of introduction, climax and denouement.
They also agreed that all these new audiovisual projects -for example, a mapping projection on the façade of a historic
building- require large costs that do not always fit in the allocated budget.
Outside the world of museums, if there is any sector where the aforementioned disruption caused by Covid-19 was evident, it was smart workplaces, which experienced a peak in demand during the health crisis that is now stabilizing. The key element of this type of ecosystem is usability: these are spaces where ease of use is paramount and where device and platform compatibility is key. Another trend is cybersecurity, as the notion takes hold that -just as due attention is paid to the physical security of spaces- threats in the digital space must also be addressed.
The pandemic also affected, albeit in the opposite direction, the staging of massive live events, concerts in particular. This forced stoppage has meant that in recent years, once normality has been restored, tours have experienced a new golden era.
An illustrative example of this, addressed at the fair, is Adele's residency in Munich, which became a milestone in the music industry. The artist gave a series of 10 concerts last August on the largest mobile stage to date -built from the ground up in just 22 days- which included a massive 220-meter-wide LED display. A total of 730,000 spectators attended, and the event became a mini festival tailored to the singer, a rethinking of the traditional model of touring through several cities, although it is a formula only suitable for well-established stars.
Finally, it is necessary to mention in this review on the main trends of the last edition of ISE the issue of sustainability, because, apart from the obvious benefit for the environment of observing an environmentally-friendly policy, companies in the audiovisual sector must also pay attention to this area for other reasons, as highlighted in several conferences.
First of all, because of the restrictions imposed by the relevant legislation and authorities, which prevent companies from looking the other way. In addition, applying sustainability criteria opens up new business opportunities, for example with customers who take this into account when choosing a supplier or manufacturer. It also enables companies to attract talent, differentiate themselves from the competition and improve their brand image. The main challenge here is that sustainability is often not conceived as a business priority.
The last edition of ISE made it possible, in summary, to get an x-ray of the current state of a sector undergoing change, one that is advancing at an increasingly accelerated pace and does not escape the general uncertainty generated by the shock waves of technological innovation, with AI as the main source of disruption.
Text: Daniel Esparza
What will virtual productions look like in the future? We certainly can’t predict it, but if there’s one company with the authority to guide us on the path this technology will take from now on, it’s Dimension. Over the years, this company has evolved from an immersive content studio to a comprehensive virtual production creative partner across various sectors, including film, entertainment, advertising, live events, and emerging digital platforms.
In this interview, its co-founder and CTO, Callum Macmillan, discusses the key technical challenges they face with these types of productions, their most cutting-edge implementations— such as their custom-modified Unreal Engine environment or their ‘Virtual Humans’—and answers all our questions about the future of this technology.
First of all, how has the company evolved in recent years?
In recent years, Dimension Studio has evolved from being an immersive content studio to a comprehensive virtual production creative partner for productions. These significant changes have seen us expanding
both our technical capabilities and our market reach, and have driven investments both in technology and in talent. We're now in a position where we're delivering the most sophisticated solutions across filmed entertainment, advertising, live events and emerging digital platforms.
As a pioneering studio in the implementation of virtual production, what major advances have you observed with this technology? What types of innovations are possible now that were not feasible a few years ago?
The most recent advancements in real-time rendering have been transformative. Right now, we can achieve photorealistic ICVFX results in realtime that used to need hours of post-processing. Although virtual production is not just about LED volumes, LED wall technology has matured significantly, enabling better color accuracy and higher resolution. We're now on our third iteration of panel technologies and second generation of image processing hardware, which is having a huge impact.
Integration between physical cameras and virtual environments has become seamless, allowing for more natural camera movements and creative freedom.
We use all camera and object tracking systems, from large-scale optical to marker-based (with SLAM) and even ultra-wideband radio technologies.
Perhaps one of our most exciting innovations has been combining Virtual Humans (either puppeteered metahumans or volumetric video captures), with ICVFX virtual production environments, creating hybrid experiences that weren’t possible even a few years ago. Our recent project Evergreen
demonstrates really well how we can create directable mid-ground characters that are fully virtual. Populating real-time VP environments with virtual humans to bring life to the mid and background elements of a scene is also something we’ve recently done for the Colosseum shots that Dimension worked on for the Amazon Prime series ‘Those About To Die.’
What are the main technical challenges and difficulties you’ve
faced in recent years when developing virtual productions?
One of our primary challenges has been pushing the boundaries of real-time rendering while maintaining in camera visual effects (ICVFX) quality output. Working on productions like Time Bandits (Apple TV+) with Taika Waititi and Here with Robert Zemeckis has required us to develop custom solutions.
Right now, we can achieve photorealistic ICVFX results in realtime that used to need hours of post-processing
One of our primary challenges has been pushing the boundaries of real-time rendering while maintaining in camera visual effects (ICVFX) quality output
This led to our proprietary version of Unreal Engine, which we’ve enhanced specifically for film production. We’ve gone beyond the standard build, integrating specialized hardware acceleration and machine learning and AI libraries at a low level.
Another significant technical hurdle we’ve overcome is the implementation of
SMPTE ST 2110 standards, enabling seamless IP-based workflows for real-time production and playout.
Have you recently upgraded any of your studios or acquired new equipment to optimize your virtual productions? If so, could you provide us with some details?
Our most significant advancement has been the development of our custom-modified Unreal Engine environment. This isn't just an off-the-shelf solution; it's an optimization of the engine specifically tuned for high-end film production. Working on ambitious projects like Masters of the Air for Apple TV+ and Those About to Die with Roland Emmerich has driven us
to continuously evolve our technical infrastructure.
The integration of SMPTE 2110 standards has been crucial, allowing us to handle complex IP-based workflows with frame-accurate precision, which is something we’ve now rolled out on our first virtual production. This architecture has significantly increased our performance envelope, giving us more compute budget to push the visual quality of our environments.
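The frame-accurate precision mentioned here ultimately rests on ST 2110's RTP timestamps, counted on a PTP-disciplined 90 kHz media clock for video. A minimal sketch of that timestamp arithmetic (integer frame rates only; fractional rates such as 29.97 need rational arithmetic):

```python
def rtp_timestamp(frame_index, fps, clock_hz=90_000):
    """RTP timestamp of a video frame on ST 2110's 90 kHz media clock.

    Timestamps are 32-bit and wrap around; every device sharing the same
    PTP-derived clock computes identical values, which is what makes
    frame-accurate switching and alignment possible.
    """
    ticks_per_frame = clock_hz // fps  # exact for 25/50/etc.
    return (frame_index * ticks_per_frame) % 2**32
```

For example, at 50 fps each frame advances the timestamp by 1800 ticks.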
How do you train technical operators for this type of production? Do you handle this task internally?
Our approach to developing talent is comprehensive and three-pronged. We have strong ties with academic institutions across the UK and beyond, with our team members actively conducting presentations and hosting events at our volumes for students. This academic outreach is crucial for nurturing the next generation of virtual production talent.
Second, we operate a structured work placement program, partnering with training institutions to provide hands-on experience in real production environments.
Third, we’re constantly developing our onboarding process, where senior team members provide hands-on guidance across various disciplines, which makes sure we can transfer knowledge and get teams up to speed quickly with our workflows and best practices. AI is helping here too.
One of the features that differentiate you from others is the so-called ‘virtual humans’ that you’ve already mentioned. Could you explain to our readers what these are and what opportunities they bring?
Our work on digital crowds exemplifies the rapid evolution of our virtual humans technology.
The progression from our work on the Whitney Houston biopic I Wanna Dance with Somebody to Wicked, directed by Jon M. Chu, demonstrates the remarkable advancement in what we're delivering.
While both projects demanded high-end feature film quality digital crowds at scale, the underlying pipeline has been transformed. For Wicked we implemented significant workflow improvements which made processes much more efficient, showcasing how quickly this technology is evolving. We worked closely with
our partners at Arcturus on both projects. Their platform provided much of the scale required to deliver in the time available.
Regarding the demand for this type of production, what trends have you noticed in recent years? Do you think virtual productions are becoming more accessible to lower-budget projects?
There’s absolutely more demand from a wider range of projects
for virtual production support. A few years ago, it was predominantly reserved for the biggest projects with significant ‘blockbuster’ budgets, but now we’re working with more independent projects thanks to the way virtual production technologies have become more efficient, and therefore more cost effective.
We’re heading towards a world in which more people will want to set up and operate virtual production studios, something that Dimension is really looking forward to seeing, and being a part of.
Could you share any recent project that you are particularly proud of due to the technical challenges it presented?
Our work on Robert Zemeckis’ Here represents a particularly significant milestone in virtual production innovation and was a project on which we really pushed the boundaries of what’s possible with realtime technologies.
The project required us to create an evolving virtual world viewed through a single window across
different time periods. We implemented sophisticated vehicle simulation systems with 55 physics-enabled vehicles, developed complex weather systems including rain, snow, and seasonal changes, and pioneered new approaches to interactive lighting. The use of realtime depth cameras based on laser systems (nLight’s HD3D) provided unprecedented depth information integration, enabling seamless blending between physical and virtual elements. This project exemplifies how far virtual production has evolved
from simple LED volumes to sophisticated realtime production tools.
What can you tell us about artificial intelligence and how it’s transforming virtual production workflows?
AI is fundamentally integrated across our entire operation. We use large language models to optimize our production workflows and project tracking, while also maintaining comprehensive departmental knowledge bases. Our artistic teams
We’re heading towards a world in which more people will want to set up and operate virtual production studios
use text-to-3D and mesh-to-3D tools, and generative AI accelerates our pre-visualization processes through fast 2D imagery generation.
One of our most significant implementations is in procedural content generation, which has revolutionized our environment and layout building processes. This multifaceted AI integration demonstrates how machine learning can enhance both technical and creative aspects of virtual production.
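As an illustrative sketch of one simple form of procedural placement (not Dimension's actual pipeline), background characters can be scattered across a set with a minimum spacing via seeded rejection sampling:

```python
import math
import random

def scatter_crowd(n, width, depth, min_dist, seed=42, max_tries=10_000):
    """Rejection-sample n 2D positions on a width x depth ground plane,
    keeping every pair at least min_dist apart so characters don't overlap.
    Seeding makes the layout reproducible between renders."""
    rng = random.Random(seed)
    points = []
    tries = 0
    while len(points) < n and tries < max_tries:
        tries += 1
        candidate = (rng.uniform(0, width), rng.uniform(0, depth))
        if all(math.dist(candidate, p) >= min_dist for p in points):
            points.append(candidate)
    return points
```

A real environment-population tool would layer orientation, pose variation and density maps on top of this kind of primitive.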
What direction do you think this technology will take in the future?
How will it impact the user experience when watching a movie or live event? Do you foresee significant changes in this area in the coming years?
The future of virtual production will be an increasingly seamless integration between physical and virtual elements, as demonstrated in projects like Here, Time Bandits, and Masters of the Air.
Virtual production is reshaping post-production workflows. The traditional linear progression from production to post is evolving into a more integrated, parallel process
We’re seeing a convergence of realtime technologies, AI and traditional filmmaking. Systems will become more modular, and volumes will become more flexible and dynamic -be it for LED production, performance capture or volumetric capture, they all exist in a volume- and that's revolutionising how stories can be told. The viewer experience will become more immersive as the line between practical and virtual elements continues to blur.
We anticipate even more significant advancements in realtime rendering quality, enhanced by AI and machine learning, leading to even more photorealistic results. The integration of technologies like SMPTE 2110 and our custom-modified Unreal Engine implementations suggests that virtual production will become increasingly accessible while maintaining the highest quality standards.
In particular, how do you think virtual production is affecting the world of post-production?
Virtual production is fundamentally reshaping post-production workflows.
The traditional linear progression from production to post is evolving into a more integrated, parallel process. Our experience on Here demonstrates this shift
perfectly - we were able to capture final-pixel quality imagery in-camera while maintaining flexibility for creative decisions. The integration of realtime depth information and sophisticated environmental controls means that many traditional post-production tasks are now being handled during principal photography.
This doesn’t eliminate post-production but rather transforms it. We are working hard on the turnover process to visual effects: how do we aggregate all the data that we have handled and pool it in the most meaningful way for comp, if required - allowing teams to focus on enhancing and refining rather than building from scratch. The result is a more efficient, creative and collaborative process that maintains the highest quality standards while enhancing efficiency and reducing costs in post-production.
“One of the main challenges companies face when implementing AV-over-IP solutions is network reliability”
Within the section we regularly dedicate to manufacturers to learn more about their market vision and solutions, this time we focus on Kiloview, a company specializing in providing cutting-edge AV-over-IP solutions.
Kiloview stands out as one of the very first adopters of the NDI standard due to an alliance with NewTek, and has also partnered with Audinate to integrate the Dante protocol. In this interview, its CTO, Judy Zuo, outlines the company’s latest developments and future plans, while providing an analysis of the impact that AI, 5G, and the cloud are having on the sector, among other topics.
First of all, can you share with our readers what Kiloview specializes in?
Kiloview specializes in providing cutting-edge AV-over-IP solutions that enable reliable, high-quality, low-latency, and cost-effective video transmission over IP networks. We offer a wide range of products and solutions, including video encoders, decoders, wireless bonding solutions, and multi-protocol media gateways, as well as multiview, routing, recording, and centralized management and control, providing a complete workflow for live and remote transmission and production for broadcasters, live event producers, and content creators.
Tell us about the evolution of your company. What are your most significant recent milestones?
Kiloview was founded with the mission to provide AV-over-IP technology,
and since then, we have consistently been at the forefront of industry advancements. Significant milestones include:
> Pioneering NDI Integration: Our integration of NDI dates back to the very first days after NDI was announced, thanks to our partnership with NewTek. Kiloview has been one of the very first NDI adopters since 2018, and over the years we have been dedicated to developing a complete NDI-based solution that has made it easier for customers to adopt and scale their workflows. Kiloview is the only provider offering products that support all variants of NDI, with encoding and decoding on the same unit, as well as solutions for multiview, recording, and routing, both software- and hardware-based.
> Partnership with Audinate: We also entered a strategic partnership with Audinate, bringing Dante AV-H support to our products, which has opened up new avenues in broadcast and professional audio-video
environments. It’s a significant move for Kiloview, showing that we are open to all IP technologies.
> Recent Development of the RF02: We are working on integrating all existing Kiloview products into a compact 2RU rack-mount unit. This not only gives customers the opportunity to customize their workflow by choosing the necessary encoding, decoding, media gateway, and multiview cards, but also offers an internal network switch that significantly simplifies setup. A high-performance processor runs Kiloview’s patented KiloLink Server Pro, making it easy to manage all products, video routes, and destinations through an intuitive UI. This product marks our milestone of providing highly integrated products and solutions that will greatly simplify customers’ setups and make it easier for them to enter the IP world.
Focusing on AV-over-IP, what level of adoption do you perceive this technology currently has in the industry?
The adoption of AV-over-IP technology has grown significantly in recent years. While initially it was primarily used by large broadcasters and specialized event producers, it is now becoming more mainstream across various industries. We’re seeing AV-over-IP solutions increasingly adopted in corporate environments, education, sports events, and streaming platforms, a trend that was also common feedback after the recent ISE. The industry is moving away from traditional baseband video solutions to more flexible and cost-effective IP-based workflows, driven by the maturing of the ecosystem and, of course, the technology’s powerful capabilities. As a result, we expect continued growth in this area as businesses realize the advantages of AV-over-IP in terms of scalability and future-proofing their infrastructure.
How do you think the AV-over-IP industry will evolve in the near future?
In the near future, we expect AV-over-IP technology to continue evolving towards greater interoperability, higher reliability and efficiency, and a lower barrier to entry. As the industry develops, the capabilities of AV-over-IP systems will expand, enabling even higher-quality, lower-latency video delivery across global networks. There will also be a significant shift towards remote production, allowing teams to work from anywhere and leverage cloud-based solutions for streaming, mixing, and distribution, which will open AV-over-IP to even more applications. And with more new products arriving, getting into AV-over-IP has never been easier: products and solutions now cover virtually every customer need, making it easy to start with, almost plug-and-play.
What are the main challenges companies face when implementing these solutions, according to your market perspective?
One of the main challenges companies face when implementing AV-over-IP solutions is network reliability. Since AV-over-IP relies heavily on IP networks, ensuring that those networks are robust enough to handle high-bandwidth video streams without latency or packet loss can be a hurdle. Additionally, compatibility with legacy systems is often a challenge, as many companies still rely on traditional hardware-based video systems. Training and educating personnel on how to use and maintain these advanced systems is also crucial, as the learning
curve for AV-over-IP systems can be steep without the right support. Lastly, security concerns related to transmitting sensitive content over IP networks must be addressed, particularly in industries like broadcast and live events.
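The point about high-bandwidth streams is easy to quantify. As a rough, illustrative sketch (the per-stream rates below are assumptions in the range commonly quoted for full-bandwidth NDI and NDI|HX, not vendor figures), one can check whether a given set of streams fits a link while reserving headroom for bursts and other traffic:

```python
# Rough capacity check for an AV-over-IP network link.
# Per-stream rates are illustrative assumptions; check your vendor's figures.

STREAM_RATE_MBPS = {
    "ndi_hd": 125,    # assumed full-bandwidth NDI 1080p stream
    "ndi_uhd": 250,   # assumed full-bandwidth NDI 2160p stream
    "ndi_hx": 20,     # assumed NDI|HX compressed stream
}

def link_utilization(streams, link_capacity_mbps, headroom=0.30):
    """Return total demand in Mbps and whether it fits with the given headroom.

    `headroom` reserves a fraction of the link for bursts and other traffic.
    """
    demand = sum(STREAM_RATE_MBPS[kind] * count for kind, count in streams.items())
    usable = link_capacity_mbps * (1.0 - headroom)
    return demand, demand <= usable

# Example: four HD cameras plus one HX return feed over a 1 GbE link.
demand, fits = link_utilization({"ndi_hd": 4, "ndi_hx": 1}, link_capacity_mbps=1000)
```

With these assumed rates, the example demands 520 Mbps and fits a 1 GbE link with 30% headroom reserved; adding a full-bandwidth UHD stream would not.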
What advantages do you think the NDI standard brings in this context, as applied in your solutions?
The NDI standard provides immense advantages in terms of high image quality, low latency, interoperability, ease of use, and flexibility. By enabling IP-based video transmission with ultra-low latency, NDI makes it easier for users to stream, share, and manage video content without the need for complex hardware setups. It simplifies workflows by allowing multiple devices to communicate over the same network, reducing the need for dedicated wiring and signal routing equipment. The integration of NDI into Kiloview’s products enhances the ability to distribute high-quality, low-latency video streams across broadcast networks, event production environments, and live streaming platforms, enabling users to adapt quickly and efficiently. Another major advantage of NDI is its large ecosystem: from cameras to displays, network switches, and production systems, almost everything on the market is now natively NDI-enabled, which makes it easy to set up a workflow without any extra cost or effort.
And regarding the Dante technology, which you have also integrated into your solutions, what benefits does it offer?
Dante technology is a game-changer for audio-video professionals, providing high performance, low latency, and high image quality. The integration of Dante into Kiloview’s solutions allows seamless interoperability with one of the best and most popular audio technologies, enhancing the overall user experience. Kiloview is launching Dante-Ready products to the market, which take full advantage of Dante to simplify audio routing and offer greater centralized management in complex AV installations. This technology enables users to connect and manage audio devices with ease, even across large venues or multi-location setups; and for Kiloview users, combining our existing video technology with Dante-Ready greatly simplifies their setup, with fewer devices and fewer cables.
How do you think AI, 5G, or cloud technologies will impact the future of these markets?
AI, 5G, and cloud technologies are poised to revolutionize the AV-over-IP market by enabling real-time content analysis, remote production, and global video distribution.
AI-powered tools will allow for enhanced content management, realtime analytics, and even automated monitoring and quality control, making it easier for operators to manage large-scale video workflows. 5G networks will provide ultra-low latency and higher bandwidth, making it easier to stream high-quality video from virtually anywhere, even in mobile or remote environments, and Kiloview is already offering 5G-based products and solutions with our P3 bonding encoders, making professional live streaming possible anytime, anywhere. Meanwhile, cloud technology will enable lightweight and scalable infrastructure for video storage, editing, and distribution, allowing teams to collaborate seamlessly across different locations and devices, greatly saving travel costs while improving efficiency. Together, these technologies will help streamline operations and reduce costs while unlocking new possibilities for content creators and broadcasters.
Kiloview’s contribution to the industry
You were recently at ISE. What was your experience? What is your overall assessment of the exhibition?
ISE (Integrated Systems Europe) was a great success for Kiloview. This year was our first time showcasing the latest RF02 in working condition, giving visitors a clear idea of the solutions and ecosystem Kiloview offers. This ISE was also our most successful edition so far in terms of attendees, and we saw a significant increase in the adoption of AV over IP, especially NDI. It also marked a greater merging of the broadcast industry with the Pro AV industry. The level of interest in our NDI- and Dante-enabled products was outstanding, and we received great feedback on the P3 and P3 Mini encoders as well as our new decoders.
What do you believe are the key differentiating features of your AV-over-IP solutions?
Kiloview’s AV-over-IP solutions stand out for their completeness and reliability. Unlike other vendors in this industry that provide single products, Kiloview offers a complete ecosystem based on our own technology, which brings a better user experience and adds more value for our partners. Furthermore, beyond providing the complete workflow, and based on our technical background and solid understanding of customer needs, we focus on thoroughly solving the existing problems of network reliability, checking, control, and diagnostics, as well as improving latency, image quality, and more, making our solutions more reliable and easier to use through intuitive interfaces and simple setup processes, while maintaining high-quality video performance in demanding environments. Kiloview’s focus on innovation will surely keep bringing us differentiating features.
Could you share with us any recent standout success stories?
We’ve had a few key success stories recently that really show the differentiating features of our solutions, thanks to our complete workflows and their reliability. One of the highlights was our involvement in the 2024 Macau Grand Prix, where we provided a complete NDI solution supporting the whole venue for live video transmission and production, based on the powerful N60 NDI converters, the KiloLink Server Pro for centralized management, and the Multiview and NDI CORE for routing, among others, delivering a high-quality, low-latency, and easy-to-use solution for an iconic motorsport event.
Another success we are particularly proud of is the deployment of our 5G/4G bonding encoders, the P3 Mini, also in sports, for both car racing and motorcycle racing. Thanks to its compact design, it can easily fit into the limited black boxes or inside the car, while, based on our patented bonding technology, it enabled real-time, high-quality video transmission even in areas with unstable network conditions.
What are your plans as a company for the future?
Kiloview plans to continue innovating and expanding our product portfolio to meet the growing demands of the AV-over-IP market.
“Security concerns related to transmitting sensitive content over IP networks must be addressed, particularly in industries like broadcast and live events”
As we mentioned, Kiloview’s mission is to provide foundational AV-over-IP technology, offering versatile, reliable, and easy-to-use transmission and management products that help our partners achieve an optimized and simplified AV-over-IP workflow. As a company, we will keep moving in this direction, and Kiloview will keep providing more and more products that help our customers enter the IP world easily, with better prices, better user experiences, and more flexibility to bridge all possible technologies.
At Kiloview, we are more than just a technology provider; we are
passionate about helping our customers and partners achieve their goals through innovative, flexible, and reliable AV-over-IP solutions. We look forward to continuing to lead the industry in delivering the best possible products and support to enhance the live streaming, live production, broadcasting, and professional AV markets.
The handheld camera capable of framing.
In the midst of the AI expansion, this camera marks the moment to knock down another of those concepts that, until now, were part of our "great immutable truths".
Technological evolution never ceases to amaze us. Automatic functions have been an invaluable help for many years, although we have always said that a good professional outperforms any automatism. And that, at least for the time being, is still a totally valid statement.
Needless to say, as strong advocates of innovation and technology, we always welcome these advances with enthusiasm. And while this increasingly facilitates the participation of professionals at different levels, we would like to emphasize that a well-prepared professional will always enhance any tool and will not just be content with relying on it.
It is true that autofocus systems are faster and more accurate than most of us, but, in general, they do not know what we want to focus on.
This has improved since these systems are now capable of recognizing faces, eyes, and even specific people to keep the focus on them and not on someone else.
And although at the moment they still lack the ability to bring the focus to the point of narrative interest in certain situations, the ability to select the desired point or area of focus at any time largely compensates for this.
Something similar is provided by exposure automatisms, capable of very precise adjustments, but only as long as the lighting is not complicated and we manage to "tell" them in some way what the critical area is for our content to be really useful for our purpose. Which, again, can probably be more dependent on narrative needs than on the quantity, quality, or direction of the existing light.
It is already a frequent and common feature for PTZ-type cameras to be capable of tracking people, although the innovation we discussed in our recent BRC-AM7 lab test (also from Sony), with its ability to maintain a given frame by specifying the air around the subject's contours, even on subjects who move quickly and in wide spaces, is indeed impressive, thanks to its integrated movement system.
But, back to our headline... how is it possible that a handheld camera can properly frame, if it does not have the ability to move by itself? This is one of those immutable truths that we must now revise.
Let's see what all this is about, and although the answer is simple and it does have limitations -like all automatisms- there are situations in which this will help a lot. But we continue to insist on this great truth, which we would wish to remain unchanged: it will always be the hands of a good professional that will
make any tool really effective and efficient.
As usual, before delving into the subject, let's review the other features in order to provide proper context. It is a handheld camcorder, with integrated 20x zoom optics, a single 1.0-inch type R-CMOS sensor with native 5K resolution, and Ethernet + WiFi connectivity.
It seems to be designed to cater to needs mainly in Broadcast and ENG areas, and in view of other features, it will also yield excellent results in areas from web creation to aspirational filming.
The integrated zoom optics cover an optical range of 24-480 mm in equivalence to full-frame focal lengths. In combination with Clear Image Zoom, a built-in function that uses additional processing to avoid the characteristic artifacts of digital zoom, the initial 20x can be increased up to 30x without noticeable loss in image quality in UHD, and up to 40x when working in HD, thus reaching a truly extraordinary 24-960 range.
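Those figures follow from simple arithmetic: the extended telephoto reach is the optical telephoto end multiplied by the extra zoom factor beyond the 20x optical range. A quick sketch, using only the published 24-480 mm equivalence:

```python
# Sanity-checking the published zoom figures: Clear Image Zoom multiplies the
# 20x optical range (24-480 mm full-frame equivalent) by an extra crop factor.

OPTICAL_TELE_MM = 480
OPTICAL_ZOOM = 20

def max_tele_mm(total_zoom):
    """Equivalent telephoto focal length when digital extension raises total zoom."""
    return OPTICAL_TELE_MM * (total_zoom / OPTICAL_ZOOM)

uhd_tele = max_tele_mm(30)  # 30x total available in UHD
hd_tele = max_tele_mm(40)   # 40x total available in HD
```

480 mm at 30x total gives a 720 mm equivalent in UHD, and 960 mm at 40x in HD, matching the 24-960 range quoted above.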
Brightness starts from an excellent f/2.8 at wide angle, falling to f/4.5 at telephoto. The minimum focusing distance decreases to only 1 cm at wide angle, even capable of focusing inside the lens hood, and grows to a minimum of 1 meter at full telephoto.
That is, although the optics are not interchangeable, it has sufficient qualities to solve most situations, with the comfort of being perfectly integrated and forming a solid and compact whole with the body.
It comes with a built-in optical stabilizer with two operating modes. The two independent rings for focus and zoom are complemented by an iris control that is carried out by means of a knob placed in a comfortable and natural position for any operator, although already on the body and not on the optics barrel.
Its position, close to and a twin of the knob that controls the variable neutral density filter (already characteristic of the brand), makes it easier for us to always have the best possible exposure control.
By the way, and according to information provided in the technical documentation, it is possible that some upcoming firmware update will allow the iris control function to be assigned to one of the main control rings. We find that this possibility not only facilitates the customization of the equipment, but also increases its versatility by making it possible to tackle different needs with maximum ease for different operators or various scenarios.
The variable ND functionality, which can also be configured at fixed levels like the classic filters and not necessarily in progressive order as in the rest of models, can be set up to run on auto mode. In this way, the selected exposure will be maintained, compensating for the differences in lighting when moving from one environment to another without affecting the visual narrative.
This ability to control the exposure level without modifying diaphragm, shutter speed or sensitivity means that depth of field, movement or noise in the image will not be modified even if lighting levels vary by up to 5 diaphragm stops, which is its operating margin between the 1/4 and 1/128 settings, and always without causing any chromatic alteration.
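The arithmetic behind that margin can be sketched: each halving of transmission is one stop, so 1/4 corresponds to 2 stops of attenuation and 1/128 to 7, and the sequence 1/4, 1/8, ... 1/128 contains six discrete settings spanning five stops of adjustment:

```python
import math

# Each ND fraction halves the light per stop: 1/4 is 2 stops, 1/128 is 7 stops.

def nd_stops(fraction):
    """Stops of attenuation for an ND transmission fraction (e.g. 1/4 -> 2.0)."""
    return math.log2(1.0 / fraction)

min_stops = nd_stops(1 / 4)      # lightest setting: 2 stops
max_stops = nd_stops(1 / 128)    # densest setting: 7 stops
adjustment_range = max_stops - min_stops   # stops of exposure compensation
settings = int(adjustment_range) + 1       # discrete halving positions available
```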
For these reasons, variable ND is one of the features that has seduced us since it appeared years ago, and we are very happy to see that it is a natural part of all the new equipment that Sony launches on the market.
The Exmor RS CMOS sensor, 1.0 inch-type, has an effective 14 Mpixels that allow for 5K resolution. Color response can be configured in different spaces including S-Cinetone, S-Log3, HLG, and various variants of 709.
In addition to resolution, there are dedicated BIONZ XR processors and a specific AI module. This set, in addition to contributing significantly to the reframing procedures that we will see in detail below, facilitates greater focusing precision by combining contrast detection and phase
detection techniques, thus optimizing both speed and accuracy.
This processing speed of the sensor assembly is what allows recordings with high frame rates, which reach 120p in UHD format (3840 x 2160) or 240p in HD (1920x1080).
This range of color spaces facilitates integration with different cameras, environments, and workflows. That is, it will always be easy for us to match the chromatic response with both Broadcast and Cinema Line range cameras.
Operating margins in the classic image capture parameters are very broad: shutter speeds between 1/8000 and 64F; white balance between 2,000 and 15,000 K; gain from -3 to 36 dB in 1 dB steps; ISO between 250 and 16,000, with the possibility of configuring lower and upper limits when automated.
The autofocus has up to 475 detection and recognition points for people, with subject detection. That is, you can identify a person to track focus on a particular individual. And we'll see that it's not just the focus...
For visualization we have a viewfinder and a screen, as is usual in this type of equipment. The 1 cm XGA-OLED viewfinder features 2.36 Mpixels and proximity detection, turning off the screen when the eyepiece is in use. Logically, this mode is configurable, so as to prevent the screen from being accidentally turned off when it is the hand or the body that covers the eyepiece.
The 3.5-inch 2.75 Mpixel LCD display offers excellent
resolution and image quality, easily visible in broad daylight, and features touch functionality for ease of operation. We have a folding and removable sunshade as standard and three buttons on the side always accessible with zebra, peaking functions, and a customizable button.
The amount of information that can be overlaid on the image during recording can become overwhelming. Fortunately, all of this information can be turned on and off with a simple tap on the assigned button. This information includes not only data in the form of text or figures, but also graphs, such as histogram, vectorscope, or waveform.
By configuring this through the menu, LUTs can be used, both on-screen and in one of the outputs, so that our display is as close as possible to the final result that we will obtain once the color-grading of the filming is completed.
The LCD support has been newly designed, and although it feels robust
and inspires confidence, due to the position in which the screen remains once folded, it gives the impression of being somewhat vulnerable to not very delicate handling. Only the qualities of the new materials and the experience of use over time will judge the benefits of the new solution.
Regarding operation, another feature that we really liked are the various methods to control all its functions and capabilities: on the one hand, the buttons and selectors with dedicated functions, in the style of classic ENG cameras, in the usual places.
In addition, most of the 12 customizable buttons distributed throughout the body have a differentiated feel to make them easier to locate by touch, and through the menu we can choose from a range of up to 63 functions for each of them. In this way, we will always have maximum efficiency when dealing with any type of project.
On the other hand, we have access to the menu in two modes: summarized mode, so as to access the main configuration functions (with a short press), and extended mode (with a long press) to control every single detail of each parameter. These two modes are available on the two menu buttons, one of them being very accessible on the handle itself for use during filming if necessary.
And finally, the combination of an on-screen quick menu with the joystick to switch between parameters and adjust each one directly. These parameters are the ones we usually find as information displayed on the edges of the screen during filming and, in fact, this mini joystick is duplicated on both handles (side handle and top handle) next to the two recording buttons.
Regarding audio, exceeding what has been customary recently, we have up to 6 physical inputs for 4 audio channels, always recording audio in LPCM 48 kHz format at 24 bits.
We have two inputs through the XLR connectors, with the possibility of power supply, and classic direct controls for the input and level selector. Two others through the MI-shoe, capable of carrying both analog and digital signals and whose controls are accessed through the main menu. And in addition to the stereo input provided by the built-in microphone, we have another 3.5 mm TRS mini-jack input for an unbalanced external microphone.
Regarding video recording formats, it is important to note that it will invariably be done on AVC, intraframe or long-GOP codecs, and always in progressive. Internally, we no longer have MPEG2 codecs or interlaced mode. Pay attention to this, if for any reason there is a need to generate interlaced content.
In summary, within the wide range of options available in terms of resolution and frame rate, we can select from a maximum of 500 Mbps in intra UHD at 50p, up to a minimum of
50 Mbps in long-GOP HD, either 4:2:0 or 4:2:2, at 25p or 50p. It seems that the reason for maintaining the 50 Mbps minimum is to always meet the standard supported for broadcast. As for proxies, these are recorded at 16 Mbps.
The recording of MXF or MP4 containers is carried out on type-A CFexpress cards or SD UHS-I/II cards, in either of the two slots with configuration flexibility: different formats in parallel, the same one in relay, and even an independent control for each of them from each of the two recording buttons.
Obviously, all the formats that the camera is capable of generating can always be limited by the actual speed of the card being used. With the option to choose between SD or CFexpress, let's keep in mind that not all of them can support the sustained data rate in recording required for the most demanding formats.
Being all very aware that capacity affects recording time, we must make sure about the sustained data rate in recording, since many manufacturers only advertise the read rate, which is always higher. Be careful, please, with this.
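To put those figures in practical terms, here is a small illustrative conversion (decimal gigabytes assumed, audio and container overhead ignored; the 160 GB capacity is a hypothetical example) between bitrate, the sustained write speed a card must hold, and the recording time a capacity yields:

```python
# Converting recording bitrates into sustained card write speed and
# approximate recording time. Decimal GB assumed; overhead ignored.

def required_write_mb_s(bitrate_mbps):
    """Sustained card write speed in MB/s needed for a given video bitrate."""
    return bitrate_mbps / 8.0

def recording_minutes(card_gb, bitrate_mbps):
    """Approximate recording time on a card of the given decimal-GB capacity."""
    card_megabits = card_gb * 1000 * 8
    return card_megabits / bitrate_mbps / 60.0

speed = required_write_mb_s(500)       # intra UHD 50p: 62.5 MB/s sustained
minutes = recording_minutes(160, 500)  # hypothetical 160 GB card at 500 Mbps
```

At the 500 Mbps intra maximum, a card must sustain 62.5 MB/s of writes, and a 160 GB card holds roughly 43 minutes; this is why the sustained write figure, not the advertised read figure, is the one to check.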
Moving on to connectivity, in addition to the audio inputs already described, we have
two BNC connectors: one configurable as input or output for the time code, and the other for the SDI output. Another HDMI connector provides a second video output, as both SDI and HDMI outputs can be configured with different resolutions or overlays.
We have a 3.5 mm jack for stereo headphones, and a small mono speaker for monitoring recorded content. What is not so common is having a menu "reading" system that allows us to listen to the options while operating the device. However, this feature is not available, at least for now, in all languages in which the camera can be configured.
As it is also common in this type of camera, we have network functions, both Ethernet 10/100/1000BaseT and Wi-Fi on 2.4 and 5 GHz bands, thus enabling direct streaming or uploading of files to any FTP platform, as well as dedicated cloud services such as Sony's proprietary Creator's Cloud C3 Portal, with QoS quality controls and real-time data flow.
Streaming is done using RTMP, RTMPS and SRT protocols with configurable H.264 and H.265 codecs in 720p, HD or up to UHD resolutions with 2 16-bit audio channels at 48 kHz.
Here we must make a quick mention of a new optional PDT-FP1 (Portable Data Transfer) device. Similar to the well-known external recorders, this one offers us a few more functions: a larger screen with 4K display and remote control functions on the camera, possibility of internal recording and, most interesting of all: additional connectivity featuring a stable data transfer at high speed with a multitude of bands, such as LTE, 5G Sub6 and mmWave. It connects via USB-C and releases the camera's network functions for any other purpose.
These network capabilities allow, in addition to the 2.5 mm LANC connector, remote control of the camera from phones or tablets via Wi-Fi or Bluetooth, with the possibility of becoming its own hotspot in places where there is no network available.
A USB-C port provides high-speed connectivity -up to 5 Gbps- with USB 3.2 protocol suitable for various uses, from content dumping to remote control.
The batteries are BP-U type, with the already known capacities that provide either lightness in the smallest BP-U35 model or long operating time in the highest-capacity BP-U100. The feeling is that performance has improved overall compared to previous models, with up to 90 minutes of continuous recording with the smallest and lightest battery, according to the catalogue.
Keep in mind that consumption will always be determined by actual use, and that this can vary significantly if you use, for example, network services or not, the display or the viewfinder, prolonged viewing of content, etc., plus all their possible combinations. This is why we suggest that each operator carry out some tests in their environment and under actual conditions so as to obtain really accurate data for their working conditions, and thus have all the peace of mind that they will need when the time comes.
We end with the description of the set, confirming that all these qualities are in a comfortable and very manageable body, in which good care has been taken in details such as the fact that ventilation is designed so that the air flows from the left and down, to the right and up, moving the hot air away from the person operating the camera and thus providing greater comfort of use.
It has been manufactured by combining light metal alloys with new synthetic materials, in search for robustness, lightness and reliability together with the best balance with environmental commitment. In this way, the whole set weighs around 2.4 kg (little over 5 pounds) in operating conditions, battery included.
We have needed this entire lengthy block just to list and briefly comment on the large array of features we have at our disposal, as well as some of the benefits. But let's get now into that new, fascinating autoframe feature.
As we can imagine in a camera without PTZ functionalities, the autoframing method is conceptually very simple. And it has its limitations too, as is the case with all automatisms, although this does not preclude a surprising result.
Right: the reframing is done by cropping and scaling the image, just as we would do in editing, but in real time, during the recording itself! And while it is true that this can lead to compromised image quality if we are too permissive or aggressive with the setting, it can be tremendously useful in a multitude of situations.
Considering that the sensor is 5K, in an HD shooting scenario like the one still used in many news pieces, events or broadcasts, we can apply up to a 2x magnification without any loss of quality. That means that an operator in the middle of a crowd of operators covering a news piece, even without being able to aim carefully, will succeed in achieving a recording, or a more than reasonable live shot, in real time.
Basically, what we do is establish the cropping margin that we will allow so that, once the subject is identified, in the same way we identify people to maintain focus, our protagonist remains in a prominent place in the image and not lost in a corner.
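The quality ceiling of this crop-and-scale approach follows from simple pixel arithmetic. Assuming a nominal 5120-pixel horizontal count for the "5K" sensor (an assumption; the exact figure may differ slightly), the lossless magnification limit is just the ratio of sensor width to delivery width:

```python
# The autoframe crop stays lossless as long as each output pixel still maps to
# at least one sensor pixel, so the limit is sensor width / output width.

SENSOR_WIDTH_PX = 5120   # assumption: nominal horizontal count for a 5K sensor
HD_WIDTH_PX = 1920
UHD_WIDTH_PX = 3840

def max_lossless_crop(output_width_px, sensor_width_px=SENSOR_WIDTH_PX):
    """Largest magnification that avoids upscaling the cropped region."""
    return sensor_width_px / output_width_px

hd_headroom = max_lossless_crop(HD_WIDTH_PX)    # ~2.67x available in HD
uhd_headroom = max_lossless_crop(UHD_WIDTH_PX)  # ~1.33x available in UHD
```

Under that assumption, HD delivery leaves roughly 2.67x of headroom, which is why a 2x magnification costs nothing in resolution, while UHD delivery leaves only about 1.33x.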
By combining the speed of the image processors with AI functions, and the versatility of subject identification with the effectiveness of autofocus, we achieve a result in which our subject of interest will be in focus at all times, and will stay as well framed as possible. We have two parameters to control system response: on the one hand, the range
of reframing or enlargement that we want; and, on the other, the sensitivity or speed of response.
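For readers curious about what such a system does under the hood, the mechanics described above can be sketched in a few lines. This is purely illustrative and assumes a subject position supplied by an external detector; the parameter names (`max_zoom`, `sensitivity`) and the 5K frame dimensions are our own stand-ins, not the camera's actual menu settings or internal logic.

```python
def autoframe_crop(subject_x, subject_y, prev_center,
                   frame_w=5120, frame_h=2880,
                   max_zoom=2.0, sensitivity=0.2):
    """Return an (x, y, w, h) crop rectangle that keeps the subject framed.

    subject_x, subject_y: detected subject centre, in sensor pixels.
    prev_center: (x, y) centre of the previous crop, used for smoothing.
    max_zoom: the reframing range -- how far the crop may tighten.
    sensitivity: 0..1 response speed -- higher reacts faster but can
                 look jittery; lower is smoother but lags behind.
    """
    # Crop size implied by the chosen magnification. A 2x crop of a
    # 5K frame still leaves more pixels than an HD output needs.
    crop_w = frame_w / max_zoom
    crop_h = frame_h / max_zoom

    # Move the crop centre only part of the way toward the subject
    # (exponential smoothing), so small movements of the protagonist
    # do not produce unnatural, twitchy tracking.
    cx = prev_center[0] + sensitivity * (subject_x - prev_center[0])
    cy = prev_center[1] + sensitivity * (subject_y - prev_center[1])

    # The reframed window can never exceed the limits of the
    # original shot, so clamp the centre accordingly.
    cx = min(max(cx, crop_w / 2), frame_w - crop_w / 2)
    cy = min(max(cy, crop_h / 2), frame_h - crop_h / 2)

    return (cx - crop_w / 2, cy - crop_h / 2, crop_w, crop_h)
```

Note how the two user-facing controls map directly onto the code: the reframing range fixes the crop size, while the sensitivity weights each step of the smoothing, which is exactly the trade-off discussed next.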
This is where the skill of the professional makes all the difference. Imagine a wide shot in an auditorium, where the subject will move within a limited space: the reframing will follow the speaker, offering a closer shot of the person, but the first decision is to establish the limits of the wide shot, since the reframing, obviously, can never exceed them.
Then, if we force the enlargement too much, we could compromise image quality; if we magnify too little, the effect of the tracking will be insignificant. If we exaggerate the sensitivity, the protagonist's small movements could lead to unnatural tracking; and if we reduce it excessively, we risk letting the subject get too close to the frame limits before reframing kicks in.
For all these reasons, although the system generally works quite well, unwise tweaking can lead to an undesired result, far from our expectations and from the true capabilities of the system.
Another very practical use case is a single-operator interview, in which we can devote all our attention to the content without also having to watch the framing of our protagonist while conducting the interview.
On the other hand, with a correct setup, and if we also add optical stabilization, a shot taken from within a crowd that keeps us moving while we follow the news story can end up looking like a perfectly planned and executed shot.
As with the subject identification system, if our protagonist leaves the frame we will lose the tracking, but we will recover it as soon as the person re-enters the scene. In this regard, assigning autoframe activation/deactivation to one of the customizable buttons allows direct and instant switching of the function.
Finally, it is worth mentioning that alongside this camera a similar model (the NX800) has been presented at an even lower price. It shares the same body and most of its features, cutting only some specific broadcast functions, such as the BNC connectors for SDI output and timecode, and it records only in MP4 formats.
In short, this is a new step forward in content creation that delivers the most sophisticated features currently available at a highly competitive price. Summing up, and insisting on our initial idea: delighted as we are with all these advances, we continue to argue that a good professional will provide unquestionable value by making the most of the best tools instead of being replaced by them.