IP Showcase Theater Schedule
Sunday April 6 to Wednesday April 9, 2025
Francois-Pierre Clouet, IntoPIX
JPEG XS is emerging as a transformative technology in the media production and distribution industry, particularly for IP workflows like ST 2110. It significantly reduces bitrate without compromising quality, enabling the handling of high-resolution video with low latency and minimal power consumption. This visually lossless compression allows existing network infrastructure to remain in use, making the transition to IP workflows more efficient.
By utilizing JPEG XS, companies can avoid power-hungry codecs like HEVC, ensuring that video signals are transmitted effectively over various networks, including the internet. JPEG XS is designed for sustainability, enabling greener workflows by minimizing energy use and prolonging the life of existing equipment. Its lightweight nature helps reduce overall operational impact, showing that technological advancement can align with ecological responsibility. Real-world applications demonstrate measurable environmental benefits and operational improvements, making JPEG XS a crucial player in pursuing a sustainable future in media production.
The transition to the cloud is further supported by key manufacturers and service providers like Nvidia (GPUs, AI), Dell (servers, networking), and AWS (cloud platform), who offer specialized solutions, including hardware-accelerated JPEG XS encoding and decoding services. This minimizes any computational or graphics overhead that could hinder workflows. JPEG XS supports high-fidelity tasks like color grading and VFX compositing while optimizing resource utilization and energy efficiency, thus facilitating the migration of workloads to the cloud.
In summary, JPEG XS not only improves video production and broadcasting efficiency but also significantly reduces carbon emissions and energy consumption. Its compatibility with current infrastructure makes it an accessible choice for businesses aiming to enhance sustainability without sacrificing performance.
Steve Holmes, Leader
If you've worked with NMOS, you know that AMWA's Networked Media Open Specifications (NMOS) make it possible to control IP networks in much the same way you're used to in SDI, for example with router control panels and by keeping track of the video sources and receivers in the network.
That said, you've also likely faced some of the challenges that can come up with SDP files within NMOS, such as getting all the receivers and transmitters to register themselves in the registry server and making that data available to the external control system so you can use your existing panels and equipment.
Likewise, what do you do when you push Source, Destination, Take and nothing happens? How do you make sure that the SDP file is valid and correctly formatted, not malformed?
Join this session to find out:
Why do you need NMOS, and how can you apply it within your ST 2110 workflows?
How can you spot and prevent common NMOS/SDP problems?
What are the basics you need to know about NMOS?
How can you use analysis tools to help monitor NMOS?
What do you need to know about NMOS troubleshooting and error reporting?
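As a flavor of the checks the session covers, a basic SDP sanity check can be sketched in a few lines of Python. This is a hypothetical helper, not the session's tooling: it tests only the `<type>=<value>` line format and the field types an ST 2110 receiver will expect, not full SDP semantics per RFC 8866.

```python
REQUIRED = ("v", "o", "s", "t", "m")  # field types an ST 2110 receiver will expect

def sdp_problems(text: str) -> list:
    """Return a list of basic formatting problems found in an SDP body."""
    problems, seen = [], []
    for n, line in enumerate(text.strip().splitlines(), 1):
        # Every SDP line must look like <type>=<value> with a lowercase type.
        if len(line) < 2 or line[1] != "=" or not line[0].islower():
            problems.append(f"line {n}: not of the form <type>=<value>: {line!r}")
        else:
            seen.append(line[0])
    if seen and seen[0] != "v":
        problems.append("session description must start with a v= line")
    for field in REQUIRED:
        if field not in seen:
            problems.append(f"missing required {field}= line")
    return problems

good = "v=0\no=- 1 1 IN IP4 192.0.2.1\ns=cam1\nt=0 0\nm=video 5004 RTP/AVP 96"
assert sdp_problems(good) == []
```

A malformed file, such as one missing its m= line, would come back with a human-readable problem list instead of an empty one.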
Spencer Deame, Nextera Video
SMPTE ST 2110 and IPMX have emerged as key technologies enabling flexible and scalable AV-over-IP infrastructures to be deployed for broadcast and ProAV. NMOS is the control layer that makes them plug and play. The latest advancements in NMOS take it to a new level, surpassing the level of control provided in SDI and HDMI while also adding a layer of security that has been sorely needed in control systems for quite some time.
It also addresses the needs of the system integrator and studio builder, enabling automatic discovery of important services required to configure endpoints automatically on startup.
More recently, features have been added to ensure stream compatibility, address compressed video of multiple types, support EDID management for IPMX, and, most radically, extend NMOS to device configuration and control.
This paper will dive into the details of what NMOS is and how it works, then look at where it is going and how it adds significant value for the system designer and integrator.
-
Sustainable Live Production with JPEG XS as a Game-Changer in Carbon Footprint Reduction | Francois-Pierre Clouet, IntoPIX | 2:00 pm - 2:30 pm
-
What You Need to Know About NMOS In An IP World | Steve Holmes, Leader | 2:30 pm - 3:00 pm
-
Evolution of NMOS for ST 2110 and IPMX | Spencer Deame, Nextera Video | 3:00 pm - 3:30 pm
-
No Presentations | 10:00 am - 2:00 pm
Sam Recine, Matrox
As the AV industry continues its shift to IP-based systems, IPMX has emerged as a leading open framework for AV-over-IP interoperability. In this session, we’ll explore the growing market demand for IPMX and why the industry is rallying around a standardized, open approach.
We’ll focus on three key asset classes undergoing this transformation: live media and production equipment, Pro AV signal routing systems, and media on PC/IT platforms. The goal is to clarify why IPMX is being pulled by market demand rather than pushed by technology providers.
In addition, we’ll provide a high-level overview of the IPMX ecosystem, including the status of IPMX-ready equipment, trademark usage, and initial training efforts for 2025. This update will outline the progress made by AIMS, VSF, and AMWA in defining profiles, labelling requirements, and trademark guidelines.
This session sets the stage for a deeper exploration of IPMX’s technology and roadmap, focusing on the business case for adopting an open standard. Attendees will leave with a clear understanding of the market’s role in driving IPMX forward and what to expect in the near future.
Jed Deame, Nextera Video
This presentation will discuss the business and technical details pertaining to the hot new AV over IP system called the Internet Protocol Media Experience (IPMX). The core of the presentation will be a review of the individual components detailed in the TR-10 specification dash numbers and a plain-English description of the functionality they bring, including things like asynchronous senders, copy protection, and FEC. We will also discuss the benefits of open standards and look at the technologies behind IPMX. Finally, we will review how the NMOS control system maps to IPMX devices to provide the same plug-and-play control as is provided in ST 2110, but with some significant ease-of-use extensions.
Jed Deame, Nextera Video
This presentation will discuss the technical details behind the hot new AV over IP system called the Internet Protocol Media Experience (IPMX). The core of the presentation will be a review of the individual components detailed in the TR-10 specification dash numbers and a "plain English" description of the functionality they bring, including things like asynchronous senders, copy protection, and FEC. We will also discuss the benefits of open standards and look at the technologies behind IPMX. Finally, we will review how the NMOS control system maps to IPMX devices to provide the same plug and play control as is provided in ST 2110, but with some significant ease of use extensions such as EDID management.
Andrew Starks, Macnica
The AV industry is rapidly adopting IP-based technologies, with IPMX emerging as the leading open standard for AV-over-IP interoperability. Developed by the Alliance for IP Media Solutions (AIMS), IPMX enables seamless integration of professional audio, video, and control systems over standard IP networks.
In this session, you’ll receive an early look at the official IPMX roadmap as AIMS prepares for its formal launch in late 2025. We’ll clearly outline the baseline requirements, including supported video and audio profiles, codec capabilities, security features, and interoperability criteria.
Attendees will leave this session with a clear understanding of upcoming milestones, how IPMX interoperability will be validated, and how manufacturers, integrators, and end users can confidently leverage IPMX to build flexible, scalable AV-over-IP solutions.
Andreas Hildebrand, Lawo
Originally, the definition of IPMX started from the proven grounds of AES67 and SMPTE ST 2110. While AES67 and ST 2110 were built around the interoperability requirements of the broadcast realm, it became immediately apparent that applying them to the world of ProAV would require enhanced functionality, while certain requirements indispensable to the broadcast world, namely the very strict timing and synchronization requirements only made possible by the use of PTP, would be too complex or rigid. During development of the IPMX specifications within the VSF group it became clear that certain constraints on, and enhancements to, AES67 and ST 2110 had to be made to enable wider adoption in the ProAV domain.
IPMX is now taking shape with several companies having announced product availability. While the larger focus of IPMX interoperability is on the video side of things, audio is certainly a very important aspect. And with over 5000 AES67- or ST 2110-compatible audio devices in the market, it would certainly be a massive adoption accelerator if these products could immediately work in an IPMX environment.
Andreas will discuss how the additional constraints and enhancements of the IPMX specifications affect interoperability with AES67 and ST 2110 audio devices, and how these devices can be used in IPMX environments.
-
IPMX NAB 2025 Update | Sam Recine, Matrox | 10:00 am - 10:30 am
-
What is IPMX? Plain Language Summary of the IPMX Technical Recommendations | Jed Deame, Nextera Video | 10:30 am - 11:00 am
-
IPMX Deep Dive | Jed Deame, Nextera Video | 10:30 am - 11:00 am
-
IPMX Roadmap | Andrew Starks, Macnica | 11:00 am - 11:30 am
-
"AES67, ST 2110 & IPMX" - Differences, Commonalities & Interoperability | Andreas Hildebrand, Lawo | 11:30 am - 12:00 pm
Andy Rayner, Appear
This is a case study that unpacks some of the live media production technology underpinning a couple of significant sports events that took place at Christmas 2024.
It will describe the topology and the details of how the event was connected and brought to viewers.
Paul Evans, NetInsight
This paper delves into the critical role of IP-focused standards, with a particular emphasis on SMPTE RP 2129, in addressing significant real-world challenges such as security across Wide Area Networks (WANs) in use cases such as remote production and facility interconnect. As media production and distribution increasingly rely on IP-based networks, ensuring robust security measures becomes paramount. SMPTE RP 2129 introduces the concept of Trust Boundaries, which function as media-specific firewalls designed to secure IP-based network interconnections. These Trust Boundaries are essential in maintaining the integrity, confidentiality, and availability of media assets during transport.
The paper will explore the practical applications of SMPTE RP 2129, detailing how these standards can be implemented to mitigate security risks effectively. By establishing Trust Boundaries, facilities can protect against unauthorized access, data breaches, and other cyber threats that could compromise media content. Additionally, the paper will discuss the importance of these standards in enhancing operational efficiency, ensuring seamless integration across diverse media networks, and supporting the scalability of media operations.
Furthermore, the paper will include case studies and best practices to illustrate the effectiveness of SMPTE RP 2129 in real-world scenarios. These examples will highlight how facilities have successfully implemented these standards to safeguard their operations and maintain high-quality service delivery. The discussion will also cover the challenges and considerations involved in adopting these standards, providing a comprehensive overview of their impact on the media industry.
By examining the intersection of IP-focused standards and facility security, this paper aims to provide valuable insights into the strategies and technologies that can help media organizations navigate the complexities of modern media production and distribution. The findings will underscore the importance of adopting SMPTE RP 2129 and similar standards to ensure the resilience and security of media operations in an increasingly interconnected world.
Sergio Ammirata Ph.D., SipRadius
With video content distribution over the internet now the norm, security has become a vital consideration: no broadcaster wants to risk unauthorised content insertions; no producer can risk valuable intellectual property being intercepted. Yet despite this, security standards are not uniformly high -- some popular encoders on the market store passwords in the clear! -- and while video streams are becoming secured, the communications and control around them are still too often neglected.
In this paper, Sergio Ammirata, founder and chief scientist of SipRadius, will look at real-world best security practices that bring communications and content into a common, hardened encryption model. Further, he will discuss the concept of self-hosting video distribution services, using open standards and proven solutions from SipRadius and others, together with today's available bandwidth capacity and stability, as the route to maximum security.
-
Case Study: Remote Production of Christmas American Football | Andy Rayner, Appear | 2:00 pm - 2:30 pm
-
Enhancing Facility Security with SMPTE RP 2129: Addressing Real-World Challenges in IP-Focused Standards | Paul Evans, NetInsight | 2:30 pm - 3:00 pm
-
Taking Control of Your Security | Sergio Ammirata Ph.D., SipRadius | 3:00 pm - 3:30 pm
-
No Presentations | 10:00 am - 10:30 am
Mathieu Rochon, CBC, with Simon Patenaude (co-presenter and author), Michel Proulx (author), Felix Poulin (author), and Francois Legrand (author)
The Canadian Broadcasting Corporation (CBC/Radio-Canada) is embarking on a transformative project: the redevelopment of its Toronto headquarters. This initiative is necessitated by the rapid and profound shifts within the global media landscape, driven by technological advancements, evolving audience consumption habits, and the imperative for operational efficiency. The core challenge lies in constructing production facilities that not only meet current demands but also remain agile and relevant for decades to come.
This project transcends mere architectural reconstruction; it represents a strategic reimagining of CBC/Radio-Canada's production capabilities. The vision is to create a dynamic, adaptable, and technologically advanced hub that fosters innovation and collaboration. The redevelopment will incorporate cutting-edge technologies, including agile, software-based production workflows, flexible studio spaces, and integrated media management systems, to ensure seamless content creation and distribution across all platforms.
Central to this endeavor is a comprehensive exploration of future media trends and their implications for infrastructure design. This involves rigorous analysis of emerging technologies, such as software-based production tools, cloud-based production, and artificial intelligence, and their potential to revolutionize content creation and delivery. CBC/Radio-Canada is also taking a leadership role in the EBU Dynamic Media Facility RA and the Media Exchange Layer initiative. The aim is to build a facility that can seamlessly integrate these technologies in order to implement our technology vision.
This presentation outlines the initial findings of our exploratory phase, detailing the strategic vision, technological considerations, and design principles guiding the redevelopment. It provides insights into the research conducted, the stakeholder consultations undertaken, and the emerging concepts that will shape the future of CBC/Radio-Canada's Toronto headquarters. The goal is to create a state-of-the-art facility that empowers CBC/Radio-Canada to continue its mission of delivering high-quality, relevant, and engaging content to Canadians for generations to come, while navigating the complexities of the evolving media landscape.
Vincent Trussart, Grass Valley
While many broadcast operations are software-based, live production and studio infrastructure still largely depend on specialist physical or stream interconnects between hardware devices (e.g. SDI or IP equivalents such as ST 2110). This was previously justified by bandwidth and latency concerns, but today's compute and network infrastructure easily meets broadcasters' requirements. Therefore, we are now seeing all-IT, software-based live production frameworks from large broadcast vendors.
However, these first frameworks are proprietary, bringing interoperability challenges and vendor lock-in. The EBU's Dynamic Media Facility initiative anticipates this by describing an architecture using generic compute clusters to replace many of the hardware devices in a facility. The architecture takes a layered approach inspired by that used by cloud hyperscalers and can take advantage of technologies and interfaces for each layer. In this architecture, "Media Functions" run in software containers to maximize workflow flexibility and resource utilization on multi-core/GPU servers.
The architecture also includes a high-performance Media Exchange Layer (MXL) for connecting Media Functions. This should not just replicate SDI, NDI or ST 2110, but enable things that weren't possible before, such as "faster-than-live" working, seamless working between different facilities, and use of timestamps for asynchronous workflows.
The EBU has been working with industry to define and demonstrate a practical MXL, built on high-speed local shared memory as well as remote direct memory access (RDMA) and OS-bypass networking. Unlike "traditional" standards, the group is adopting an "implement-first" approach, with the team working in an agile way to create an open-source SDK that will start simple and add further required capabilities later.
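The local shared-memory leg of such an exchange can be sketched with Python's standard library. This is a toy illustration only, not MXL SDK code: the segment name and frame size are invented, and the real SDK targets compiled media functions.

```python
from multiprocessing import shared_memory

FRAME_BYTES = 16  # toy "frame" size; real video frames are megabytes

# Producer media function: write a frame into a named shared segment.
seg = shared_memory.SharedMemory(create=True, size=FRAME_BYTES, name="mxl_demo")
seg.buf[:FRAME_BYTES] = b"\x80" * FRAME_BYTES

# Consumer media function: attach to the same segment by name and read the
# frame without pushing any bytes through a network stack.
view = shared_memory.SharedMemory(name="mxl_demo")
frame = bytes(view.buf[:FRAME_BYTES])

view.close()
seg.close()
seg.unlink()  # producer releases the segment once all consumers are done
```

The point of the pattern is that producer and consumer exchange only a name and a layout, not the media itself, which is the property that lets an MXL-style layer outrun stream-oriented interconnects.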
This presentation will explain how Moore's Law and similar trends justify the architecture, benefits and requirements of the DMF initiative, report on (and demonstrate) progress of MXL in the industry, and outline next steps.
John Mailhot, Imagine Communications
The cloud has proven to be technically capable of running channel origination jobs, but the important question is financial: when do the economics of the cloud pencil out, and when does it make sense to run on-prem? This talk explores the interplay of fixed and dynamic channel orchestration, and how to optimize the combination.
David Arbuckle, MediaKind
Today, competitive video platforms must handle skyrocketing traffic and evolving user demands with ease. Yet many solutions remain constrained by single-cloud architectures, resulting in vendor lock-in and limited scalability. Media organizations and service providers often have requirements influencing their cloud vendor choice. Flexibility in this choice improves pricing and service availability by allowing the option to choose any cloud provider and build redundancy across multiple providers.
This presentation will discuss how a "Build Fast, Scale Faster" approach transcends these barriers, delivering a fully portable, scalable video platform that seamlessly operates on AWS, Azure, and GCP. Attendees will discover how they can give their streaming services the competitive edge -- no matter which cloud (or clouds) they call home.
An infrastructure-as-code philosophy needs to be at the center of any such approach, combined with containerization and microservices. This presentation will cover how this trio enables platforms to spin up standardized environments, orchestrate multiple workloads, and manage the lifecycle of key processes - including transcoding and content packaging - on any cloud. By abstracting services into modular components, consistent performance, rapid scaling, and the flexibility to dynamically allocate compute resources based on real-time demand can be achieved. The expertise to create, for example, a certain configuration of an HD streaming channel can be built into the deployment automation, rather than needing specific expertise to correctly size the necessary processing and storage.
The presentation will also investigate how breaking free from proprietary tools - including specialized storage, serverless functions, and cloud-specific monitoring - can be crucial to achieving true portability. A systematic approach to replacing/refactoring these dependencies with cloud-agnostic and open-source alternatives enables a decoupled design that mitigates the risk of vendor lock-in. This approach simplifies updates, streamlines maintenance, and delivers predictable performance regardless of provider-imposed changes or pricing fluctuations.
This presentation will also explore how Terraform and Kubernetes enable automated deployments, container orchestration, and unified configuration management across AWS, Azure, and GCP. A robust caching layer, combined with globally distributed edge resources, minimizes latency and ensures seamless streaming experiences. Centralized analytics and logging frameworks provide complete visibility of performance metrics, while dynamic load balancing across multiple clouds offers resilience against regional outages. Kubernetes Operators abstract detailed technical dependencies, allowing the vendor to provide infrastructure and service expertise, which is built into the deployment automation to ensure consistency and accuracy. These innovations collectively maintain the high-quality, stable video service end users expect.
Sithideth Viengkhou, Riedel
As broadcast systems transition to standard commercial off-the-shelf (COTS) computer platforms, efficient data exchange between compute nodes becomes an increasingly important issue to address.
This paper proposes using Libfabric, part of the Open Fabrics Interfaces (OFI) framework, as a method for sharing memory between compute processes running on different nodes, maintaining the asynchronous nature of how memory is dealt with on COTS platforms. By offering a streamlined interface, Libfabric abstracts hardware complexity, making it compatible with both on-premises and cloud-native platforms.
The paper also examines the suitability of Libfabric for handling large media (e.g., raw video) and smaller media (e.g., mono-channel audio), with consideration of key media requirements such as latency, throughput, processing time, and cost. Finally, we will present the results from our work dealing with these types of media, which highlight how Libfabric effectively supports various memory layouts, both contiguous and non-contiguous, in the context of media transfer across various network environments.
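The non-contiguous case can be illustrated in miniature with POSIX scatter-gather I/O from Python's standard library; libfabric's vectored operations (e.g. fi_sendv) play the analogous role for fabric transfers. The planar "frame" below is invented for the sketch and is not from the paper.

```python
import os

# A planar video frame is non-contiguous: Y, U and V live in separate buffers.
y_plane, u_plane, v_plane = b"Y" * 8, b"U" * 2, b"V" * 2

read_end, write_end = os.pipe()

# Gather all three planes in a single call, with no user-space copy into an
# intermediate contiguous buffer (the shape of a libfabric vectored send).
sent = os.writev(write_end, [y_plane, u_plane, v_plane])
os.close(write_end)

received = os.read(read_end, sent)
os.close(read_end)
# 'received' now holds the planes back-to-back on the receiving side.
```

Avoiding that intermediate copy is exactly where the latency and throughput differences between contiguous and non-contiguous layouts come from.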
Andy Rayner, Appear
As the media industry moves towards software workflows for live production, there is a need for a compute-native, open interconnect of media flows. Currently, all interconnect is based on linear/synchronous streaming interfaces. This is far from ideal for software systems and is very inefficient both in use of compute and in aggregate latency.
VSF GCCG released a draft API last year, which has now been prototyped and will be demonstrated as part of the presentation.
Reference will also be made to other industry initiatives that are underway to move towards a unified and open way of solving this key challenge for the industry.
-
The CBC/Radio-Canada Toronto Project, a Foray Into Dynamic Media Facility Infrastructure | Mathieu Rochon, CBC | 10:00 am - 10:30 am
-
EBU Dynamic Media Facility: The Second Wave of Live Media Production | Vincent Trussart, Grass Valley | 10:30 am - 11:00 am
-
Ground and Cloud - Maximizing Efficiency by Using Both | John Mailhot, Imagine Communications | 11:00 am - 11:30 am
-
Breaking Cloud Barriers: Engineering a Portable, Scalable Video Platform for AWS, Azure, and GCP | David Arbuckle, MediaKind | 11:30 am - 12:00 pm
-
Asynchronous Internode Media Transfer with Libfabric | Sithideth Viengkhou, Riedel | 12:00 pm - 12:30 pm
-
Open Compute-Native Media Interconnect for Live Production | Andy Rayner, Appear | 12:30 pm - 1:00 pm