Redefining Audio Experiences with Eclipsa Audio by Google, Samsung, Arm and the Alliance for Open Media
Imagine sitting in your living room and watching a movie. As the helicopter flies overhead, you can hear it moving seamlessly above you, and then from one side of the room to the other, creating a truly immersive experience. This is now possible because of Eclipsa Audio, based on Immersive Audio Model and Formats (IAMF), a new open-source audio technology that uses fast and efficient processing for a variety of common consumer products – from high-end cinema systems featuring premium TVs to entry-level mobile devices and TVs.
The technology was developed by Google, Samsung, Arm, and the Alliance for Open Media (the organization behind the popular AV1 video format), and was launched at the Consumer Electronics Show (CES) 2025.
What is Eclipsa Audio?
Eclipsa Audio is a multi-channel surround audio format that leverages IAMF to produce an immersive listening experience. It will revolutionize the way we experience sound by spreading audio vertically as well as horizontally. This creates a three-dimensional soundscape that closely mimics natural settings, bringing movies, TV shows, and music to life.
Eclipsa Audio dynamically adjusts audio levels for different scenes, ensuring optimal sound quality. Additionally, it offers customization features that allow listeners to tweak the sound to their preferences, helping to ensure that every listening experience is personalized and unique.
An Eclipsa Audio bitstream can contain up to 28 input channels, which are rendered to a set of output speakers or headphones. These input channels can be fixed, like a microphone in an orchestra, or dynamic, like a helicopter moving through a sound field in an action movie.
Eclipsa Audio also features binaural rendering, which is essential for mobile applications when delivering immersive audio through headphones. Finally, the new audio technology supports content creation across consumer devices, enabling users to create their own immersive audio experiences.
How Arm worked with Google during IAMF development
Arm has been a strategic partner throughout the development of IAMF, working closely with Google’s team to optimize the technology’s performance. Our contributions focused on enhancing the efficiency of the Opus codec and the IAMF library (libiamf), ensuring that it delivers the best possible performance on Arm CPUs that are pervasive across today’s mobile devices and TVs.
Arm CPUs have included the NEON SIMD extension since 2005, and it has evolved significantly since then, providing remarkable performance boosts for DSP tasks like audio and video processing. For IAMF specifically, Arm’s engineers have focused on optimizations that allow real-time decoding of complex bitstreams with minimal CPU usage, ensuring reliable performance even when CPUs are busy processing other elements of the experience. This is particularly important for mobile applications where power efficiency is crucial.
Performance Enhancements
Arm has been upstreaming patches to the Opus codec and libiamf, focusing on floating-point implementations for optimal performance. These enhancements include:
- NEON Intrinsic Optimizations: Supporting various Arm architectures (armv7+neon, armv8-a32, armv8-a64, and armv9), these optimizations speed up float-to-int16 conversion and soft clipping, and provide CPU-specific optimizations for matrix multiplication and channel unzipping in multi-channel speaker output.
- Performance Improvements: Significant performance uplifts were observed across different speaker configurations (Stereo, 5.1, 9.1.6) on devices like Quartz64 and Google Pixel 7. For instance, 9.1.6 output showed over 160% improvement on the Arm Cortex-A55 CPU cores in the Pixel 7.
- Decoding Efficiency: After optimizations, all test files decode in less than 16% of real-time on AArch64 and less than 23% on Arm32 (that is, one second of audio decodes in under 0.16 and 0.23 seconds respectively), making them highly efficient on the Cortex-A55.
Core Technologies
IAMF supports several codecs, including LPCM, AAC, FLAC, and Opus. Opus, being the most modern codec, is likely to be the preferred choice. We have further optimized Opus for the Arm architecture, ensuring it performs efficiently within the IAMF framework.
The IAMF library (libiamf) decodes IAMF bitstreams and produces speaker output for various sound systems. Arm introduced a framework for CPU specializations, using compile-time feature detection to select the best implementation for the platform the library is built for.
By optimizing key components like the Opus codec and libiamf library, our engineers ensured that IAMF delivers unparalleled performance on Arm CPUs. This not only enhances the user experience but also demonstrates the value of our technology in cutting-edge applications.
IAMF’s open standard approach, supported by the Alliance for Open Media, aligns with Arm’s vision of broad accessibility and innovation. This partnership highlights our role in driving the future of immersive audio, making high-quality sound experiences available across a wide range of existing and future devices, from high-end home cinema systems to entry-level mobile devices and TVs.
The future of immersive audio is here
IAMF represents a significant leap forward in immersive audio technology, offering a versatile, high-quality listening experience that combines AI and deep-learning techniques with the robust performance optimizations contributed by Arm.
The future of immersive audio is here and is now more accessible than ever. Whether you’re a casual listener or an audiophile, IAMF promises to transform audio experiences, bringing you closer to the action than ever before.
Any re-use permitted for informational and non-commercial or personal use only.