Why Arm is the Compute Platform for All AI Workloads
For AI, no single piece of hardware or computing component will be the one-size-fits-all solution for every workload. AI needs to be distributed across the entire modern computing landscape, from cloud to edge, and that requires a heterogeneous computing platform with the flexibility to use different computational engines, including the CPU, GPU and NPU, for different AI use cases and demands.
The Arm CPU already provides a foundation for accelerated AI everywhere, from the smallest embedded device to the largest data center, thanks to its performance and efficiency, its pervasiveness, and its ease of programmability and flexibility.
Focusing on flexibility, there are three key reasons why it is hugely beneficial to the ecosystem. Firstly, the Arm CPU can process a broad range of AI inference use cases, many of which already run across billions of devices, from today's smartphones to cloud and data center servers worldwide; beyond inference, the CPU is also often used for other tasks in the stack, such as data pre-processing and orchestration. Secondly, developers can run a broader range of software, in a greater variety of data formats, without needing to build multiple versions of the code. And thirdly, the CPU's flexibility makes it the perfect partner for accelerated AI workloads.
Delivering diversity and choice to enable the industry to deploy AI compute their way
Alongside the CPU portfolio, the Arm compute platform includes AI accelerator technologies, such as GPUs and NPUs, which are being integrated with the CPU across various markets.
In mobile, Arm Compute Subsystems (CSS) for Client features an Armv9.2 CPU cluster integrated with the Arm Immortalis-G925 GPU, offering acceleration for a range of AI use cases, including image segmentation, object detection, natural language processing and speech-to-text. In IoT, the Arm Ethos-U85 NPU is designed to run alongside Cortex-A-based systems in applications that require accelerated AI performance, such as factory automation.
In addition to Arm's own accelerator technologies, our CPUs give partners the flexibility to create their own customized, differentiated silicon solutions. For example, NVIDIA's Grace Blackwell and Grace Hopper superchips for AI infrastructure both pair Arm CPUs with NVIDIA's AI accelerator technologies to deliver significant uplifts in AI performance.
The Grace Blackwell superchip combines NVIDIA's Blackwell GPU architecture with the Arm Neoverse-based Grace CPU. Arm's unique offering enabled NVIDIA to make system-level design optimizations, delivering a 25x reduction in energy consumption and a 30x increase in per-GPU performance compared to NVIDIA H100 GPUs. Specifically, NVIDIA was able to implement its own high-bandwidth NVLink interconnect technology, improving data bandwidth and latency between the CPU, GPU and memory, an optimization made possible by the flexibility of the Arm Neoverse platform.
Arm is committed to bringing these AI acceleration opportunities across the ecosystem through Arm Total Design. The program provides faster access to Arm's CSS technology, unlocking hardware and software advancements to drive AI and silicon innovation and enabling the quicker development and deployment of AI-optimized silicon solutions.
The Arm architecture: Delivering the unique flexibility AI demands
Central to the flexibility of Arm CPU designs is our industry-leading architecture. It offers a foundational platform that can be closely integrated with AI accelerator technologies and supports vector lengths from 128 bits to 2048 bits, allowing neural-network workloads to operate on many data elements in parallel while the same code runs across implementations with very different vector widths.
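To make that vector-length flexibility concrete, below is a minimal sketch of vector-length-agnostic code written with the SVE intrinsics from the Arm C Language Extensions (ACLE). It is illustrative only; the function name and build flag (for example, -march=armv8-a+sve) are assumptions, not taken from Arm documentation. The same compiled loop runs correctly on hardware with any SVE vector width from 128 to 2048 bits, because the lane count and loop predicate are determined at run time rather than fixed at compile time.

    // Vector-length-agnostic multiply-add: y[i] = a * x[i] + y[i],
    // the kind of inner loop that dominates neural-network inference.
    #include <arm_sve.h>
    #include <stdint.h>

    void saxpy_sve(float *y, const float *x, float a, int64_t n) {
        for (int64_t i = 0; i < n; i += svcntw()) {   // svcntw(): 32-bit lanes per vector, known only at run time
            svbool_t pg = svwhilelt_b32(i, n);        // predicate masks off the loop tail
            svfloat32_t xv = svld1_f32(pg, &x[i]);    // predicated loads
            svfloat32_t yv = svld1_f32(pg, &y[i]);
            yv = svmla_n_f32_x(pg, yv, xv, a);        // yv + a * xv
            svst1_f32(pg, &y[i], yv);                 // predicated store
        }
    }

The same binary can therefore scale from a mobile core with 128-bit vectors to a server core with much wider vectors, without recompilation, which is one practical meaning of the flexibility described above.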
The flexibility of the Arm architecture enables diverse customization opportunities for the entire silicon ecosystem, building on our heritage of enabling partners to create their own differentiated silicon solutions as quickly as possible. This flexibility also allows Arm to innovate the architecture continuously, introducing critical instructions and features on a regular cadence that accelerate AI computation and benefit the entire ecosystem, from leading silicon partners to the more than 20 million software developers building on the Arm compute platform.
This started with the Armv7 architecture, which introduced the Advanced SIMD (NEON) extensions as Arm's initial step into machine learning (ML) workloads. The architecture has been enhanced steadily since: Armv8 added vector dot-product and matrix-multiply instructions, and Armv9 introduced the Arm Scalable Vector Extension 2 (SVE2) and the new Arm Scalable Matrix Extension (SME), driving higher compute performance and lower power consumption for a range of generative AI workloads and use cases.
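As an illustration of those dot-product instructions, the hedged sketch below uses the vdotq_s32 NEON intrinsic, which maps to the SDOT instruction added in Armv8.2-A (built with a flag such as -march=armv8.2-a+dotprod; the function name is illustrative). Each call multiplies sixteen pairs of int8 values and accumulates them into four 32-bit lanes in a single instruction, which is exactly the pattern int8-quantized inference relies on.

    // Dot product of two int8 vectors, as used in int8-quantized inference.
    #include <arm_neon.h>
    #include <stdint.h>

    int32_t dot_s8(const int8_t *a, const int8_t *b, int n) {
        int32x4_t acc = vdupq_n_s32(0);
        int i = 0;
        for (; i + 16 <= n; i += 16) {
            int8x16_t va = vld1q_s8(&a[i]);
            int8x16_t vb = vld1q_s8(&b[i]);
            acc = vdotq_s32(acc, va, vb);   // 16 int8 multiply-accumulates per instruction
        }
        int32_t sum = vaddvq_s32(acc);      // horizontal add of the four accumulator lanes
        for (; i < n; i++)                  // scalar tail for leftover elements
            sum += (int32_t)a[i] * (int32_t)b[i];
        return sum;
    }

SVE2 and SME generalize this idea to scalable vectors and to whole matrix tiles respectively, which is where the gains for generative AI workloads referenced above come from.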
Seamless integration with AI accelerator technologies
Arm is the compute platform for the age of AI, driving ongoing architectural innovation that keeps pace with the evolution of AI applications as they become faster, more interactive and more immersive. The Arm CPU can be seamlessly augmented and integrated with AI accelerator technologies, such as GPUs and NPUs, as part of a flexible heterogeneous computing approach to AI workloads.
While the Arm CPU is the practical choice for processing many AI inference workloads, its flexibility makes it the perfect companion for accelerator technologies where more powerful, more performant AI is needed for certain use cases and computation demands. For our technology partners, this flexibility delivers extensive customization options, enabling them to build complete silicon solutions for their AI workloads.