Why CPUs sit at the center of AI infrastructure: Five takeaways from Futurum’s latest report
AI is not a single workload with a single ideal infrastructure. It is a diverse set of workloads that demand a cohesive, system-level strategy to deliver performance efficiently and at scale. At the center of that strategy is the CPU, which acts as the system intelligence layer coordinating compute across cloud, data center, edge, and emerging physical AI systems.
These are the central conclusions of Futurum’s latest report, Arm at the Center of the AI and Data Center Revolution, which explains AI’s current inflection point and Arm’s central role. Below are five key takeaways from the market report.
1. AI is a system challenge
While accelerators tend to dominate the conversation, the Futurum report notes that AI performance at scale is ultimately determined by how intelligently systems are orchestrated. CPUs play a central role in coordinating heterogeneous compute, managing data movement, and sustaining performance under power and cost constraints. As the report notes, the industry conversation is moving from “how much raw compute can we deploy?” to “how intelligently can we orchestrate compute across environments?”
2. CPUs are the data pipeline that makes AI work
CPUs increasingly act as the AI head node – essentially the system brain – coordinating and orchestrating across the entire system, from feeding data and managing memory to enforcing security and coordinating execution. According to the report, modern AI pipelines rely on multiple CPUs per accelerator, not fewer. In fact, Arm CPUs are embedded across SmartNICs, DPUs, NPUs, and networking devices to offload and manage AI workflows efficiently. In short, CPUs are not peripheral to AI systems; they are mission-critical infrastructure.
3. Performance-per-watt puts CPUs at the center of AI economics
As AI becomes increasingly constrained by energy, performance-per-watt has emerged as a defining metric for both cost efficiency and sustainability. Futurum’s independent Signal65 benchmarks show AWS Graviton4 delivering double-digit performance gains over comparable x86 CPUs while maintaining a clear total cost of ownership (TCO) advantage. This underscores how Arm-based CPUs now compete on absolute performance, while continuing to lead on efficiency. In an energy-constrained world, CPUs that maximize throughput per watt become strategic assets.
4. Hyperscalers are standardizing on Arm CPUs, not experimenting
The report makes clear that hyperscaler adoption of Arm CPUs is no longer exploratory. AWS, Microsoft, Google, and NVIDIA are deploying Arm CPUs across cloud infrastructure, custom silicon, and AI-converged data centers. Nearly 50% of compute shipped to top hyperscalers is now Arm-based. This is alongside a 100% increase in Arm’s royalty revenue from data center customers and rising core counts across the leading hyperscalers, as set out in the recent Arm quarterly earnings. The overall picture is a long-term shift to Arm-based infrastructure.
5. Inference, edge, and physical AI make CPUs even more critical
As AI inference overtakes training and intelligence moves closer to the edge and physical world, CPUs become even more central. The report highlights that edge AI, robotics, automotive systems, and autonomous machines depend on real-time, system-level computing, not just model execution – and edge AI in particular will rely on growing numbers of CPUs to deliver it. Meanwhile, the vast majority of today’s robotic and physical AI systems rely on Arm-based processors for perception, control, and system coordination.
Why this matters
Taken together, Futurum’s report reinforces a structural shift in AI infrastructure. The winners in AI will not be defined by a single breakthrough accelerator, but by those who can build efficient, integrated systems under real-world power, cost, and scalability constraints. CPUs – and the system intelligence they enable – are central to that future.
Download the full Futurum report Arm at the Center of the AI and Data Center Revolution to explore the data, benchmarks, and architectural analysis behind these conclusions.
Any re-use permitted for informational and non-commercial or personal use only.