The next platform shift: Physical and edge AI, powered by Arm
As CES 2026 opens, a common thread quickly emerges across the show floor: most of what people are seeing, touching, and experiencing is already built on Arm. Arm-based platforms power the devices and systems behind the product and technology demos, including intelligent vehicles navigating complex environments, robots interacting with humans, and immersive XR devices blending the digital and physical worlds.
These demos mark a broader inflection point for AI as it becomes increasingly sophisticated, moving from perception to action in the real world. As NVIDIA CEO Jensen Huang put it in his CES 2026 keynote, “the ChatGPT moment for physical AI is here.” And it’s happening on Arm.
Built for the real world: Edge-first design and proven software ecosystem
As AI moves into the physical world, it must operate under real-world constraints. This next phase is defined by systems that respond instantly, run efficiently, and operate reliably under those constraints. That transition demands compute designed for predictable, low-latency performance, extreme power and thermal efficiency, and continuous local inference. Just as critical, safety and security must be foundational, not layered on after deployment.
This is where edge-first platforms become essential, and where Arm is uniquely positioned. Arm delivers both unmatched energy efficiency and the world’s largest software developer base, making it the natural platform for building and scaling physical and edge AI systems globally. From operating systems and middleware to AI frameworks and developer tools, partners like NVIDIA and Qualcomm have developed their technologies on Arm over decades. That maturity means innovation can move faster, scale more broadly, and deploy more safely as AI transitions from digital intelligence to physical intelligence in the real world.
The next frontier: AI that moves
At CES 2026, NVIDIA outlined its vision for robotics, with on-stage demos of robots powered by its new physical AI stack. NVIDIA unveiled open robot foundation models, simulation tools, and edge hardware – including Jetson Thor, which is built on Arm Neoverse – to accelerate AI that can reason, plan, and adapt in dynamic environments. Partners including Boston Dynamics, Caterpillar, LG Electronics, and NEURA Robotics showcased robots trained on NVIDIA’s full physical AI stack, which leverages the Arm compute platform and its deeply established software ecosystem spanning automotive, autonomous systems, and robotics.
Qualcomm is further advancing its robotics portfolio with the new Dragonwing IQ10 robotics processor for advanced use cases such as industrial robots, autonomous mobile robots (AMRs), and humanoid systems. Qualcomm’s robotics portfolio runs on the Arm compute platform, enabling energy-efficient robots and physical AI at the edge.
These robotics announcements build on technologies pioneered in automotive, an industry Arm has enabled for decades. Much like robots, AI systems in vehicles already sense their environment, make split-second decisions, and act safely in the physical world. As robotics evolves, it will increasingly mirror the complexity, safety requirements, and system architecture of modern vehicles. Many of the companies shaping the future of automotive, such as Rivian, will also design the robots of tomorrow. With the entire automotive industry already building on Arm, the transition from cars to robots is a natural one.
In automotive at CES 2026, NVIDIA debuted its DRIVE AV software in the all-new Mercedes-Benz CLA. The AV stack’s in-vehicle compute and Hyperion architecture are powered by the Arm Neoverse-based NVIDIA DRIVE AGX Thor. Meanwhile, Qualcomm’s Snapdragon Digital Chassis continues to expand and is now adopted by global automakers transitioning to AI-defined vehicles. These platforms are built on Arm’s compute efficiency and consistent software ecosystem across infotainment, advanced driver assistance systems (ADAS), and in-vehicle AI.
Scaling intelligence from edge to cloud
Beyond robotics and automotive, we’re continuing to see momentum for Arm-based platforms both in the cloud and at the edge.
NVIDIA’s new Vera Rubin AI platform includes six new chips, two of which – Vera and BlueField-4 – are built on Arm. BlueField-4, a DPU powered by the Arm Neoverse V2-based Grace CPU, delivers up to six times the compute performance of its predecessor, transforming the DPU’s role in rack-scale inference and enabling new optimizations such as a storage solution purpose-built for AI inference.
At the developer level, NVIDIA is pushing the frontier with powerful local AI systems. Developers can take advantage of the latest open and frontier AI models on a local deskside system, from 100-billion-parameter models on DGX Spark to 1-trillion-parameter models on DGX Station. Both platforms are powered by the Arm-based Grace Blackwell architecture, delivering petaflop-class performance and enabling seamless development that can scale from desk to data center.
On the personal computing front, the Windows on Arm AI PC portfolio is expanding into the mainstream, enabling OEMs to bring solutions to the mass market, extend battery life, and close the gap with legacy x86 systems.
Arm is the compute foundation powering CES 2026
What connects NVIDIA, Qualcomm, and a global ecosystem of innovators? Arm’s scalable, energy-efficient architecture.
CES 2026 is already demonstrating that the Arm compute platform powers data centers, robots, vehicles and countless edge devices, including:
- NVIDIA’s accelerated platforms, from cloud to edge;
- Qualcomm’s mobile, AI PC, XR/wearable, and automotive systems; and
- Nuro’s driverless fleets and Uber’s cloud infrastructure.
A prime example is the Nuro-Lucid-Uber partnership. Nuro’s latest driverless platform, built on Arm Neoverse, enables efficient, real-time edge AI in autonomous Lucid Gravity SUVs. These vehicles, built around NVIDIA DRIVE Thor and its Arm Neoverse V3AE cores, deliver Level 4 autonomy with safety-critical reliability. Uber, meanwhile, is scaling on Arm-based Ampere servers to lower power use while increasing cloud density, illustrating Arm’s pivotal role from cloud to car.
Why ecosystem scale wins
CES 2026 sends a clear message: AI is now becoming embedded in the world around us. Making the physical and edge AI era a reality isn’t about individual chips or product launches; it requires full-stack ecosystem scale. This means:
- Software portability across devices;
- Developer familiarity and productivity;
- Long product lifecycles with stable platforms; and
- Standards-based innovation across industries.
The next platform shift isn’t defined by model size, but by intelligence that can operate autonomously, adapt in real time, and scale efficiently from cloud to edge. It’s about systems that are designed from day one to learn continuously, distribute decision-making, and perform within real-world constraints.
Arm provides the common compute foundation that makes this possible – trusted, scalable, and optimized for efficiency. That’s why Arm shows up everywhere at CES 2026 and wherever physical AI is taking shape.