
An Update on Arm’s AI Journey Toward a Trillion Connected Devices

The Arm ecosystem recently achieved the milestone of having shipped more than 150 billion Arm-based chips to date as we progress toward 1 trillion connected devices by 2035.
By Dennis Laudick, Vice President of Marketing, Machine Learning Group, Arm

The Arm ecosystem recently achieved the milestone of having shipped more than 150 billion Arm-based chips to date as we progress toward 1 trillion connected devices by 2035. Given this latest indicator of the pervasiveness of Arm technology, it’s a good opportunity to provide an update on our efforts to enable AI to run on Arm everywhere.

Arm’s product roadmaps have always been defined by input from our ecosystem on future software and application needs across multiple markets. Software, however, isn’t just about the CPU; it demands the system-level approach Arm has always taken when architecting our IP for specific applications. And in this new computing landscape, where AI, IoT, and 5G are converging to enable new and increasingly complex use cases, a system-level approach has never been more critical.

It’s with this background that Arm’s Project Trillium (first announced in February 2018) was conceived several years ago with a simple vision: an all-inclusive solution connecting all of Arm’s AI/ML innovations across our product portfolio. That vision is being realized today as AI/ML requirements rapidly evolve from reliance on a single compute processor to a collective heterogeneous compute solution, all based on principles that have underpinned Arm’s success in many markets:

  • Software always starts with the CPU
  • Augment Arm CPUs and GPUs with scalable and flexible dedicated ML processors for a wide range of dynamic ML functions
  • Support industry-leading APIs and provide open source software to ensure maximum performance and application portability
  • Invest in tools to make developing and deploying ML on Arm easy
  • Value, nurture, and support the Arm ML ecosystem

With these principles in mind, here’s a brief review of recently announced Arm IP products designed to play a role in our comprehensive, solution-level approach to ML:

  • A 35x ML performance increase over previous generations in the Cortex-A76 and Cortex-A77 processors, designed for premium mobile and large-screen compute devices
  • Cortex-A55 with new neural network enhancements, including improved prediction, 8-bit integer matrix multiply, and new architecture instructions incorporated in the NEON pipeline that specifically benefit ML workloads (an instruction-level sketch follows this list)
  • Mali-G77 GPU, delivering a 60% ML performance increase
  • Helium CPU extensions for Cortex-M processors, providing 15x more ML performance and efficiency for IoT and other power-constrained connected devices
  • The first in a range of ML processors initially targeted for mobile, home and security devices
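
For readers curious about what the 8-bit integer support above looks like at the instruction level: Cortex-A55 and Cortex-A76 implement the Armv8.2-A dot-product instructions (SDOT/UDOT), which are exposed to C and C++ through standard ACLE NEON intrinsics. The sketch below is an illustrative example rather than Arm reference code; the function name and fixed vector length are assumptions, and it must be built for a core with the dot-product feature (for example, -march=armv8.2-a+dotprod).

    // Illustrative int8 dot product using the Armv8.2-A SDOT instruction
    // via the vdotq_s32 ACLE intrinsic (available on Cortex-A55/A76-class cores).
    #include <arm_neon.h>
    #include <cstdint>
    #include <cstdio>

    // Dot product of two int8 vectors; n is assumed to be a multiple of 16.
    int32_t dot_s8(const int8_t* a, const int8_t* b, int n)
    {
        int32x4_t acc = vdupq_n_s32(0);       // four 32-bit accumulators
        for (int i = 0; i < n; i += 16)
        {
            int8x16_t va = vld1q_s8(a + i);   // load 16 signed 8-bit values
            int8x16_t vb = vld1q_s8(b + i);
            acc = vdotq_s32(acc, va, vb);     // each lane accumulates a 4-way int8 dot product
        }
        return vaddvq_s32(acc);               // horizontal add across the four lanes
    }

    int main()
    {
        int8_t a[16], b[16];
        for (int i = 0; i < 16; ++i) { a[i] = 1; b[i] = static_cast<int8_t>(i); }
        std::printf("dot = %d\n", dot_s8(a, b, 16));   // 0 + 1 + ... + 15 = 120
        return 0;
    }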

Additionally, we continue to make performance optimizations to our open-source ML projects, Arm NN, the Arm Compute Library, and CMSIS-NN, all of which have shipped in hundreds of millions of devices a year over the last few years.
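
As a flavor of how these libraries are used in practice, the following is a minimal sketch of running a single ReLU activation on the CPU with the Arm Compute Library’s NEON backend. It follows the pattern of the library’s public examples, but the tensor shape, values, and setup here are illustrative assumptions rather than code taken from the project.

    // Minimal Arm Compute Library sketch: ReLU on a small FP32 tensor using the
    // CPU (NEON) backend. Tensor shape and values are illustrative only.
    #include "arm_compute/core/Types.h"
    #include "arm_compute/runtime/NEON/NEFunctions.h"
    #include "arm_compute/runtime/Tensor.h"

    using namespace arm_compute;

    int main()
    {
        // Describe two 1-D tensors of 8 FP32 values each.
        Tensor input, output;
        input.allocator()->init(TensorInfo(TensorShape(8U), 1, DataType::F32));
        output.allocator()->init(TensorInfo(TensorShape(8U), 1, DataType::F32));

        // Configure a ReLU activation: output = max(input, 0).
        NEActivationLayer relu;
        relu.configure(&input, &output,
                       ActivationLayerInfo(ActivationLayerInfo::ActivationFunction::RELU));

        // Allocate backing memory, fill the input, then run the operator.
        input.allocator()->allocate();
        output.allocator()->allocate();
        auto* in = reinterpret_cast<float*>(input.buffer());
        for (int i = 0; i < 8; ++i)
            in[i] = static_cast<float>(i - 4);   // mix of negative and positive values
        relu.run();
        return 0;
    }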

The road ahead

Many of the aforementioned products were already in development before SoftBank acquired Arm in 2016. Following that acquisition, Arm became a private company, and SoftBank gave us the freedom to increase our investment in AI/ML across all of our product roadmaps, which enabled us to accelerate the development of today’s products. A recent example is our plan to support the Bfloat16 (BF16) data type for ML workloads in the next revision of the Armv8-A architecture; it will be implemented in upcoming CPUs from Arm and its partners.
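
To make the BF16 announcement concrete: bfloat16 keeps the same sign bit and 8-bit exponent as IEEE single precision but only 7 mantissa bits, so a float32 value maps to BF16 by keeping its upper 16 bits (hardware typically rounds rather than truncates). The snippet below is a plain C++ illustration of the format itself, not of Arm’s new BF16 instructions.

    // Illustration of the BF16 format: the top 16 bits of an IEEE-754 float32
    // (same sign and 8-bit exponent, but only 7 mantissa bits).
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // FP32 -> BF16 by truncating the low 16 mantissa bits.
    // (Hardware and libraries usually round to nearest even instead.)
    static uint16_t float_to_bf16(float f)
    {
        uint32_t bits;
        std::memcpy(&bits, &f, sizeof(bits));
        return static_cast<uint16_t>(bits >> 16);
    }

    // BF16 -> FP32 by placing the 16 bits in the high half of a float.
    static float bf16_to_float(uint16_t h)
    {
        uint32_t bits = static_cast<uint32_t>(h) << 16;
        float f;
        std::memcpy(&f, &bits, sizeof(f));
        return f;
    }

    int main()
    {
        float x = 3.14159f;
        uint16_t bf = float_to_bf16(x);
        std::printf("%f -> 0x%04x -> %f\n", x, bf, bf16_to_float(bf));
        return 0;
    }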

There is much more to come as we continue to build on years of product development with a range of CPU and GPU improvements, new NPU product lines, and enhanced software, tools, and developer support. At the upcoming Arm TechCon conference (Oct. 8-10 in San Jose, CA), Arm Machine Learning Fellow Ian Bratt will provide a glimpse of what’s ahead for Arm in ML as part of his day-two keynote.

Talking with our silicon partners, device makers, and a diverse range of service providers about their future AI/ML requirements at our annual partner meeting last month provided further validation that our roadmaps and system-level design principles are hitting the mark. I walked away from those three days of meetings with our thriving ML ecosystem with one realization: AI everywhere on Arm is no longer just a vision but an inevitability.


Media Contacts

Kristen Ray
Director, Public Relations, Arm
+1 (512) 939-9877