Arm Newsroom Blog

Arm DevSummit 2021: Day 2 Keynotes

Arm executive and partner keynote highlights from the Arm Blueprint team on day two of Arm DevSummit 2021
By Arm Editorial Team

The Arm Blueprint team is attending Arm DevSummit 2021, a three-day virtual event that provides a place for Arm software and hardware developers to learn, connect and develop together.

Over the next three days we’ll be bringing you key highlights and takeaways from Arm DevSummit 2021 keynotes by Arm executives and our partners.

Ahead of the Plasticity Curve: AI, Biology and Technology

Ian Bratt, Fellow and Senior Director of Technology, Machine Learning, Arm

Artificial Neural Networks (or ANNs) were first applied to image recognition – their ability to outperform humans in recognizing patterns in images was a big deal. Since then we’ve seen them applied to speech and language tasks, more complex vision tasks and now, says Ian Bratt, we’re at a point where we’re seeing ANNs applied to very difficult problems – even electronic design automation (EDA).

What makes ANNs so universally applicable as a tool, asks Bratt? Firstly, they are universal function approximators: they can approximate essentially any function on any input space. [I’m going to have to Google those words later – Ed].
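If, like our editor, you want those words unpacked, here is a minimal sketch (ours, not Bratt’s) of universal function approximation: a one-hidden-layer network trained with plain NumPy and gradient descent to fit sin(x). Every size and learning rate below is illustrative.

```python
# A tiny universal-function-approximation demo: fit sin(x) with a
# one-hidden-layer tanh network, trained by hand-written backprop.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(x)

w1 = rng.normal(0.0, 1.0, (1, 32))   # input -> 32 tanh hidden units
b1 = np.zeros(32)
w2 = rng.normal(0.0, 0.1, (32, 1))   # hidden -> scalar output
b2 = np.zeros(1)

lr = 0.1
for step in range(5000):
    h = np.tanh(x @ w1 + b1)         # forward pass
    err = (h @ w2 + b2) - y          # prediction error
    # Backpropagate the mean-squared-error gradient through both layers.
    grad_w2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ w2.T) * (1 - h ** 2)
    grad_w1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)
    w2 -= lr * grad_w2; b2 -= lr * grad_b2
    w1 -= lr * grad_w1; b1 -= lr * grad_b1

print("final MSE:", float((((np.tanh(x @ w1 + b1) @ w2 + b2) - y) ** 2).mean()))
```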

Secondly, although ANNs were originally inspired by biological networks, for a long time there was little evidence that they had much to do with the brain. Over the last few years, says Bratt, this has changed – with the advent of technologies that allow neuroscientists to record from clusters of neurons in a controlled environment. We’re now seeing that, in fact, the abstract computations performed by ANNs are similar to those performed by the brain.

Today, says Bratt, there are countless ANNs deployed on Arm, from smart cameras to autonomous vehicles. We’ve also seen huge advancements in natural language processing (NLP) and can now deploy it locally (as opposed to remotely, as per a home smart speaker linked to the cloud) on resource-constrained Arm devices.

Bratt’s back into biology. Neuroscientists are still attempting to make sense of how the brain works but they know one thing – the brain is not just one large neural network. It is a collection of specific networks, each optimized for a given task such as movement, vision, balance, memory and so on. Working together, these individual ‘processors’ achieve all the amazing things humans are capable of.

Each of these areas has taken millions of years to evolve. We don’t have that luxury with neural networks, so we need to develop tools that short-circuit evolution and enable quick exploration.

Bratt names architectural enhancements such as bfloat16, a number format designed for machine learning (ML); matrix multiply instructions; and the wider vectors enabled by the Scalable Vector Extension (SVE). Many of these new features will be present in Neoverse V1, Arm’s upcoming server-class CPU.
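For the curious: bfloat16 keeps float32’s 8-bit exponent (so the same dynamic range) but only 7 mantissa bits, making it effectively float32 with the low 16 bits dropped. A hedged NumPy illustration of that relationship (real hardware typically rounds to nearest, where this sketch simply truncates):

```python
# bfloat16 = float32 with the low 16 mantissa bits dropped: same 8-bit
# exponent (same dynamic range), but only 7 bits of mantissa.
import numpy as np

def to_bfloat16(x):
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)  # truncate low 16 bits

a = np.array([3.14159265, 1.0e-8, 3.0e38], dtype=np.float32)
print(to_bfloat16(a))  # full float32 range survives, but only ~2-3 decimal digits
```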

Of course, says Bratt, it’s more than just adding instructions and improving hardware. We also need to provide developers with the software, tools and libraries to enable ML performance in their applications.

Bratt mentions that Arm has been working with the Android team to ensure that new ML-related features in the latest IP can immediately be used via standard Google APIs such as the Android Neural Networks API (NNAPI) and frameworks like TensorFlow Lite (TFLite), so when silicon with new features is available, the performance is immediately unlocked.
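As a rough sketch of the developer-facing side, here is minimal TFLite inference in Python; the model path is a placeholder. On Android, the same model is typically run through an Interpreter configured with the NNAPI delegate, which is how the hardware features Bratt describes get picked up without code changes.

```python
# A minimal TFLite inference loop; "model.tflite" is a placeholder path.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a random tensor of the right shape and dtype, then run inference.
x = np.random.random_sample(tuple(inp["shape"])).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```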

He goes on to explore some fantastic examples of Arm partners developing AI solutions using our technology. One example is Arçelik [Arçelik Tackles Climate Crisis with AI-Enabled Fridge].

In the rest of his keynote, Bratt provides even deeper insight into how Arm is enabling AI across its ecosystem, from the Ethos-U55 and Ethos-U65 microNPUs up to GPUs like the Mali-G76. Want the detail? Click below to watch the keynote.

Ahead of the Plasticity Curve: AI, Biology and Technology

Watch this session now at Arm DevSummit 2021. Registration required.

Talking Tech [Panel]

Rene Haas, President, IP Products Group (IPG), Arm

In this technical panel session, Rene Haas is joined by Suraj Gajendra, Senior Director, Technology Strategy, A&I, Arm, Andrew Rose, Chief System Architect and Fellow, Arm and Mark Hambleton, Vice President, Open Source Software, Arm.

Haas welcomes the panel, stating that their aim today is to dive deep into Arm’s forward-looking engineering thinking, teasing out some of the reasons why Arm is taking certain technology directions – such as the new Armv9 architecture’s support for Confidential Computing.

First question from Haas: If we take a look at the big picture, what is driving the change to computing?

Computing has changed over time because workloads have continued to evolve, says Mark Hambleton. big.LITTLE was a response to an evolving smartphone ecosystem: the majority of smartphone apps and processes could run comfortably on small, low-power cores, but there were heavier use cases (such as 3D gaming and HD video) that needed considerably more performance. big.LITTLE enabled our partners to build specialized SoCs that could cope with these performance extremes.
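On a Linux-based big.LITTLE device you can see that asymmetry for yourself. A hedged sketch, assuming the kernel exposes the standard cpufreq sysfs entries (not every device does):

```python
# List each core's maximum clock rate; on a big.LITTLE SoC the big
# cores report noticeably higher values than the LITTLE cores.
import glob

for path in sorted(glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq")):
    cpu = path.split("/")[5]             # e.g. "cpu0"
    with open(path) as f:
        khz = int(f.read())
    print(f"{cpu}: {khz // 1000} MHz")
```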

Now, in the age of AI, we have new workloads and lots of need for experimentation and specialized processing, says Hambleton.

Second question: We’ve talked a lot about this ‘specialized processing’. What do we mean by that and why should developers care?

Specialized processing enables optimized workloads for target hardware, says Suraj Gajendra. Developers’ apps will consume fewer cycles, which means lower power consumption and better response time – which in turn means new features can be added!

Arm is focusing on specialized processing so that common architectural features and IP can propagate throughout the Arm ecosystem, giving developers common features no matter the type of system they are targeting.

This is an important point, says Andrew Rose. For CPU architecture features this is fairly obvious; for GPUs and NPUs, we’re enabling the most popular frameworks and APIs, such as Vulkan, OpenGL ES and TensorFlow.

Third question: The world is looking for more and more of a ‘platform approach’ to software. Why do you think that is, Haas asks Rose.

The world is changing, says Rose. Especially if you look at the embedded world, it’s very much changing from connected compute to embedded connected compute to networks of embedded compute. To make that work in the real world, you need to be able to get software from different developers and different communities actually landing on those pieces of hardware.

It becomes a lot easier if you start thinking in terms of platforms, he says. You know what you’re going to land on.

With this hybridization of workloads, it’s becoming even more important to have a standard set of APIs and a standard software platform to land your hybrid workload on top of, agrees Hambleton. But as you do that, it has a huge effect on trust and security and how and where you allow parts of your data, parts of your workload to reside.

Fourth question: Security has been an important theme for a long time, but lately it seems to have got a lot more ‘real’. People are putting a lot more thought into securing their designs. What are your views on that?

Today, more and more workloads are being deployed in the cloud and deployed on edge devices, says Gajendra. This adds another whole level of security requirement. The entire infrastructure from cloud to edge has to be very secure. Security becomes one of the fundamental aspects of development going forward.

The panel goes on to debate what Arm is doing to make Confidential Computing technology, a major part of Armv9, easier for developers to use and deploy.

Click below to watch the session and discover how all of this translates to opportunity for the Arm ecosystem over the next five years.

Talking Tech

Watch this session now at Arm DevSummit 2021. Registration required.

AWS and Arm – Empowering Developers to Thrive from Cloud to Edge [Panel]

Dipti Vachani, SVP & GM, Automotive & IoT, Arm

In this panel session, Dipti Vachani is joined by Bill Foy, Director, World-Wide Automotive Go-to-Market, AWS, Vin Sharma, GM and Director, ML Engines and Edge ML, AWS AI, AWS and Raj Pai, Vice President, Product Management, Amazon EC2, AWS.

AWS and Arm have a long history of joint innovation. From early Arm-based offload cards to the latest generation Graviton2 processors based on Arm Neoverse, developers have benefited from significant price-performance gains in Amazon EC2.

Raj Pai looks back on this journey – in 2012 they started looking at how to improve performance for customers by moving virtualization to Arm-based offload cards embedded in their servers.

We saw the price-performance that Arm delivered, and wondered what Arm-based server chips might enable, says Pai. Creating Graviton, and then Graviton2, really unlocked broad adoption of Arm on AWS.

Now AWS has thousands of customers running on Graviton2 – Pai names Snap, Lyft, SmugMug, NextRoll and Intuit as customers who have all seen a 30-50 percent improvement in price-performance by lifting and shifting their workloads to Graviton2.
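For a taste of how little ‘lift and shift’ can involve, here is a hedged boto3 sketch of launching a Graviton2-based instance: the only Arm-specific choices are an arm64 AMI (the ID below is a placeholder) and an m6g instance type.

```python
# Launch a Graviton2 (m6g) instance with boto3; look up the arm64
# AMI for your region and substitute it for the placeholder ID.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder arm64 AMI
    InstanceType="m6g.large",         # Graviton2-based instance family
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])
```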

We’re also making it easy and fun for customers to get started and learn how to use Graviton2 through the Graviton Challenge, says Pai.

It’s really cool to see how hard you’ve worked on the ‘out-of-box experience’, says Vachani. “Free, low cost and easy speaks to the heart of developers.”

Vachani says that while AWS is well known for getting things done in the cloud, perhaps not as many people know just how much it is engaged in the automotive sector.

We’re working with almost every major name in the automotive industry to enable their digital transformation, says Bill Foy. We’ve had thousands of activations over the last couple of years.

Where does hardware meet software in automotive, asks Vachani.

The biggest trend by far is the move to the software-defined vehicle, says Foy: cars whose features are determined primarily by software, running over 100 million lines of code.

A major advantage is the ability to modify and upgrade vehicles or supply optional features that drivers can purchase and download if needed, says Foy. But all of this means it’s imperative to deliver a standardized framework that enables proven cloud-native technologies that work at scale.

This is why we’re so excited to be working with Arm on SOAFEE, says Foy.

When it comes to autonomous vehicles and machines, one of the key workloads is machine learning (ML), says Vachani. What’s AWS’ biggest opportunity here?

Vin Sharma points out that Amazon has invested in AI and ML to recommend products, streamline its supply chain and improve capacity planning. We’re applying ML to every aspect of our business, he says. We’ve already helped over 100,000 customers use AWS for ML workloads.

Now, says Sharma, we’re super excited to be bringing this ML capability to the AI edge, from servers and gateways down to endpoint devices and sensors.

How can developers get their hands on all of this? Click below to watch the session on demand and find out.

Empowering Developers to Thrive from Cloud to Edge

Watch this session now at Arm DevSummit 2021. Registration required.
