Arm Newsroom Podcast

The Future of Autonomous Driving

The Arm Podcast · Arm Viewpoints: The future of autonomous driving

Listen now on:

Apple Podcasts · Spotify

Summary

In this Arm Viewpoints podcast, we host a fascinating discussion between Suraj Gajendra, VP Product and Solutions, Automotive Line of Business, Arm, and Silvius Rus, VP of Software at Wayve, on the current landscape and future of autonomous driving.

The podcast covers the following areas:

  • Autonomous driving and the role of AI in the car today;
  • The technology behind autonomous driving, from hardware to software; and
  • The next stage of autonomous driving and role of end-to-end AI.

To learn more about Wayve and its journey to becoming a leading autonomous driving company, listen to this Tech Unheard episode with its co-founder and CEO Alex Kendall and Arm CEO Rene Haas.

Speakers

Suraj Gajendra, VP Product and Solutions, Automotive Line of Business, Arm

As VP of Automotive Products and Solutions in the Automotive Line of Business at Arm, Suraj Gajendra leads a comprehensive automotive strategy bringing together Arm’s IP products and software ecosystem initiatives, delivering best-in-class solutions to the rapidly growing and complex automotive industry. Prior to this, Suraj led the technology strategy team in the Automotive and IoT Lines of Business at Arm, and he has an extensive previous career and background within the wider tech industry.

Silvius Rus, VP, Software, Wayve

Silvius Rus is the VP of Software at Wayve, where he leads the Software organization, including robotics, data, compute, machine learning and infrastructure. Silvius is responsible for building Fleet Learning technology to train and deploy Wayve’s AI Driver software at scale.

Before Wayve, Silvius was instrumental in delivering the platform software for all of Google’s datacenter compute infrastructure, including machine learning and Cloud Infrastructure as a Service platform. He also has a PhD in Computer Science from Texas A&M University with a focus on High Performance Computing.

Brian Fuller

Host Brian Fuller is an experienced writer, journalist and communications/content marketing strategist specializing in both traditional publishing and evolving content-marketing technologies. He has held various leadership roles, currently as Editor-in-Chief at Arm and formerly at Cadence Design Systems, Inc. Prior to his content-marketing work inside corporations, he was a wire-service reporter and business editor before joining EE Times where he spent nearly 20 years in various roles, including editor-in-chief and publisher. He holds a B.A. in English from UCLA.

Transcript

Episode Highlights

00:02:00 The Rise of the Software-Defined Vehicle
Suraj Gajendra outlines how compute architecture and cloud-to-car integration have rapidly evolved over the past five years to support advanced ADAS and autonomous driving.

00:05:00 A Realistic Roadmap for Autonomy
Silvius Rus breaks down the phased path to driverless vehicles—from hands-off, eyes-on to full autonomy—and where we are now in that journey.

00:08:00 Replacing Rules with Learning
Silvius explains how Wayve is pioneering an end-to-end AI approach that mimics human driving through deep learning, rather than relying on brittle rule-based systems.

00:11:00 A Zero-Shot Drive from London to Cambridge
Silvius shares insights from a real-world, fully autonomous journey—navigating new roads and cyclists with no prior map data, using perception alone.

00:22:00 What “End-to-End AI” Really Means
The duo dives into what defines end-to-end AI in automotive: learning from raw input to steering output, simplifying the tech stack and improving scalability.

00:32:00 The Future of Driverless: Timeline vs. Trajectory
While neither guest sets a date, both highlight the acceleration of learning and deployment. Hands-off autonomy is here; full driverless is a matter of engineering and scale.

***

Brian: [00:00:00] Hello and welcome to the Arm Viewpoints Podcast, where we dive into technology topics at the intersection of AI and human imagination. I’m your host, Brian Fuller. Today we’re going for a little ride into the state of play in automotive electronics and autonomous vehicles at one of the most pivotal moments in the history of that industry.

Our guests today are Silvius Rus, vice president of software at Wayve, and Suraj Gajendra, vice president of product management in the Automotive line of business at Arm. In today’s interview, we discuss the transformation of the automotive industry, the current state and future of autonomous driving, safety and autonomous vehicles in the age of AI, implementation factors and reliability for autonomous systems, understanding end-to-end AI in autonomous driving, the future of autonomous driving and [00:01:00] AI integration, and much, much more.

Whether you’re a developer in the automotive space or a tech-savvy vehicle consumer, you’re gonna wanna listen to this from start to finish.

So without further delay, let’s get rolling. So Suraj, Silvius, welcome! Good to see you guys. Thanks for taking the time.

Suraj: Thank you Brian. Wonderful to be here with my friend, Silvius.

Brian: I know you guys aren’t in a car. That was what we had envisioned, but we’re gonna muddle on, so let me start with Suraj.

The automotive industry is obviously undergoing an amazing, unprecedented transformation right now. Give us an overview from your perspective of what’s happening now.

Suraj: Until 2018, 2017, we were talking about autonomous driving becoming a thing in the future. And obviously there were some bold claims in terms of when it’s gonna be a reality and all that stuff.

But 2018, 2019, that’s when the real movement kicked in across the automotive industry, both from an ADAS and autonomous driving [00:02:00] perspective, but also from a compute architecture in the car perspective, right? Because the software-defined vehicle started becoming a reality. And when I talk about software-defined vehicle, it’s the ability for the car to take in software updates and software upgrades, right? Even after it rolls out of the dealership, be it on the ADAS front or the autonomous driving front, or the in-vehicle infotainment and things like that, right? So over the last five to six years, I believe there have been a lot of changes for good in terms of how we are going, collectively as an industry, to make this happen.

The whole nine yards on software-defined vehicle. How do we build a cloud infrastructure that will support this? How do we build a cloud to car software infrastructure? How do we go build the right compute architecture in the car to go enable all of these new use cases? I think there’s been a lot of stuff that’s happened in the last five to six years.

When it comes to enabling the newer technologies, specifically in ADAS and autonomous driving, I think again there’s been a [00:03:00] very good ramp-up in terms of how we look at it, be it the algorithm maturity or some of the cool stuff that Silvius and Wayve are doing with respect to end-to-end AI.

We put in as an industry a lot of work over the last few years, so right now we are at a point where we are ready to take it into the next phase, which is more like making it a reality. What are the things that we’ll have to enable in order to make it a reality? In short, there’s plenty of action in the automotive industry, right?

Be it across the vehicle compute architecture or the cloud. And I believe we are set up in the right place for us to go and make most of these things that we’ve been discussing over the last five to six years a reality.

Brian: So you alluded to autonomous driving, right? As we’re in the middle of this transformation, and I know a fellow in our town here who has multiple sclerosis and one of his arms is basically not usable, but he has an autonomous vehicle or a semi-autonomous vehicle, [00:04:00] and it’s changed his life because he doesn’t really have to worry about mobility. So obviously the promise is already being realized. But let me throw this to Silvius, and Suraj, you can jump in as well. Where are we with autonomous driving today, and where are we going in the next few years?

Silvius: We want cars that can drive us, that can shuttle us, wait for us, pick us up, drop us off and pick us up again. We’re not there yet, except for some walled, fenced gardens that are mapped accurately, and only in a few cities in the world.

When you look out there at who the big players are and how long it took them to advance, there are two trains of thought. One is this very focused go-to-full-autonomy approach, and the other is to go wide but progress more slowly. So we are in the second camp, where we’re looking at the way to go to driverless [00:05:00] cars as increasing the level of autonomy step by step.

The reason is you need a large amount of data to be able to go to the next level of autonomy. First it’s hands off, but eyes on, which means look at the road. The driver is responsible, but the driver is aided in many ways. Basically, the car drives itself but is not fully trusted. Then you go to eyes off, but the driver is still at the wheel.

The driver cannot fall asleep, for instance, and they have to come back. But at this point, if the driver cannot intervene when they need to take over, the car will just pull over to the side of the road or so. So this is eyes off, and then there is driverless, when you don’t even need to be in the car; it will just come to you.

Right now we are on the verge of generalized hands off, but still eyes on. In the US that’s already rolled out. In the [00:06:00] UK, not yet, but we are on the verge of that. And then the next milestone to conquer will be eyes off, which will solve the commute: you can work during your commute. And then complete generalized driverless comes after.

Brian: So we’ve come a long way in a reasonably short amount of time. Some people who aren’t as versed in the technology would say, why don’t we have fully autonomous now? Walk us through, and I’ll throw this open to both of you, walk us through some of the challenges that are being addressed now to move us farther along the journey.

Silvius: I still remember I was at Google, I remember the year exactly, May 2009 or so, and there was this car that was driving autonomously in the parking lot. It was not yet on a lot of public roads, and everybody was, oh wow, this looks almost solved. So this is 15 years ago or so, and now we have a clearer picture of the next [00:07:00] sequence of steps, and we can actually defend it with some data. What was not appreciated at the start was the complexity of the long tail of behaviors and events. Yeah, the unknown unknowns. Yes, you can program the car to do A if B, but you don’t know everything. Moreover, what happened then in the industry, we built a very complex system of rules that is very easy to understand one by one, but then you have another exception, so you add another rule to handle the exception, and another one for the other exception, and another one.

And at some point you’re building a house of cards that sooner or later crumbles. So Wayve pioneered a different approach, which, rather than adding rules, was to recreate a brain akin to the human brain, using deep learning models, which mimic, to some extent, the human brain. And rather than programming the [00:08:00] car to drive, you are programming the car to learn.

And it’ll learn how to drive, but you’re not programming the rules. You’re simply programming the car to learn, and then you feed it many examples and a signal for learning from those examples, such as, hey, “I am at the green light” and the car is stopped. Now you ask the model, what would you do? And the model says, I would drive.

And then you say, that’s wrong because the car is stopped. So we changed the problem altogether. This is not an evolution. It is not like we got better at the previous problem where we got smarter at the rules, we just threw away the rules and we said, let’s teach the car to drive more like a human.

So then if we make our models more like that and we prove them like we prove humans, then it’s a better shot. But this is new technology. So we’re not in the business of promoting new technology. We’re in the business of safety. So what we’re working on is making sure that this technology, which is extremely [00:09:00] powerful, is safe in all circumstances.
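To make this “program the car to learn” idea concrete, here is a minimal, hypothetical sketch of a behaviour-cloning style training step, where a model proposes an action for a recorded scene and the expert driver’s actual behaviour supplies the correction, as in the green-light example above. The model, tensors and action labels are illustrative assumptions, not Wayve’s training code.

```python
# Illustrative sketch only: a behaviour-cloning style training step.
# Model, tensors and labels are hypothetical, not Wayve's actual code.
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    """Toy policy: encoded scene features -> action logits (stop / go)."""
    def __init__(self, feature_dim: int = 128, num_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 64), nn.ReLU(), nn.Linear(64, num_actions)
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)

def training_step(policy, optimizer, features, expert_action):
    """One update: penalise the policy when it disagrees with the expert."""
    logits = policy(features)
    loss = nn.functional.cross_entropy(logits, expert_action)  # the learning signal
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: the light is green, but the expert driver kept the car stopped
# (say, a pedestrian was crossing). If the model predicts "go", the loss corrects it.
policy = DrivingPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
features = torch.randn(1, 128)      # stand-in for encoded camera frames
expert_action = torch.tensor([0])   # 0 = stay stopped, 1 = drive
print(training_step(policy, optimizer, features, expert_action))
```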

Suraj: Yeah, and just to add to that, funny enough, Brian, a human being consumes about 20 to 25 watts of power to do everything that Silvius just explained. That’s where we come in, right? From a compute platform supplier perspective, because one of the challenges that we have been looking at is: for all of this new approach that Silvius and the Wayve team are actually driving, how do we ensure that there is the best platform that we can enable for this?

Both from a power efficiency perspective and a performance efficiency perspective, how do we work very closely with partners like Wayve to ensure that we build the right hardware, right? Because again, on one front this is enabling the right technology that can actually mimic a human brain as much as possible in order to do a fully autonomous drive, but at the same time doing that within a specific envelope of power, performance and cost, right? Because you can’t load up the car with huge servers to run the model. [00:10:00] And we’ve talked about this multiple times. How do we enable power efficiency? How do we drive these technologies at scale?

Because at the end of the day, if you want to go drive this at scale, you will have to make sure that you keep the cost in check as well, right? Keeping power efficiency and cost in check is something that we have to work on very closely together.

Brian: Silvius, you recently rode in an autonomous vehicle between London and Cambridge, where Arm’s headquarters is. First of all, tell us a little bit about that journey. How did the car navigate those cyclists in and around Cambridge, especially? And in terms of safety, at a higher level, how much of it is controlled by AI right now?

Silvius: So it was entirely controlled by AI from London to Cambridge. It’s now been a few months, and in between our car has actually improved tremendously, improved in multiple senses.

But even back then it was the first time that I took the car [00:11:00] from London in that direction. Now we operate in multiple countries and we advanced very quickly over the last couple of months. Looking back at that ride, I remember in Cambridge, we had never been to Cambridge with this version of the car.

And even though our roots trace to Cambridge as a company, with this car version, with these algorithms and models, we had never been to Cambridge. It was what we call zero-shot. We had not seen it before. We had not mapped the place specifically before, so we just showed up. The car drove just fine.

No problem with the bicycles. It’s like a human. It doesn’t know where it is. Is it in Cambridge? Is it in London? Is it at the edge of London? I mean, it doesn’t know where it is, so it just dealt with what it sees. It only takes information that it can perceive on the spot, right? So it gets the images that it sees and a map, just like a human looks at the GPS map in front of you.
We don’t actually even tell it to turn left or right; we just show it the map with the color of the route, like the one [00:12:00] a map provider would give you. And that’s what it takes as input, both in training and when it runs. The only mistakes it made were, one, it hesitated on a lane change on the highway because it saw that a car was coming fast from behind, and it was right at the edge of hesitation. And we intervened for safety. It may have done the right thing. We don’t take chances during testing, but during an hour and a half this was not bad. And it was a marginal situation where I, as a human, would’ve hesitated as well.

So our safety operators are well-trained, and they made the call to intervene. And the other one was, and by the way, in this one it’s interesting to think of what a more powerful computer would’ve done, or a faster algorithm, but also a better algorithm. So we continuously improve this with data.

That intervention where we took over goes back and it trains the model, and it says, okay, this boundary condition, you have to be on this side, [00:13:00] right? That’s what you need to do. And we collect that on many cars and with partners and so on. And the second one was the navigation of some lane splitting at the roundabout in Cambridge.

We didn’t take the right lane in order to change, to engage on a specific turn. And this is because, at that point back in time, we didn’t encode lane information in a specific way. And now we’re looking at using the richer information that exists in general in cars.

So basically, as our technology matures, these few cases get solved. Some of them with data; some of them, like the one with the lane navigation, that’s not even data, that’s simply giving access. Our model was ignoring some of the information that the car already had, and now we’re simply giving it access.

We’re opening its eyes, so to speak, so we don’t really need to tell the car, Hey, turn here, or turn there. We simply need to give it access to the information, and we need to correct it when it [00:14:00] doesn’t.

Suraj: Yeah, and just to add, what does it mean for someone like us? Look, when you have the rule-based algorithm, when you have sensor fusion, perception, localization, when you have all of those steps, it’s the safety consideration and what we have to do in the hardware. Obviously there are certain set rules on, okay, here’s how things can go wrong and this is what we have to do at what step.

Okay. But then when you’re thinking about an end-to-end AI model that Wayve is pioneering, then you’ve got to think about, okay, like Silvius just explained, the car basically learns on the fly. So there will be occasions where you can’t have pre-simulation, or you can’t basically simulate any of this before.

So the car actually is learning in real time now. It becomes so much more important for us to optimize for safety as it is for performance, for example. It has to be a software plus hardware co-optimization, co-development. So that’s what we are actually looking to do with Silvius [00:15:00] and team: okay, you have a real-time workload, you have a real-time use case that they are testing.

What does it mean for us to ensure that the hardware platform does have the necessary safety and security hooks to be able to support this real-time learning, right? It has to be done by looking at the software, the hardware, the entire platform together.

And we can’t basically say, okay, I’m just gonna ensure that these are the levels of safety that we are going to meet in the hardware, and then we’ll see what the software can do. So there’s a lot more engineering co-development that needs to happen. And that’s what we’ve discussed in the past that our teams are gonna be doing in the near future as well.

Silvius: To be clear, we don’t learn while we drive, but we do need to think very fast. So the car thinks many times a second about what it would do. We don’t learn while we drive, although we could in principle; we want to pre-validate very well. So therefore we learn from many cars and from data, from [00:16:00] many partners.

So this is much, much more than a human could possibly ever drive. And then we’re pulling this global knowledge into a model, and then we validate that model. We put it on the road and then we learn again from the behavior of that model at scale, but not while it’s driving. However, to Suraj’s point we need to make the best of the compute budget on the car.

The car needs to think in a predetermined amount of time. The cameras will open the shutters and close. So basically it’s like blinking many times a second. And at every point in time you need to say, what would I do? What would I do? What would I do? So there are several hardware aspects that come into play and low level software that give you these properties.

There has to be performance, there has to be extreme reliability, and the system itself has to be hardened to the point where nothing skips a beat: the algorithm doesn’t, the model doesn’t, [00:17:00] the system software and the hardware itself. And at the same time we’re trying to pack in more and more compute, ’cause the human brain is actually pretty sophisticated.

So getting close to solving all the corner cases that the human brain solves, there is a challenge of compressing that and putting all of it on the car. However, as I said at some point, the human brain uses 20 watts. Not all of that goes to driving.
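As a rough illustration of the real-time constraint Silvius describes, the sketch below shows a fixed-rate perceive-and-decide loop that must produce a driving command inside each camera frame period. The frame rate, time budget and model stub are assumptions for illustration, not the actual on-car software.

```python
# Illustrative sketch only: a fixed-rate "perceive -> decide" loop with a
# per-frame deadline. Frame rate, budget and the model stub are assumptions.
import time

FRAME_HZ = 30                      # e.g. the camera "blinks" 30 times a second
FRAME_BUDGET_S = 1.0 / FRAME_HZ    # the model must answer within this window

def capture_frame():
    return {"timestamp": time.monotonic()}   # stand-in for a camera image

def plan_trajectory(frame):
    # Stand-in for running the on-car model; must fit inside FRAME_BUDGET_S.
    return {"steer": 0.0, "accel": 0.1}

def drive_loop(num_frames: int = 90) -> int:
    missed = 0
    for _ in range(num_frames):
        start = time.monotonic()
        frame = capture_frame()
        command = plan_trajectory(frame)
        elapsed = time.monotonic() - start
        if elapsed > FRAME_BUDGET_S:
            missed += 1   # a deadline miss: unacceptable in a real safety system
        # sleep out the remainder of the frame period to hold a steady rate
        time.sleep(max(0.0, FRAME_BUDGET_S - elapsed))
    return missed

if __name__ == "__main__":
    print("missed deadlines:", drive_loop())
```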

Suraj: Exactly. Not always. Exactly. So it’s even less than 20 watts is what you’re saying?

Brian: The computing requirements are just mind-boggling in these use cases. And earlier Suraj was talking about platforms. And Silvius, I want to ask you, let’s talk a little bit about implementation. So for autonomous systems, what are the most important factors that you look at, that Wayve looks at, in choosing technology to deliver your solutions?

Silvius: One of them is what I just said. So that was a good [00:18:00] segue. Real-time behavior is important. Reliable real-time behavior is important. Advancing performance continuously is important because, as I said earlier in the discussion, we are now looking at hands off. So first, even before hands off, there is hands-on safety assistance, and there is hands off and so on, but you need presumably more and more compute.

The models have to be larger. The models have to have the brain space to remember more types of situations. They do it in a very compressed way, like a human brain. But even so, the ideal platform is a high-performance, low-power platform, with reliable real-time response, that can be evolved in terms of the capability of iterating on your architecture and producing chips.

And then together with all this, there are very specific [00:19:00] automotive and scale out requirements related to security. We are pushing the boundary of knowledge and we are producing models that are very good and we’re putting quite a bit of knowledge and effort into it and would like to defend that IP.

So having ways in which there is secure consumption of this functionality is important as well. This is not the first thing you think of, but it actually becomes important in the big picture when you go from just solving the technical issue to deploying at scale.

Suraj: Again, from our standpoint, Brian, the way we look at it is, Silvius mentioned real-time and reliable compute. When we look at it from a compute architecture perspective, we ask, okay, what is that real-time quality-of-service requirement that Silvius and team want? In the sense that, within the silicon, within an SoC, how do we ensure that [00:20:00] the highest-priority data that needs to be processed without any interruption is processed in the chip? Obviously one of the classic examples is you have a stream of data coming in, say from the sensors, and it needs real-time processing. We cannot allow some other workload that is not so real-time critical to hog the resources in the chip, right?

So we will have to ensure that this real-time, absolute highest-priority data is processed without any sort of issues, right? So we look within the compute architecture, be it within our CPUs or our GPUs and other technologies, and System IPs and Interconnects and Controllers and all of that stuff, to ensure that the quality of service for reliability is absolutely at the highest, right? So that’s why, you know, this whole thing about hardware-software co-validation, co-optimization is super important, because none of these different system elements, be it hardware or software [00:21:00] or middleware, can skip a beat, right?
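As a loose software analogy for the quality-of-service idea Suraj describes, the sketch below always services safety-critical sensor data ahead of lower-priority work, so it can never be starved. It is a toy priority queue for illustration, not how QoS is actually implemented in Arm interconnects or SoCs.

```python
# Illustrative sketch only: a toy priority-scheduling loop. Safety-critical
# sensor data is always serviced before lower-priority work. This is a
# software analogy, not how QoS is implemented in silicon.
import heapq
from dataclasses import dataclass, field
from typing import Any

@dataclass(order=True)
class Task:
    priority: int                       # 0 = safety-critical, larger = less urgent
    payload: Any = field(compare=False)

queue = []
heapq.heappush(queue, Task(2, "infotainment update"))
heapq.heappush(queue, Task(0, "radar/camera frame"))
heapq.heappush(queue, Task(1, "map tile prefetch"))

while queue:
    task = heapq.heappop(queue)
    # The critical sensor frame is processed first, regardless of arrival order.
    print(f"processing (priority {task.priority}): {task.payload}")
```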

So we will have to ensure that all of them are mapped correctly from a reliability perspective. And on the other hand, Silvius mentioned the whole security and secure compute side of this, right? So we obviously talk about the Confidential Compute Architecture that we’ve been working on, which again creates realms, from high-level software to low-level hardware, on how realms can be created for highly secure compute that gets run when it comes to autonomous driving and robotics applications. So we take in these requirements, we understand those use cases that Silvius and team, for example, define, and then see what it means from a compute hardware and a low-level software perspective, and try and make that happen.
So that’s the way we work together.

Brian: Silvius, Wayve talks about end-to-end AI. Define that for us and talk to us a little bit about Wayve’s technology in the context of end-to-end AI.

Silvius: Yeah, end-to-end [00:22:00] is a buzzword that refers to fully learned. So basically, remember when I said that initially it was a number of rules, like the system was based on a number of rules. Now we are not using the rules, but instead we replaced the code with an artificial neural network. This artificial neural network takes as input the images, or camera stream, and the map, just the roadmap. And as output, it produces a trajectory that says the car shall go this way. And then there is a car controller that will follow this trajectory.

So this end-to-end is like from eyes to actuation, from eyes to hands. This is what end-to-end means. And the major change is that instead of having mostly software, now you have mostly a learned model. And even before end-to-end, it was not just one model; there were many models. So you can look at some implementations before ours that had tens of models, which [00:23:00] were smaller.

Neural networks like really small pieces of brain that do something like recognize signs, or recognize lanes, or think of how to overtake, and they were all assembled with rules. And we replaced all that with a larger portion of the brain, and we say, now learn it all. Just look at many behaviors and reproduce many behaviors end to end.

There is still code, though, so when I say this is fully learned, yeah, we have a very busy software team. But in terms of algorithm, we replaced many combinations of models with one large model. In a way, we removed structure. We freed the model from the system and we let it learn, and the model, with less imposed structure on it, will have more room to learn.
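A minimal sketch of the “eyes to actuation” flow described above: a single learned model maps raw observations (camera stream plus route map) to a trajectory, and a classical controller follows it. All class and field names here are hypothetical stand-ins, not Wayve’s interfaces.

```python
# Illustrative sketch only: end-to-end data flow from observation to actuation.
# Class and field names are hypothetical, not Wayve's actual interfaces.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Observation:
    camera_frames: List[bytes]   # raw camera stream for the current tick
    route_map: bytes             # rendered roadmap with the route highlighted

@dataclass
class Trajectory:
    waypoints: List[Tuple[float, float]]   # "the car shall go this way"

class EndToEndDriver:
    """One learned model: raw observation in, trajectory out (no rule stack)."""
    def plan(self, obs: Observation) -> Trajectory:
        # In a real system this would be a forward pass of a neural network.
        return Trajectory(waypoints=[(0.0, 0.0), (1.0, 0.2), (2.0, 0.4)])

class VehicleController:
    """Classical controller that tracks the planned trajectory."""
    def follow(self, traj: Trajectory) -> Tuple[float, float]:
        dx, dy = traj.waypoints[1]
        return (dy / max(dx, 1e-6), 0.5)   # toy (steer, throttle) command

driver, controller = EndToEndDriver(), VehicleController()
obs = Observation(camera_frames=[b""], route_map=b"")
steer, throttle = controller.follow(driver.plan(obs))
print(steer, throttle)
```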

Brian: In autonomous driving, how much computing is now taking place in the car versus [00:24:00] how much compute is being handled in the cloud server? You guys talked earlier about power and also performance and the trade-offs there, so how much faster are decisions made if the processing happens locally, if you will, in the car?

Silvius: Yeah, all the decisions are made in the car, so we have a large cloud footprint to train the models. You can imagine many cars driving continuously learning with expert drivers, or in general just running on the road. Then all this data gets uploaded to cloud servers. And then we transform the data, all the images into numbers, and we crunch these numbers and they settle into a brain.

Then this brain gets compressed into a brain that fits into a car computer with a much smaller power budget. If you’re looking at the GPUs, you know, we train some of our jobs at scale [00:25:00] on close to megawatts of power, whereas in the car you have maybe 75 watts. So training is at a much larger scale, right?

So unlike the human brain, in training we put many brains together, and then we compress the knowledge and then we deploy it. But once it’s deployed on the car, all the decisions will be made locally. Why? I mean, there is this argument about edge computing becoming faster and closer and so on, but then there are tunnels, then there are glitches.

Right now, for the type of product we’re after, we’re looking at the mass product, basically in private cars, right? So basically we’re working with OEMs to put our technology in private cars. Then you have to account for all kinds of situations. You cannot tell them, don’t take your car there. That’s their car. They’ll take it wherever they want. And that’s why we went for this approach where all the compute happens on the car, whether you’re connected or not. Now, if you’re connected, you can get maybe [00:26:00] extra functionality, but not extra safety. The safety is all packed in the car.
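One common way to squeeze a cloud-trained model into an in-car compute and power budget is post-training quantization; the sketch below shows the idea with a stand-in model. This is a generic compression technique shown for illustration, not a claim about how Wayve actually compresses its AI Driver.

```python
# Illustrative sketch only: shrinking a cloud-trained model for an in-car
# budget via post-training dynamic quantization. The model is a stand-in.
import io
import torch
import torch.nn as nn

# Stand-in for a large model trained at data-centre scale.
cloud_model = nn.Sequential(
    nn.Linear(1024, 2048), nn.ReLU(),
    nn.Linear(2048, 2048), nn.ReLU(),
    nn.Linear(2048, 64),
)

# Quantize linear-layer weights to int8: smaller, cheaper, lower power.
car_model = torch.quantization.quantize_dynamic(
    cloud_model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(model: nn.Module) -> float:
    """Serialized size of the model's weights, in megabytes."""
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"cloud model: {size_mb(cloud_model):.1f} MB, "
      f"car model: {size_mb(car_model):.1f} MB")
```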

Suraj: Yeah, and reliability, right? Obviously, if you run some of these key control functions in the car, then you’re much more reliable. You have that added reliability compared to what you actually run between the cloud and the car.

Silvius: And the architecture of the compute in the car is made for that; there is redundancy in critical elements. And there is redundancy being developed on the mechanical parts as well. And as you go, this is actually one change: going from the driver having the responsibility with hands off, to the driver being given some time with eyes off, there has to be more redundancy in the compute architecture of the brain. The system architecture has to support this higher level of autonomy.

Suraj: When Silvius talks about redundancy, even there, there is economy involved. Like, how can you still make sure that you’re not just adding redundancy for every element of compute? [00:27:00] Then the cost goes through the roof, right?

So you’ll have to be very smart about how we co-optimize the whole system for safety and cost: add redundancy where redundancy is needed, and at the same time, how do you optimize the overall system? I think that’s crucial, because you’re relying on the vehicle for the bulk of the decisions to be made at the vehicle level.

Then you will have to add redundancy. You can’t get away from it. But at the same time, you also have to keep the overall cost in check. It’s a balance that needs to be drawn across all of these elements.
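As a generic illustration of adding redundancy only where it is needed, the sketch below shows a classic triple-modular-redundancy majority vote over one critical output, with a fallback to a safe state when no majority exists. It is a textbook safety pattern, not Arm’s or Wayve’s actual redundancy scheme.

```python
# Illustrative sketch only: triple-modular-redundancy voting on a critical
# output. A generic safety pattern, not any vendor's actual design.
from collections import Counter
from typing import Callable, List

def vote(channels: List[Callable[[], int]]) -> int:
    """Run each redundant channel and return the majority answer."""
    results = [ch() for ch in channels]
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        # No agreement: fall back to a safe state (e.g. pull over).
        raise RuntimeError("no majority among redundant channels")
    return value

# Three independent channels computing the same critical decision;
# one of them has faulted, but the majority result still wins.
channels = [lambda: 1, lambda: 1, lambda: 0]
print(vote(channels))   # -> 1
```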

Brian: So I want you guys to tag team on this next one, and we’ll throw it to Silvius first. How does Wayve manage the growing complexity of software in this AI based world? And then Suraj, talk from your vantage point about what Arm’s doing to support software-defined vehicles in the future? Silvius, you go first.

Silvius: There are many things to say. There are the performance aspects that we just [00:28:00] mentioned. Another one is interoperability: working with multiple chips, with multiple consistency models, and composability of architecture. When you put together a system, you always will have some components that come into play. And how does the system architecture and validation work across them? Any simplification is good.

Security is the other one. Security by design, all the way rooted in hardware, guaranteed properties, and actually making sure that the programming model and everything under it on the software stack can be trusted and can be verified. So things along the lines of checkers, sanitizers, hardware-aided verification. Those things can make our life simpler. We can sleep better at night if the hardware design goes that way.

Suraj: Let me just borrow one key thing that he said: simplification of system architecture. Okay, which is absolutely key for us to think about as we go [00:29:00] advance the software-defined vehicle, be it for autonomy or be it for other applications in the car.

Because, see, if you have multiple diverse system architectures, then you end up optimizing for all of them separately, which is not helpful. Let me just give you two aspects that we are actually focusing on. One is what we call the Arm compute subsystem in automotive, okay, which we’ve talked to Silvius and team about multiple times as well. So we, as Arm, are building a compute subsystem strategy that is putting together the key elements of hardware in a single entity. So that is the CPUs, the Safety Island, which basically does a lot of core low-level safety functions, the security aspect of things, the debug aspect of things, anything that is fundamental to a compute architecture. We are trying to standardize that in a meaningful way, okay, while still allowing a lot of differentiation for our partners. So once you do that, that sort of takes away some amount of work or trouble that Silvius’ team basically will have when they [00:30:00] have to land the software on different hardware.

Okay, so we announced our compute subsystem last year, and we are well on track to deliver the compute subsystems this year. Now, another key element that Arm is actually focusing on is what we call virtual platforms. Now, again, as Silvius said, he and his team basically develop and run most of this training for all of these models in the cloud, and that gets deployed onto the car. We will have to try and ensure that there is as much parity maintained between the cloud infrastructure and the car infrastructure as possible, right? Be it in terms of instruction set architectures, or even AI and other key elements of the compute. So we have been enabling a good, robust virtual environment and infrastructure in the cloud to ensure that there is the best and highest level of parity.

Now that there are very good Arm-based servers available for the automotive market to [00:31:00] enable most of this cloud-to-car software development and deployment, we are now ensuring that there are good tools and methodologies in place, working very closely with our EDA partners, to ensure that we create that virtual environment as early as possible in the design cycle.

So partners like Silvius and team can start their software development, or can basically transition their software development to the newer architectures, as early as possible, right? So that gives them that edge to then go deploy that on a car environment that is hopefully very similar to what we are developing in the cloud, right?

So Arm is completely invested in driving this parity as much as possible. At the same time, we are also focused on standardizing the core elements of compute that get deployed onto all the cars going forward, through our compute subsystem strategy.

Brian: So you gentlemen have given us an amazing, excellent, articulate overview of the state of play right now in [00:32:00] automotive and autonomous driving development. Let’s do a little crystal balling now. Where do you see the next stage of autonomous driving? And I know you’ve touched on some elements of this already, but let’s wrap it up for the listeners. What’s the next stage of autonomous driving? What’s the role of AI there? And lastly, when do you see mass adoption of driverless vehicles happening?

Silvius: So I don’t make timeline bets. Others have made them before me. Of course, we see our cars improving continuously. If you want to know what’s real, I would look at demonstrations of continuous driving: basically, who can publish long videos of uninterrupted driving in complex, unstructured scenarios, right? And when those get longer and longer, then at some point it just works, [00:33:00] right? You’re closer. So that’s a good proxy, and we’re seeing that ourselves in our technology. A year ago we were just in London, driving 20 miles an hour, with different vehicle dynamics. And now we are in multiple countries, driving 70 miles an hour. And what I can tell you is the learning is accelerating. We have data partnerships that open the eyes of learning. We’re seeing that across the industry. There are not many players across the industry, but I believe the sense that this progression will happen is more material now than it ever was, and it is powered by this end-to-end approach.

All the previous approaches hit evolutionary limits. They only got that far. The amount of effort towards the end was very large. Some companies just couldn’t take it further. And on this branch, on this evolutionary branch of end-to-end, which we were lucky to pioneer, others are switching to it as well.

There [00:34:00] the road is wide open in front of us, and things that looked further away now look closer. I’m not gonna say exactly how much closer, but all this I can see. So hands off: we know everything that needs to be done for hands off to work. There is no more technical unknown. It’s just a matter of engineering and deploying.

And of course regulation and societal adoption. Next, eyes off: we also have a line of sight to it; it’s harder in the city than it is on the highway, for instance, in terms of complexity. But we still have a progression there. And going from there to driverless is more a matter of engineering than science.

It’s a matter of engineering, of having remote support. Remember my mention of tunnels before: how do you do a remote takeover of a car if you cannot connect to it? So you need to have network everywhere or so. So in my mind, this is basically it: hands off is here. [00:35:00] Eyes off requires some work, but it’s still within sight.

And then full driverless, generalized driverless, becomes a matter of engineering and business. And is it worth it? For whom? Is it the fleet? Is it the private car? So in some areas it’ll happen sooner rather than later.

Brian: Suraj, I respect Silvius’s cautionary approach there. Are you the gambling man looking into that crystal ball? Give us your perspective.

Suraj: Arm’s role is primarily to enable what Silvius has just said, okay, in a timely manner. And see, fundamentally again, the target: he talked about 75 watts, okay, which is what he’s looking at in the car today. The target for us is to bring it down to 25 watts.

Okay? See, I truly believe that, thanks to all the advancements in AI-based compute architecture and compute hardware, and of course a lot of other angles, from tooling and software, we are where we are right now. So the line of sight that Silvius has just [00:36:00] articulated: he said, okay, eyes off, there is a line of sight.

We can go enable this and so on and so forth. Our goal is to try and enable that in a power-efficient and scalable manner. That’s our goal. Okay. We will have to ensure that there is the right hardware technology that can meet those targets that Silvius and team are actually setting. And again, sorry Brian, I’m gonna disappoint you without actually setting a timeline. But there is line of sight in this one as well. Okay. I can basically follow suit from Silvius and say, yes, there is line of sight for us to go meet the target, but obviously there is work that needs to be involved from an overall system optimization perspective. There are quite a bunch of things that we are looking at.

Silvius and I have chats about this as well, on some of the new types of hardware architectures and stuff like that, that can enable this transition. And at the same time, when I talk about scale: Silvius talked about going from 20 miles to 70 miles per hour, talked about multiple countries.

At the same time there are [00:37:00] different sets of challenges when it comes to how we can enable a new technology like this on vehicles that have a lot of legacy today. If you look at the wide range of automotive OEMs, you have quite a bit of system architecture legacy and supply chain legacy.

We’ll have to figure out a way to enable this at scale, taking into account all of those things. So that’s the way I look at it. The large-scale deployment of autonomous driving, I’m very optimistic, will happen. It’s just a question of how we deal with all of these technical and cost challenges at the same time.

How do we overcome some of these legacy hurdles that we have, both from a system architecture and from a supply chain perspective? That will enable the right scale.

Brian: Thank you so much for your time and we look forward to seeing and hearing more from both of you in the months ahead. So thank you.

Suraj: Thank you, Brian. Awesome. Thank you, Brian.
