Tech Unheard Episode 6: Alex Kendall
Summary
In the sixth episode of Tech Unheard, Alex Kendall, co-founder and CEO of autonomous driving startup Wayve, joins Arm CEO and host Rene Haas for a conversation about embodied AI and what it takes to build a company from the ground up.
Alex co-founded Wayve in 2017 while simultaneously writing his PhD thesis at Cambridge University. Today, the company has hundreds of employees worldwide as it pioneers a new way to create self-driving cars using end-to-end AI.
To learn more about Wayve, its ongoing work and the technology partnership with Arm, keep an eye out for an upcoming conversation with its VP of Software Silvius Rus on the Arm Viewpoints podcast.
Tech Unheard
Learn more about the Tech Unheard Podcast series.
Speakers

Rene Haas, CEO, Arm
Rene was appointed Chief Executive Officer and to the Arm Board in February 2022. Prior to being appointed CEO, Rene was President of Arm’s IP Products Group (IPG) from January 2017. Under his leadership, Rene transformed IPG to focus on key solutions for vertical markets with a more diversified product portfolio and increased investment in the Arm software ecosystem. Rene joined Arm in October 2013 as Vice President of Strategic Alliances and two years later was appointed to the Executive Committee and named Arm’s Chief Commercial Officer in charge of global sales and marketing.

Alex Kendall, co-founder and CEO, Wayve
Alex Kendall co-founded Wayve in 2017 to reimagine autonomous mobility through embodied intelligence. Under Alex’s leadership, Wayve is one of the fastest-growing AI companies for autonomous driving globally. From his award-winning research at Cambridge, he pioneered a new approach to self-driving, using end-to-end AI to build a general-purpose driving intelligence that can drive any vehicle, anywhere. As CEO, Alex leads the company’s overall strategy, working closely with investors and partners to deploy Wayve’s Embodied AI software into millions of vehicles worldwide.
Transcript
Rene Haas (00:07.298)
Welcome to Tech Unheard, the podcast that takes you behind the scenes of the most exciting developments in technology. I’m Rene Haas, your host and CEO of Arm. Today I’m joined by Alex Kendall, co-founder and CEO of Wayve. Wayve develops embodied AI for autonomous driving, and Alex has been in the driver’s seat there since its beginnings at Cambridge University in 2017. All right, Alex, welcome.
Alex Kendall
Awesome, let’s do this.
Rene Haas (00:35.79)
So the story I tell a lot of people, the first time I had a demo of your technology was from Stansted Airport into central London. It was December in the UK, rainy, cold, not easy to see, two hours-ish, and we drove all the way in. It was assisted, but the technology was so amazing. I remember Masa was in the car with us, and it was so smooth that he fell asleep, which was quite a testament. But where has your technology come since that day? I think that was the end of 2023, maybe, maybe a year and a half ago.
Alex Kendall
Yeah, it was. I take that as a compliment that our AI could drive so smoothly that it’s smooth enough for you to relax and even doze off for a bit of the drive. So, full compliment. The approach that we’ve taken for autonomy is one where we treat this as an AI problem. Fundamentally, it’s a really high-dimensional decision-making problem, dealing with uncertainty, different signals, and having to put out a decision to drive a car in a way that has incredible generalization, because the diversity of things you see when you drive on the road, or in fact for any form of robotics, is enormous. So eight years ago, we started off with the approach of end-to-end deep learning for driving, and started off in a modest way with various different techniques of reinforcement learning and some Sim2Real approaches to learn everything from initial lane keeping to traffic lights, to roundabouts. But today, as I was mentioning, the strength of this foundation model is its ability to drive all around the world in many different types of vehicles with different sensor sets for different manufacturers, and our ambition is to really see this launched as an embodied AI platform across a wide variety of different robots and work with the very best manufacturers and fleet operators.
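As a toy sketch of the end-to-end idea Alex describes here, mapping raw camera pixels straight to driving controls with no hand-coded perception module in between, the following few lines of NumPy illustrate the shape of such a policy. Everything in it (the image size, the pooling, the linear policy) is a hypothetical simplification for illustration, not Wayve’s architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "camera frame": a 64x64 grayscale image.
def featurize(frame):
    # Downsample by average-pooling 8x8 blocks, then flatten to 64 features.
    pooled = frame.reshape(8, 8, 8, 8).mean(axis=(1, 3))
    return pooled.flatten()

# A linear policy mapping image features directly to (steering, throttle):
# the "end-to-end" idea in miniature, with no hand-coded lane detector
# or rule set in between. Weights here are random, standing in for
# weights a real system would learn from driving data.
W = rng.normal(scale=0.01, size=(2, 64))

def policy(frame):
    return W @ featurize(frame)

frame = rng.random((64, 64))
steering, throttle = policy(frame)
```

In a real end-to-end system the linear map would be a deep network and the weights would come from imitation or reinforcement learning, but the interface, pixels in, controls out, is the point of the sketch.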
Rene Haas
Definitely want to start with your upbringing and background, but I want to dig a little deeper into what you just said. In 2017, it was not obvious that using deep learning for automobiles or autonomy was the answer. What was behind your thinking that, yeah, this is going to be a great application for autonomous driving or anything autonomous?
Alex Kendall (02:36.032)
Both our companies started in Cambridge. It was a bit of Cambridge magic. But more seriously, I spent so much time when I was growing up building intelligent machines, playing with robots, and just seeing how brittle and frustrating it is to build robots in a way that’s hand-coded, where you build the infrastructure rather than the intelligence. And that, coupled with some years doing a PhD, surrounded by the amazing opportunity to take a step back and look at where things were going. It was just very clear that end-to-end deep learning was slowly gobbling up the robotics stack through perception. And my PhD was able to do some of the early work that showed how to solve different perception problems, figuring out where you are, what’s around you, what’s going to happen next, with end-to-end learning. And it was clear that that was just going to keep going all the way through to control of the vehicle. And then when you think about what the future of robotics should look like, I don’t imagine a car that’s driving in a preset geo-fenced area following a map with a set of hand-coded rules. I imagine the future of robotics as intelligent machines we can trust to delegate tasks to, in a way that they can coexist in society, they can act and interact really broadly. And so for me, the way to build that is to actually develop a level of intelligence that can make its own decisions and operate safely within the environments that we live in. So it’s that belief, coupled with a couple of, like, technical, you know, let’s call them first steps, that gave me the confidence to go, okay, let’s go. We can go build this.
Rene Haas
So in 2017, what were the technological limiters relative to implementation? In the AI world, eight years ago is like 100 years ago. But if you go back to when you conceived all this, what were the big technical barriers in terms of either compute capability and/or sophistication of the models? Where were the big hurdles to climb?
Alex Kendall (04:29.144)
I think it’s so important to work on things that aren’t possible today but are going to become possible, and getting that strategy right on what’s going to become possible at the right time. There’s, of course, lots of aspects of luck in this, but also being prepared to take those opportunities when they become possible. And look, we’d just seen DeepMind build the amazing AlphaGo, where there are more states in the game of Go than there are atoms in the universe. It’s an enormously large problem that was able to be solved through very efficient data sampling of the self-play algorithm. So there was that, but of course the game of Go is a very low-dimensional space. It’s a small checkered board, right? Whereas you think about driving, a typical car might have, I don’t know, anywhere from seven to 12 cameras, might have five radars around it. That dimensionality, you’ve got tens of millions of pixels coming into the AI system. And so the dimensionality is enormous.
Some of the things that were really challenging then were how do you compress that information, understand it? Even how do you aggregate that amount of data? If you think about what’s making self-driving possible, there’s a ton of work that’s gone into the hardware platform, the supply chain, access to data, and even more importantly, the off-board work and the data centers and the compute that actually allow you to train these kinds of models. And people often say that a system like GPT-4 was trained on maybe about a petabyte of data, but when we think about problems in robotics, I mean, today we’re aggregating a data set that’s over 100 petabytes. And so it’s really on another scale.
Rene Haas
When you say 100 petabytes, when you make that comment relative to robotics, what does 100 petabytes represent in that world?
Alex Kendall (06:09.762)
Yeah, it’s bringing together really diverse sources of data to be able to train this model, because we want to make it so our AI can understand what various sensor sets can see. So we have everything from dash cams to surround camera, camera-radar, and camera-radar-lidar data, or even internet-scale text and video. And so it’s all of that kind of diverse data. But of course, the multi-sensor data sets, camera, radar, lidar, dominate in their size compared to the highly compressed dash cams.
Rene Haas
Well, one thing that I was struck by when we made that ride was not only the smoothness of the drive and how well it worked, particularly in central London, but that there were not a lot of sensors and cameras on that car, relative to what you see in the conventional AV1.0 with LiDAR and whatnot. I remember you telling me that you were not only achieving this kind of breakthrough using AV2.0, but with fewer sensors than a conventional LiDAR model or AV1.0. Talk to me about that a little bit and how that all comes together.
Alex Kendall
Yes, there’s a couple of underlying principles that make our strategy possible. We talked about the first one, which is taking an end-to-end learning approach and doing away with high-definition maps. The second part of this is being flexible and agnostic to the hardware stack, and more importantly, working with a lean stack that can be mass-manufactured. So not the kind of hardware stack that you might retrofit at a small volume. But if you look at many of today’s high-end vehicles, and, in a couple of years, what will be mass-market vehicles manufactured in tens of millions of units a year, these are vehicles that have hundreds of TOPS of GPU compute on them. They have surround cameras, surround radar. Some of them have a forward-facing LiDAR. But working with a hardware stack that, you know, isn’t like the robotaxis you see driving around Shanghai or San Francisco, but is like the vehicles that you see driving on the roads around the world, that’s the kind of hardware that we’d love to work with.
Alex Kendall (08:07.606)
And the benefit of that is that it enables you to bring this technology worldwide to really generalize it, to use cases that will eventually even go beyond passenger cars.
Rene Haas
But as to compute, you know, so back to the conventional stack that exists inside of a quote-unquote standard vehicle, versus these, which look like spaceships, respectfully, kind of running around San Francisco. Is there something you look at and say, here’s the minimum viable product in terms of the amount of compute I need in the conventional vehicle, versus if I had two or three times that number? Call it 100 TOPS or 200 TOPS as the baseline. Do you look at it and say, I need that baseline, and, or, gosh, if we had two or three X that number, the sky’s the limit?
Alex Kendall
Yeah, I think onboard compute is certainly one of the limiting and bounding factors for what we’re building, or for any robotic system. So where we’re at today, yeah, call it a 200-TOPS baseline, I think this is certainly enough to get a really compelling hands-off driver assistance system that can drive globally in all kinds of scenarios. I think the jury’s still out on whether that kind of compute can get you to a general-purpose eyes-off system in the near term. It’s amazing, like, there are optimizations and approaches that just shave a factor of two or three off inference efficiency that can really be game-changing, or make or break, for the system. But certainly, I think we can build such a system with some of the next-generation chips that are going to be hundreds of TOPS, maybe a thousand TOPS. But there are a lot of, I think, very compelling reasons why we’ll be able to compress these models down. Certainly if you narrow them to something like highway-only, then you can achieve the level of safety on that sort of limited compute.
Alex Kendall (09:43.266)
When we think about AI scaling curves, one of the clearest things for us is that they apply in robotics, just like we’ve seen them play out in large language models. In robotics, though, you don’t just have pure AI scaling. You have system limitation issues. And inference compute’s a clear one. Camera resolution, or sensor fidelity, is another one. And so you’ve got this dual battlefront where you’ve got to push AI scaling in terms of data and compute, but also work through systems issues, like empowering it with better and better actuation and vision.
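Alex’s point about a 200-TOPS baseline and factor-of-two-or-three inference savings can be made concrete with back-of-envelope arithmetic. All numbers below are illustrative assumptions, not figures from Wayve or any real chip:

```python
# Back-of-envelope onboard inference budget. Every number here is an
# assumed illustration, not a figure from any real vehicle platform.
TOPS = 200                # chip rating: tera-operations per second
FPS = 30                  # camera frame rate

ops_budget_per_frame = TOPS * 1e12 / FPS   # ~6.7e12 ops available per frame

# Suppose the driving model costs 10 tera-ops per frame of video.
model_ops_per_frame = 10e12
fits_as_is = model_ops_per_frame <= ops_budget_per_frame

# A 3x inference optimization (e.g. quantization, pruning, distillation)
# brings the same model under budget: the make-or-break factor of two
# or three mentioned above.
fits_optimized = model_ops_per_frame / 3 <= ops_budget_per_frame
```

Under these assumed numbers the unoptimized model misses the per-frame budget and the 3x-optimized one fits, which is exactly why a constant-factor efficiency gain can decide whether a system ships on a given chip.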
Rene Haas
In autonomy, is the limit for eyes-off compute, or sensor fidelity, or a combination of both?
Alex Kendall
Today it’s intelligence. For the vehicles that we’re seeing coming into mass production, I think it’s advantageous to have redundant sensing, so camera-radar, for example, and the kind of sensor sets that we have right now. For example, if you were sitting with all the windows blacked out in the car and you only saw visualizations of the sensors that the car could see, you could drive at a level five or a general-purpose level.
It is possible for you to drive with those sensors with our human intelligence. We just need to be able to build the embodied intelligence to be able to do that. So I don’t see a fundamental sensor limitation here today with where mass manufacturing is going. It’s really, we’ve got to develop the AI and that’s what I think we’re on the cusp of launching.
Rene Haas
So without giving away the secret sauce, tell us a little bit about how you develop the models. In other words, how you put out fleet vehicles, quote, capture data, end quote, feed it into the model, and the model delivers some output relative to essentially assisted driving. So call it AV2.0 for dummies, if you will, in terms of how that all works.
Alex Kendall (11:22.552)
Yeah, it’s a little bit like a striker in a football team. You know, the policy learning algorithms, of actually how do you train the models, get all the attention, but are probably less than 1% of the work. 99% of the work is of course all of the infrastructure to really make the iteration speed on these systems faster to develop. And then more importantly, how do you evaluate them? I think we’re seeing this across all forms of AI, that actually evaluating and understanding and proving levels of safety in self-driving, or more generally understanding AI’s performance, is one of the harder challenges. But for the specific algorithms, I mean, it’s a really powerful recipe of training a general-purpose foundation model that learns to understand many different physical tasks and many different physical domains. And then it’s increasingly training and optimizing it for a specific deployment context, the specific SoC we’re implementing onto, the sensor architecture, and the behavior that that manufacturer or fleet is after on their vehicles.
Rene Haas
For example, how do you teach the model? A 17-year-old can learn how to drive because very quickly the person will understand that if I go too fast and accelerate, and a car brakes in front of me, I’m going to hit it. Now, when you’re trying to capture that instance in terms of teaching the model, in terms of what’s the appropriate distance, or if you see a passerby, are you actually capturing image data of what an accident looks like or what tailing someone too close looks like? Or do you create that data in terms of synthetic data?
Alex Kendall
Both. So of course the classical machine learning answer is, okay, we collect a bunch of examples of that and go and train against it, and shift and bias the behavior to resolve whatever behavior you’re trying to fix. Of course, there are many scenarios in driving that are either too rare or too unsafe to collect in the real world. And so we want to do that in simulation. I think one of the misunderstood things about synthetic data is that it replaces real-world data. And I really don’t see that being the case. I think if you have, you know, more simplistic environments, then sure, you can hand-code a simulator that can replace real data. But for robotics, the sheer scale and diversity that we operate in means that the very best simulators, and look, we put out one called GAIA, a generative world model, are actually end-to-end data-driven in themselves. Now the advantage is that they can learn to recombine data in new ways, and I view them more as giving you leverage on real-world data rather than replacing it, because they need to be trained from the real-world data distribution.
But they can recombine and give you new experience of that data in a way that can help drive new reasoning and knowledge.
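A minimal sketch of the leverage-not-replacement point: the stand-in “world model” below is just a distribution fit to real samples, which it must see before it can generate anything. This is vastly simpler than a learned generative model like GAIA, and every quantity in it is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Real-world" driving signal, e.g. observed following distances in meters.
real_gaps = rng.normal(loc=30.0, scale=5.0, size=1000)

# Stand-in for a generative world model: it has to be fit to the
# real-world distribution before it can produce anything, so synthetic
# data is leverage on real data, not a replacement for it.
mu, sigma = real_gaps.mean(), real_gaps.std()
synthetic_gaps = rng.normal(loc=mu, scale=sigma, size=100_000)

# Recombine: keep only the rare, safety-critical tail (dangerously short
# gaps) that is too unsafe to collect deliberately on real roads.
rare_events = synthetic_gaps[synthetic_gaps < mu - 3 * sigma]
training_set = np.concatenate([real_gaps, rare_events])
```

The training set ends up dominated by real experience but enriched with tail cases the fleet would rarely or never record, which is the sense in which a data-driven simulator multiplies, rather than substitutes for, real-world data.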
Rene Haas
Again, I tell the story many times of Masa falling asleep in central London. But also my takeaway from that, more than anything else, was that AV2.0 is going to replace the current models. This is the future. That’s what I was thinking at the time. And if you look a year and a half later, there still are these kinds of large vehicles running around with 1.0 systems. What’s it going to take for 2.0 to just wipe the 1.0 stuff off the face of the earth? Is it just time?
Alex Kendall
Time’s certainly going to help. I mean, it’s been really interesting to see the just seismic shift in the industry over the last year. Eight years ago, when we started working on this, this was, like, deeply contrarian and dismissed by almost everyone in the industry. Some of the proof points we’ve had, coupled with, of course, just the sheer broad breakthroughs that the AI field has had, have really materially shifted this in the last couple of years. Even to the point where two years ago, I couldn’t even get a conversation with most automotive manufacturers, to now we’ve got the CEOs of these companies stepping out of our car, and those that haven’t fallen asleep with comfort going, wow, I want that in my product yesterday. And now it’s a case of, okay, how do we engineer this into these products? How do we go and validate the levels of safety that are required? I think we’re going to very quickly see, at scale, hands-off point-to-point driving. I mean, we’re seeing some products out there now, and what our solution will do is enable many different automakers to launch that kind of capability. But the real inflection point in safety and value comes when you make it eyes-off. And so I think collectively we’re in a huge challenge, and I wouldn’t necessarily call it a race. I’d call it, yeah, I mean, an ambition to bring that product together.
Rene Haas
Going back to when you started the company. What was the vision for starting Wayve? What was your purpose?
Alex Kendall
I mean, our mission statement is about reimagining mobility with embodied intelligence. And so it’s all about building AI systems that enable us to have autonomous machines. Now, when you look at all the different forms of autonomy, from healthcare, manufacturing, domestic robotics, I think what was very clear to me was that in automotive and in autonomous vehicles, this would happen far before these other spaces, because there are so many more factors and conditions that are mature there. There’s a global supply chain, there’s the availability of data, there’s a compelling business case, there’s proactive regulation in place. All of these aspects are still very nascent in those other spaces. So I have no doubt that they will all become huge game-changing opportunities as well, and I would love to see our AI generalize into them too. But it was clear that autonomous vehicles were the place to start.
Rene Haas
Did you start it when you finished your PhD work? Where were you when you started it?
Alex Kendall (16:36.17)
I was simultaneously writing up my PhD thesis and raising a seed round of funding.
Rene Haas
That’s courageous.
Alex Kendall
And trying to find some local lab mates to join me on this wild journey and turn down their seven figure big tech offers and build a prototype in the garage there with me. It was a pretty fun couple of months.
Rene Haas (16:57.28)
And why did you not take the seven-figure big company offer versus rubbing two sticks together on your own in a small little barn house in Cambridge? I always find, when I’m speaking to founders like yourself who have been unbelievably successful, that it takes real courage to start that kind of thing when you know the opportunity is big but you’ve got behemoths with lots of capital and lots of people competing against you. What was the fork in the road that had you choose to found a company?
Alex Kendall
Yeah, it was a very, very bizarre time. You can imagine, coming out of a $10,000-a-year PhD stipend. Not only that, we also had offers from a couple of big companies to buy out the technology we were working on. I think it was really interesting, actually. Some of those companies that wanted to buy out the technology wanted to use it to drive perception or data-labeling systems. Even there, I was just very skeptical, because, of course, the point of a computer vision system is to make decisions. And why stop there? I think, you know, it was very, very clear you needed to go to end-to-end learning for control. But it really just came down to my core values. I’m driven by learning and adventure, and I’d almost reached some of the limits of what was possible in the academic setting I was in, in terms of needing to build a team. And I had the amazing opportunity to come to Silicon Valley and spend some months with a robotics startup there, Skydio, that was doing incredible work, and learn what it was like to be part of a venture-backed, mission-driven team.
Rene Haas
Were you advising them, or what was the role that you had?
Alex Kendall
Yeah, I was an intern. I implemented the very first deep learning system for them. Gotcha. I got the drone to go from following people wearing a blue t-shirt to just following people in general, a very small part of what is an incredible company today. But I guess that exposure opened my eyes to what’s possible with venture capital. But moreover, it was a means to fuel the passion that I had, which was building this technology.
Rene Haas (18:50.072)
So you went from being an intern while you were doing your postgraduate work to now being the CEO of a tech company, but you haven’t had years and years of management training and such and whatnot. What were some of the things that you learned about yourself and learned about building a company in those early days? I think folks would love to hear the story about what you learned as a founder.
Alex Kendall
Every moment you have, it’s interesting how it can just be an opportunity for learning. Because one of the interesting things I observed was, look, there’s PhDs and there’s PhDs. There are very, very different experiences you can have. But some of the things I remember, you know, going through that experience: I believe you have to let your results do the talking and first have something of substance. But if you are fortunate enough to have something of substance, how to communicate it, how to talk about it, how to give speeches and tell a narrative that can drive further progress was just such an important skill. I’d turn up to conferences and share work where five different people would have done that same work, and for those that could communicate it, it was a real advantage. I remember even just the importance of being able to describe and share a narrative that connects people and aligns them around a vision. But then even just in the early days, hacking and building together these prototypes, particularly when there’s nothing that can pathfind for you, there’s no existence proof that it’s possible. How exciting I found that, and how every little breakthrough you make, just how much energy it gives you, and then how every setback would just drive so much determination to go solve it. I remember the day that we had the very first reinforcement learning system that could learn to actually lane-follow and drive down the road. We were so frustrated after weeks and weeks of work that one weekend I just went and got the car, went to the test site myself, and just, you know, thrashed it out to the point where it worked. And I remember coming back to the team with that result, and the delight, and moreover how limited it was and how much more there was to do. It was a really magic moment.
Rene Haas (20:51.448)
You also did something pretty tough in that you started a company in 2017, doing very physical things, obviously, in terms of autonomy. It’s not obvious in 2017 how this is going to be used. Pandemic hits in 2020. Was COVID a setback for you guys in terms of your development? What was that period like for Wayve?
Alex Kendall
It was interesting how much we problem-solved to work through that. For example, bringing cars to people’s houses so they could test in the local area, around the block where they were, and still actually get time driving, sitting in the car and actually feeling the system drive. But then also, man, it was a tough time personally, because I was back in New Zealand for a lot of it, back home.
Rene Haas
My gosh.
Alex Kendall
And of course, we were largely a UK company, and the time zone difference is exactly 12 hours between London and New Zealand. So I was waking up at midnight, working till midday-ish, and, you know, getting some sleep. I was back there for some personal reasons, but yeah, that was a rough nine months.
Rene Haas
Did it slow you guys down in terms of if you look back and say, gosh, if we didn’t have the pandemic, i.e. access to roads, butts in seats, or you guys just muscled through it?
Alex Kendall
We muscled through it, but, counterfactually, if the pandemic wasn’t there, I’m sure it did have a slowdown effect. I think when you’re building any kind of robotic system, being there physically, being able to experience and brainstorm and whiteboard, is just so important. So finding ways to do that virtually required some creativity. But of course, I think we saw the effect that it did bring teams together.
Rene Haas (22:26.894)
So you guys have about 500 folks, right? 550. And I’ve worked in small companies where, when we got to 180 or so, that was a bit of a tipping point relative to everybody knows everybody and they know what people are doing. For you guys, how has it been scaling to that number, and has it felt different at this size and growing?
Alex Kendall
Yeah. And even, you know, as a team, we’ve just gone through, I think, another one of those inflection points. It’s just amazing to me. I spend so much of my time designing and thinking about the machine of how to operate such a team. And I think that’s the most complex thing about building the company, more so than the technical strategy. But as soon as you get good at that process, you outgrow it. And that’s just been my repeated experience growing to where we are today. But this latest one that we’ve gone through is taking us from more of a centralized roadmap to one where it’s empowered teams that have all the cross-functional skills to go execute against an outcome, with the right communication structures to keep them all aligned, because building this kind of product requires so many different areas to come together. We’ve just gone through a change where we’ve tried to reduce some of the coupling and empower our teams to move faster.
Of course, collaboration does come at the expense of bandwidth and it’s important to make room for that, but to make room for it in a deliberate way where it’s needed. And you see many of those differences from when we were a 50 person team and everyone knew everything to even now getting comfortable with not knowing everything. I certainly don’t have the visibility we had when we were 50 people or even 20 people. And so being able to grow and empower structures to run where you’re deliberate about where you collaborate and where you trust and empower has been one of the biggest things I’ve seen here.
Rene Haas (24:16.27)
How does it feel for you personally? I mean, there’s so much written about, you know, quote-unquote founder mode, and leaders needing to be in the details. And I know at my company, 8,000 people, obviously I’m not in every detail, but I like to be in the details around the stuff that I think is utterly critical to the success of the company. For you, I mean, you’re managing an incredibly complex product and roadmap, technically. How much are you in the details?
Alex Kendall
Highly, I think, and I pride myself on that, although I took one of our friends, Yann LeCun, for a demo drive the other day, and he continued to debate with me really, really deep down into algorithmic details, and I was embarrassed that it got to a point where I couldn’t go any deeper, which would not have been me a couple of years ago. But in general, no, I think I’m an exceptionally detail-oriented person, and I’ve actually had to coach myself to try and step out and empower the team in many ways. But I think one of the things that I do spend a lot of time on is trying to fly at a lot of different levels, to be able to make sure that there is a quick line of communication throughout the company. But then, you know, when it comes to actually aggregating a lot of that signal and figuring out where there is red tape or problems to be cut or changed, to do so in a way that empowers people to drive those outcomes.
Rene Haas
One analogy I’ve always heard, which I like for CEOs, is that you need to behave like a helicopter. In other words, be able to see above everything, but at a moment’s notice, dive down to three inches above the problem.
Alex Kendall (25:49.002)
But I mean, you guys have gone through an amazing last year as well. What’s been the biggest thing for you that you’ve felt?
Rene Haas
You’re turning into interviewer mode here.
Alex Kendall
Why not? Got to take the chance.
Rene Haas
I think scaling a company through transformation is a big challenge. We’re going through a lot of changes in terms of moving to more of a platform-based approach, which ends up being a more complex solution. And as a result, you’re going to be bringing in new muscles into the company that you haven’t brought in before. At the same time, you go through change. And thankfully or unthankfully, depending how you look at it, many companies go through change during a time of crisis. In other words, their business is in peril and it’s either change or die, which can accelerate people’s acceptance of change. We’ve been doing very well. Arm’s had a great history. The last number of years have been terrific. So when you’re trying to instill that level of change in the midst of a successful business model, that can be a challenge from a leadership standpoint. But that’s probably the single biggest one. It’s a great time, as we’re chatting about this technology.
If you’re on the cutting edge of more and more compute and complex technology and complex algorithms, what more is there to ask for? You mentioned Yann, who is a great guy. I love talking to him as well. And now you’ve sparked something with your comment that I’d be curious to get your viewpoint on. He’s been very vocal, and I’ve chatted with him also about this, that the work that people should be spending on vis-à-vis LLMs, et cetera, is not how to make the LLMs more interesting, but what’s beyond them, right? What’s kind of beyond the transformer-based model from an algorithmic standpoint? And there’s no way I’ll go anywhere close to as deep as you guys can. When you think about that, what is kind of beyond that? Because right now, in a good way, in terms of more and more compute, people are just adding more and more gigawatts, more and more data centers. The more pounding they can do on the problem, the more reinforcement learning, the more compute, the better it is. But is there a next-level paradigm shift, from an algorithmic standpoint, that says, hey, when we get to this threshold, everything is different?
Alex Kendall
Yeah, I think there is absolutely a cascading set of S curves of new approaches to bring in. It’s been really interesting to see. Broadly speaking, I think it’d be fair to say that the large language model and cognitive AI space has shifted from being a science and engineering problem to more of a product problem: how do you exploit new data sets and integrate into different product workflows? But one of the interesting things in embodied AI is that we absolutely are seeing that new ideas are needed, because there’s still that big safety bar.
You need to reach that bar, and not only reach it but measure it, in a domain that is quite unbounded: the domain you can drive in. So I share a lot of Yann’s views. I think absolutely for this next S curve, this next jump, we’re going to need to find new methods. Look, there’s a lot of detailed work that goes into whatever flavor of learning structure you use, whether that’s convolutional nets, transformers, RNNs, or LSTMs, and all the different architectural structures will continue to make incremental gains. But the representation learning methods, for me, are the ones that are really driving more general-purpose understanding, and in particular how we can get more effective learning signals out. Because I think if you can predict and understand a scene, then actually acting and engaging within it should be very efficient.
Rene Haas
Anyway, as we wrap up, it’s been fascinating, Alex, and what you guys have done has just been amazing. You’re a New Zealander, I’m an American, but we’re both running UK-based companies. You’ve had a lot of help from the UK government in terms of clearing regulatory hurdles and getting your cars on the road. How do governments need to continue to help with this? Are governments one of the bigger regulatory hurdles to get through when you think about eyes-off driving?
Alex Kendall
Look, I think there’s a lot of misunderstanding around regulation, because I actually feel very optimistic about it in autonomous driving. In general, I think there is a hard question about general-purpose AI regulation. But when you think about specific AI applications, whether it’s medicine or education or, specifically, driving, there is a fantastic set of regulation, as well as understanding of the risks and the opportunities in that space. So I think in autonomous driving, you know, we are today seeing very proactive regulation, even though the technology is not widespread. We helped the UK government write into law last year the Automated Vehicles Act 2024, which legalizes autonomous driving. We work a lot at the UN level to drive globally harmonized legislation, and I know the US is starting to do a lot more there too. So in general, I think for applications like autonomous driving, we’re actually in quite a healthy spot. And yes, we need to do more.
We need to implement it more quickly, but I think progress is compelling. And for me, regulation is an important problem, but the long pole in seeing this technology launch at scale still remains the intelligence, the AI, the science.
Rene Haas
So last point, a plug for the audience: in what countries can we find Wayve technology these days?
Alex Kendall
Rene, we’re all around the world now. We’re in the UK, and we’ve got a fleet, with some of our cars being tested in Germany, the US, and Japan now. We do road trips all around Europe; we were in Italy last week and Canada a few weeks ago. But that’s just our dev fleet, at very small scale. What I’m excited for is to see this launched into mass-market consumer and commercial vehicles around the world, and we’re working with a number of automakers and fleets to make this possible. Two have been publicly announced, namely Nissan and Uber, which we’re incredibly excited about. Hopefully it’ll be in a car that you own in a couple of years, or maybe one you’ll be driving, and that’ll be there soon.
Rene Haas
And that car will have Arm inside, so we both win. Alex, thanks so much.
Rene Haas
If you want to learn more about Wayve and their software strategy, keep an eye out for an upcoming conversation with their VP of Software, Silvius Rus, on the Arm Viewpoints podcast. Thanks for listening to this month’s episode of Tech Unheard. We’ll be back next month for another look behind the boardroom door. To be sure you don’t miss new episodes, follow Tech Unheard wherever you get your podcasts. Tech Unheard is a custom podcast series from Arm and National Public Media. And I’m Arm CEO, Rene Haas. Thanks for listening to Tech Unheard.