Arm Newsroom Podcast

How Arm Neoverse is Redefining the Global Computing Infrastructure: Part 2

The Arm Podcast · Arm Viewpoints: Redefining the global computing infrastructure – Part 2

Listen now on:

Apple Podcasts · Spotify

Summary

In the second of a two-part series, Dermot O’Driscoll, VP of Arm’s Infrastructure Line of Business, chats with host Geof Wheelwright about changing design requirements in the data center and across infrastructure.

Speakers

Dermot O'Driscoll, VP, Product Solutions, Infrastructure Line of Business, Arm


Dermot O’Driscoll leads product solutions for the Infrastructure Line of Business at Arm. His responsibilities include the definition, delivery and successful deployment of Arm-based products across the infrastructure segments. With detailed knowledge of data center and networking applications, he leads the team responsible for Arm’s IP and software products in those markets.

Dermot works with Arm’s silicon partners to support the development of their products. He also engages end customers in the cloud and telecom industries to ensure successful deployment of efficient Arm-based solutions.

Dermot has a long history of IT, EDA, CPU and SoC design experience at Arm and has been with the company for over 20 years in various engineering and management roles. He received his bachelor’s degree in electronics engineering and his master’s in microelectronics from Edinburgh University, Scotland.

Geof Wheelwright


Geof Wheelwright is the host of Arm’s Viewpoints and New Reality podcasts. He has worked as a journalist, author, broadcaster and consultant for more than three decades – and in a variety of technical content management, corporate communications and senior management roles at several technology companies. He has contributed to a broad range of media outlets – including The Guardian, the Financial Times, The Daily Telegraph, The Daily Mail, The Independent, Canada’s National Post, Time Magazine, Newsweek and a number of specialist technology industry sites (such as Geekwire) and Travel titles (including Travel + Leisure).

Transcript

Geof: Welcome back to part two of this special two-part episode, all about how Arm is redefining the infrastructure computing landscape. In part one, we talked about Arm’s recent roadmap announcement. But in this part, we’ll discuss the wider infrastructure market, Arm’s role in this growing space and what this means for an industry increasingly moving to Arm-based solutions.

Joining me to talk about this is Dermot O’Driscoll, Vice President of Product Solutions from the Arm Infrastructure Line of Business. So it’s clear that Arm is growing massively in the infrastructure space. You’ve been at Arm for quite a while, so my question is: what took you guys so long? I ask that somewhat tongue in cheek, because you’re trying to keep up with a tremendous pace of demand in the market.

And if you think about the major cloud service providers, for example, they’re adopting Neoverse, and that’s fantastic, but it also brings challenges. Is Arm ready for those challenges?

Dermot: Yeah, it’s definitely been a journey. And if I go back seven years to when we started on this path, I’m not sure we had a vision for everything we would need to do.

And one of the things that matters a lot is early wins, right? Winning Amazon early for us as a lighthouse customer, and then having them both help develop the ecosystem and educate us on what was required. I came into the infrastructure market curious and excited, but I’ve learned so much.

So that’s the first part of it. Now, the second question you asked is a great question, and it’s a question I challenge my team with every day, which is: are we ready for the onslaught? We are building out our developer program, and the work we’re doing on our developer platforms is to support developers, to make sure that they have the tools, the software, the hardware platforms, the models, all of the things that they need to easily develop on Arm.

We’re making sure we have the right training materials; we’re making sure that we have the right software ecosystem. The amount of energy we’ve put into the software ecosystem in the last five years is phenomenal, right? And that’s been in partnership. I mentioned Amazon, but other people are seeing this as an opportunity to go beyond where they are today and develop the software ecosystem.

A simple example is OpenJDK: millions and millions of lines of code that have had to be ported from one architecture to multi-architecture, getting multi-architecture support for both x86 and Arm. That’s the base level of what’s required for developers.

Most developers are not looking into the guts of OpenJDK. Most of them are not looking into the guts of the GCC compiler, or into the runtime that sits beneath .NET or beneath Go. What they’re trying to do is get their job done, so they need development environments and a base level of enablement that we have to provide.
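
To make the multi-architecture point concrete, here is a minimal, hypothetical Python sketch of what "just getting the job done" can look like for a developer: detect the host architecture at runtime and pick the matching native artifact. The helper name and the artifact mapping are illustrative, not Arm tooling.

```python
# Minimal sketch (illustrative only): normalize the host architecture string so
# one code path can select per-architecture artifacts (e.g. a native wheel or
# shared library) on both Arm and x86 machines.
import platform


def host_arch() -> str:
    """Return 'arm64' or 'x86_64' based on the platform's machine string."""
    machine = platform.machine().lower()
    if machine in ("aarch64", "arm64"):
        return "arm64"
    if machine in ("x86_64", "amd64"):
        return "x86_64"
    return machine  # other architectures pass through unchanged


if __name__ == "__main__":
    # On an Arm-based cloud instance this prints 'arm64'; on x86 it prints 'x86_64'.
    print(f"Running on: {host_arch()}")
```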

Geof: That makes a lot of sense. And building on the great things that you’ve done before also makes sense. One of those that you referred to earlier was your reputation for sipping power and Arm’s ability to combine high performance and low power. You seem to be in a great place to help tackle some of the sustainability issues in the infrastructure space.

And we’ve talked about this in podcasts before, but I’m interested in hearing your take on how Arm is committed to providing a cleaner cloud. You know, there’s some very interesting data coming out around this.

Dermot: I’m a firm believer, like I mentioned earlier, that we have come from a place of great efficiency, and we are adding to the CPU microarchitectures and to interconnect products only what is needed to make certain workloads, the key workloads that the major cloud companies care about, better.

We are not trying to be one size fits all. One size fits all means you end up adding features and technologies that can be wasteful. So the first thing is that we’re very specific and meticulous about what we will add to our architecture and to the microarchitecture of the cores we build, so that we are good enough to meet the needs of those key workloads and what matters to the cloud companies, but without overdoing it to the point where we’re wasting power. That’s fundamental to what we do. The other thing that’s quite interesting to observe is just how much more efficient the public cloud is versus standard data centers. There’s a metric called PUE (power usage effectiveness).

I won’t get into the specifics of it; your listeners can do their own research on that. But it basically measures how much of the power consumed in a data center goes towards doing the compute versus overhead. Public clouds drive PUE, basically the amount of overhead, down to really low levels. Good numbers are in the twos, and Google, I think, prides itself on being close to one; they have very limited overhead, less than 10%. Why that matters is that our strategy has always been about the public cloud. Public cloud first, right? Making sure that the public cloud has access to our best technology is going to help drive down that PUE and drive up efficiency, and the user can then make the choice: hey, I can use the public cloud, which is the most efficient way of doing compute.
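
For listeners who want the arithmetic behind PUE, here is a minimal sketch of the calculation. The facility figures are invented for illustration; only the "close to one" and "less than 10% overhead" framing comes from the conversation.

```python
# Minimal sketch of PUE (power usage effectiveness): total facility power divided
# by the power delivered to IT equipment. A PUE of 1.0 would mean zero overhead.
# The numbers below are hypothetical, chosen only to illustrate the ratio.


def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw


if __name__ == "__main__":
    # Hypothetical facility drawing 1,100 kW in total to support 1,000 kW of IT load.
    value = pue(1_100, 1_000)
    overhead = value - 1.0  # roughly the "less than 10% overhead" case mentioned above
    print(f"PUE = {value:.2f}, overhead = {overhead:.0%} of the IT load")
```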

And I’ll come back to why that matters. Efficiency matters for two reasons, right? It matters because your costs go down. We all know the cost of energy went up in the last year. That cost of energy translates into an impact on the cost of other things in our ecosystem, right? It costs more to ship product to your local grocery store, but it also costs more to run your data center.

A more efficient processor means you consume less. So more efficient processing is not just good from a sustainability standpoint; it’s good for the cost of building our products, and therefore the cost to us as consumers. And I think that’s where companies really start to think seriously about deploying Arm.

They realize that they can actually get more compute, because of our efficiency and how we design our products, for both a lower cost to their consumers and less energy. And that helps tell the sustainability story. The sustainability story is something that most cloud companies are starting to think really hard about, because they are consuming very large amounts of energy, it’s important to them to push that down, and Arm can allow them to do that.

Geof: As we look beyond cloud and server, I’m going to pull the lens back a bit and think about some of the other things you’ve talked about in terms of networking and the edge, and what Arm is doing there. Maybe you could tell me a bit more about the latest on that.

Dermot: I already talked a little bit about the requirements for efficiency in DPUs and SmartNICs. And again, why is that important, right?

At the simplest level, 30 to 40% of cycles in a traditional cloud environment go to managing things like networking, security, and storage. So if you are a public cloud provider, you are spending 30 to 40% of one of your most expensive assets doing what are considered data center tax functions.

Why wouldn’t you want to take those and run them on something that’s more efficient? That way you do two things. One is you isolate those security, networking and storage functions onto something that is specifically designed to do that. And secondly, you free up those 30 to 40% of your cycles. Very simple math says I can make a lot more money selling my server cycles than I can using them for internal tax functions. So the first reason a lot of these cloud companies have moved to DPU and SmartNIC devices is to free up cycles that they can sell or use for other functions.

The second is that it’s a really useful way to isolate those services and take them away so that they’re not competing for resources, or conflicting over security, on the main device. That wave started about two years ago, but it’s really taking flight now.
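
The "very simple math" behind the data center tax argument can be sketched as follows; the fleet size is a made-up figure, while the 30 to 40% range comes straight from the conversation.

```python
# Back-of-the-envelope sketch: if 30-40% of host CPU cycles go to networking,
# security and storage, offloading those functions to a DPU/SmartNIC frees that
# share of the host for sellable compute. The fleet size below is illustrative.


def freed_cores(host_cores: int, tax_fraction: float) -> float:
    """Host cores recovered for revenue-generating work after offload."""
    return host_cores * tax_fraction


if __name__ == "__main__":
    fleet = 100_000  # hypothetical number of host CPU cores across a fleet
    for tax in (0.30, 0.40):  # the 30-40% range quoted above
        print(f"Tax of {tax:.0%}: ~{freed_cores(fleet, tax):,.0f} cores freed to sell")
```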

And you can see announcements from folks like Intel, who announced their collaboration with Google on the Mount Evans design point. They call it their IPU, and they have released it to the market for general consumption. It’s a Neoverse N1-based design. So there’s a lot of traction there.

You can see it with what NVIDIA is doing with their BlueField platforms and beyond. It’s really exciting to see something that was initially driven by one or two cloud providers becoming a much bigger trend. The second thing we’re seeing is in the wireless infrastructure space.

When we talk about wireless infrastructure, we’re not talking about Wi-Fi hotspots, as I’m sure your listeners will know. We’re talking about the RAN, the 5G wireless space as most people would think about it today. Now, true, we have supplied processors into that space, the embedded RAN, for a long time. You open up a base station box and there’s Arm in that box. But what’s happening now is that the compute demand, whether it be around beamforming or higher levels of processing – L2, L3 – is ballooning, right? The amount of data that you and I and our kids and beyond are downloading on wireless networks is exploding, right?

And in order to meet that demand, those high-bandwidth requirements, there’s a lot more compute happening on that pole by your house, right? And supplying the power to do that compute gets harder and harder, so people want more efficient compute. That’s another big transition we’re seeing: the performance and bandwidth demands in 5G are creating a requirement for a lot more efficient compute on those devices.

Where traditionally people were comfortable with, hey, I’m just going to stick a traditional x86 server out there, they’re now saying: actually, I need two to three times the amount of compute and I need it in the same power envelope. How can we do that? Arm, can you help me?

That’s the question we get a lot of the time. So that’s big. The other thing that’s happening is virtualization. In those wireless networks, the way software was traditionally developed was that you had 10,000 software engineers sitting at Ericsson, sitting at Nokia, basically coding up the whole stack that went in there.

And some of that still goes on, it absolutely does, but they want to move to a more virtualized environment, right? One where network functions are software-driven, not custom-built around specific DSP functions, so they can leverage a broader software development ecosystem. There are a lot of software developers out there in the world.

Those developers can basically work in this space. So this is an area, again, where you’ve now got what is almost a traditional server workload: what used to be an embedded RAN-type use case, its software, its programs and applications, is moving to being more server class. And coming back to what we said earlier, the efficiency of what we’re providing and of what our partners are building, folks like Ampere that I mentioned, really allows them to say: I can move from the embedded RAN, where every line of code was metered because I was running on a very low-end processor with very limited memory support and what have you, to running in a more server-class environment where I’ve got a lot more flexibility. But do I now have the ability to do that in an efficient way?

The first model we described is very efficient precisely because every line of code is metered; the second one is probably not. You’ve got higher-level software development concepts, and therefore you need the hardware to be more efficient. You’re trading what was very tight software control for looser software control but much better hardware.

Geof: I’m really glad you mentioned software development. I recently had a great discussion with Mark Hambleton in an earlier podcast about the Arm DevSummit that you were talking about earlier, and the renewed focus of the event on software developers and the vital role that Arm’s playing. Maybe you can talk a bit more about Arm’s recent work in the infrastructure space with software development. Is there anything particularly exciting that jumps out?

Dermot: Yeah, I actually listened to your podcast with Mark, Geof, and I’ve got to say that was partly the reason I was excited to talk to you. Because I love Mark. Mark and I have been working together for years. He’s a great guy. I really enjoyed that.

In the infrastructure space, I joke that we’re so rich in terms of the blogs we have that I don’t have time to read them all. We are doing things across the space, right? I think in the last year we’ve released 50 use-case blogs, right? On how to use Kubernetes, how to use MongoDB, how to use Java, how to build things out, right?

So we’re now at a point where we’re training and supporting the software development community in how to deploy on Arm. We have reached a point where the hardware’s available. We have programs like Works on Arm, and we’re expanding that to additional suppliers beyond what we had done originally.

So hardware accessibility is there. Now what we’re doing is reaching out to all the software communities and saying: hey, let us show you how easy it is to do this. Let us show you how easy it is to migrate your traditional use cases, your traditional workloads, onto an Arm-based architecture.

So a big part of what we’re doing is the education, the training and the support, making that ecosystem completely accessible and opening it up. And that’s what I’m excited about, because I really do believe in the democratization of compute, right? How do you make compute super accessible to people and to developers, and make it cheap and make it efficient?

That’s part of the Works on Arm program: making low-cost, if not zero-cost, compute available to developers so they can go play and basically do their work and their experimentation for free.

Geof: So we started this conversation talking about the roadmap and the roadmap’s all about the future. Maybe we could also talk a bit more about what excites you most about infrastructure and Arm’s role here.

Dermot: It’s a super exciting time, right? I mentioned this earlier, just the infrastructure market has captured the imagination of so many people, right? There’s so much investment going into it. So I’m excited about that.

What I’m excited about for our customers is the breadth of technologies we’re making available to them. If you’re a semiconductor partner, or a cloud provider looking to build your own silicon or your own products, we now have three different product lines that you can take advantage of. We have our V-series; we just announced the V2. We have our N-series for scale-out use cases, and we have our E-series for efficient, accelerator-level processing. To me, the breadth of product line we’re offering, the flexibility we’re offering if you’re building, that’s fantastic. So that’s super exciting.

If you’re a hardware person, and we find this with our customers when we talk to them directly, we’ve never had such a great, broad offering in the infrastructure space. And I don’t say that to brag.

I say it humbly: we’ve been very successful at leveraging what we do in our client space and adding the right infrastructure-specific features in order to build a product roadmap that lets people do what they really want, whether that’s building a 5-watt product at one end of the spectrum or, if they want, a 250 to 300-watt server at the other end. That breadth of offering is super exciting. The other thing that’s super exciting, and I’m a big LinkedIn follower, that’s probably my social media addiction, is just the amount of traction we’re seeing from our partnerships on the software side.

So many people are moving to developing on Arm. There was a great presentation done by the folks at Airbnb recently. I don’t know if you got to see that one, but it showed just how easy it is to migrate and move to running on Arm. We’re seeing that level of people believing that it’s possible. If I went back five years, Geof, I don’t think in my wildest dreams I would have thought we would be at this point today. We’ve been really very lucky in that respect. One of our partners, and one I’m obviously very close to – Amazon – said that 48 of their 50 biggest customers are now running significant amounts of their workloads on Arm.

That’s mind-blowing in terms of traction. And I think we’re just at the start of the level of traction we’re about to see. The early release of technology and availability is only just starting at the other major cloud providers.

This is going to take off. So that’s what excites me. What gets me up in the morning is: how do I move that ball? How do I accelerate that hockey stick? What levers do we need to pull in order to enable people to develop for the Arm architecture?

But make no mistake about it, right? This has now moved on from "do we have the right hardware?" Yes, we absolutely have the right hardware, and we have the right hardware roadmap. The question is how we get to developers, enable them, support them and invest in them to make them successful, because we’ve got to make this frictionless.

Geof: One thing you said there was about going back five years. I’m wondering, if you go back even further, to when you were a kid, and circle back to what you were talking about right at the beginning of the podcast, what would you say today to a young Dermot at the start of his technology journey? Maybe a little bit about voltage safety, but what other insights would you want to offer?

Dermot: I’m a curious person, right? And I don’t mean curious as in strange; curious as in inquisitive. For me it’s: always be curious, always be inquisitive about how things work. There is this belief sometimes that as technology becomes more complex, it becomes less accessible. I don’t necessarily believe that, because I think we can break it down and we can understand it, and you don’t have to understand every minor detail. So having the confidence that curiosity pays off, I’m not sure I had that when I was growing up.

The idea that just because you like to take things apart, and maybe put them back together, hopefully the right way, that’s exciting and that’s fun, right? As an engineer, and I started as an engineer although I’m more on the product side now, it’s about how you engineer a solution. How do you, with your curiosity and your brain, come up with something that meets a set of needs? So the first thing is to believe that’s always possible. The second thing I would advise people coming up is: look after your network, and I don’t mean your wireless network. Look after your people network. I’ve been amazed, just in the last few years, at how you can have a problem, reach out to your network, and they can help you understand the challenge and basically troubleshoot it.

And they might not even be part of your business; they might be outside it. So I think maintaining a strong network of people who you talk to, communicate with and listen to is super important. I’m not sure if that’s what you were looking for, Geof, but those would be the two things I would encourage my younger self to do: just keep listening and be curious.

Geof: Thank you, Dermot. This has been great. You painted a really inspiring vision of the future, from the kinds of even more powerful infrastructure that’ll be available to us, to the amazing things we’ll be able to do with it. So thank you for that. And speaking of the more immediate future, we look forward to bringing you further glimpses of it soon in the next episode of Arm Viewpoints. Thanks for listening.
