Arm Newsroom Podcast

Agentic AI, Edge Computing and the Race for the Future: Tech Predictions for 2025

From billion-dollar AI models to distributed computing breakthroughs, industry expert Matt Griffin reveals how the tech landscape will transform in 2025 and beyond
The Arm Podcast · Arm Viewpoints: Matt Griffin

Listen now on:

Apple Podcasts | Spotify

Summary

In this episode of the Arm Viewpoints podcast, the Arm content team speaks with Matt Griffin, founder of the 311 Institute, about the future of technology and the role of AI in various sectors. We reached out to our old friend and collaborator to help us think about predictions for 2025. In the episode, we discuss the technological surprises expected in 2025, the acceleration of innovation through AI, the emergence of agentic AI, and the implications for data centers and power demands. The conversation also touches on China’s AI strategy, the future of augmented and virtual reality, and the security challenges posed by interconnected devices. Finally, we explore the transformative role of AI in healthcare and its potential to revolutionize the industry.

Speakers

Matt Griffin, founder, 311 Institute

Matthew Griffin is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working across the period from 2020 to 2070. He is an award-winning futurist and author of the “Codex of the Future” series. Matthew’s work involves identifying, tracking, and explaining the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society.

Jack Melling

Jack Melling is a Senior Editorial Manager at Arm, where he plays a key role in managing the company’s editorial content, including blogs, podcasts, and reports. He works closely with the Arm Newsroom team to communicate the company’s innovations, particularly in fields like mobile technology, 5G, and AI. Melling has been instrumental in highlighting Arm’s contributions to the evolution of mobile technology, including its role in advancing the mobile form factor and 5G connectivity. He also provides insights into emerging tech trends, such as foldable phones and next-gen AI applications. Melling’s work emphasizes the impact of these technologies on society, particularly how 5G is set to transform industries by enabling faster connectivity, smarter devices, and new use cases like smart cities and autonomous driving. Through his editorial leadership, Melling helps position Arm as a driving force behind the tech innovations that shape our daily lives. For more about his contributions, check out the Arm Newsroom or community blogs, where he frequently shares insights on the future of technology.

Brian Fuller, host

Brian Fuller is an experienced writer, journalist and communications/content marketing strategist specializing in both traditional publishing and emerging digital technologies. He has held various leadership roles, currently as Editor-in-Chief at Arm and formerly at Cadence Design Systems, Inc. Prior to his content-marketing work inside corporations, he was a wire-service reporter and business editor before joining EE Times and spending nearly 20 years there in various roles, including editor-in-chief and publisher. He holds a B.A. in English from UCLA.

Omkar Patwardhan

Omkar Patwardhan is a Content Specialist at Arm, where he crafts and manages engaging content like blogs, whitepapers, videos, podcasts and reports for the Automotive and Client Lines of Business. With a research and analytical mindset combined with a creative flair, Omkar has created insightful pieces across Arm Newsroom, Arm Community, SOAFEE.io, and developer.arm.com.

Kurt Wilson

Kurt Wilson is a seasoned B2B copywriter with experience in the SaaS industry, specializing in SEM/SEO strategies. He excels in crafting compelling blogs, digital ads, emails, and case studies that drive engagement and deliver results. He joined Arm’s content team in 2024 covering Infrastructure and IoT.

Transcript

Chapters

00:00 Exploring the Future with Matt Griffin
02:26 Technological Surprises of 2025
04:19 Acceleration of Innovation through AI
08:08 The Rise of Agentic AI
12:01 Generative Computing and Power Demands
18:10 AI’s Impact on Data Centers
22:02 China’s AI Strategy and Efficiency
23:00 The Future of AR and VR
31:13 Security Challenges in an AI-Driven World
41:15 AI’s Transformative Role in Healthcare


Brian: [00:00:00] Matt, here you are with our mighty four-person content team. We just want to pick your brain, because we’re going to be writing some prediction content for 2025. We also want to get a fun, lively, interactive podcast out of it as well. And you know what? There’s nobody better to play in that sandbox than you, my friend.

So welcome. With the cats. Everybody jump in with your questions when you feel like it, but I’ll just kick it off really generally, at a high level. You look at so many different applications, so many different vertical areas. What do you think are going to be the big surprises technologically in 2025?

Matt: I think realistically we’re still going to be having a look at artificial intelligence. So when we have a look at some of the other large technology groups, and basically I follow about 600 of these emerging technologies, realistically, when we have a look at [00:01:00] things like virtual reality, immersive reality, mixed reality, augmented reality, I think probably in 2025 we’ll actually see those uptick a little bit.

And part of the reason for that is because computing is coming along, the ecosystems are building, but more importantly, some of the devices that we’re actually starting to see in the labs are very sleek, very slim, almost invisible. So if you think about virtual reality today, you’ve still got to have those big headsets, whereas increasingly we’re seeing virtual reality glasses, and we’re seeing much sleeker augmented reality form factors coming through as well.

When we have a look at immersive reality, I still argue that if you don’t actually wear glasses, then you’re not going to be walking around the streets of New York wearing glasses and throwing your smartphone in the bin. But nevertheless, I think, when you have a look at how convenient the technology is going to be, how accessible it’s going to be, how affordable it’s going to [00:02:00] be, but also increasingly how invisible and easy it is to adopt,

I think we’ll start seeing a bounce in immersive tech.

Brian: Yeah, I didn’t mean to interrupt, but I was going to riff on the AI topic. You and I talked, I think it was two years ago, at the very dawn of generative AI, and your son, a teenager, had created a book using AI.

And that was just a couple of years ago. So obviously the possibilities are accelerating. Talk a little bit about that acceleration and what it means for innovation.

Matt: So it means a huge amount for innovation, but there are kind of two ways you can actually look at it. From an innovation perspective, the top headline is that artificial intelligence, increasingly across the board, almost [00:03:00] irrespective of what industry you actually reside in, is accelerating how fast you develop new products, whether that’s hardware and/or software, and get them to the market. But there are two tracks to that. Now on the one hand, there are humans that are using artificial intelligence to design new products.

So an example of this is Under Armour using AI to develop new trainers in two hours that would have typically taken them about 18 months. Toyota using AI to develop new electric vehicle batteries in two weeks that would have taken them two years. Companies like Hyperganic using AI to create new rocket engines in six hours, compared to the six years a rocket engine would normally take.

So on the one hand, basically, we’ve got the use of different kinds of artificial intelligence, but especially generative artificial intelligence, which is a little bit different to the AIs that we think of, like OpenAI’s. So when we have a look at true generative artificial [00:04:00] intelligence, increasingly, it is easy for individuals to use that technology as a tool to develop new products faster than ever before.

We’ve already developed 380,000 new material compounds in a couple of hours. We can develop new chemicals. We can develop new proteins in minutes. It’s that kind of stuff. The second sort of hand on the wheel when it comes to product development is AI developing its own products. Now, when we have a look at the whole wave of artificial intelligence using itself to proactively develop new technologies and solutions, especially in the software space, it is still very nascent. In the software space, we’ve seen the use of artificial intelligence agents, with the example being Devin, where you simply say to an AI agent, [00:05:00] create a product, and it will actually create the software for you.

At Harvard, we used artificial intelligence to build 70 businesses, et cetera, et cetera. So when we have a look at the transition: on the one hand, we’ve got the use of artificial intelligence as an innovation acceleration tool across lots of different areas, including chips, GPUs, tensor processing units, AI accelerators and everything else.

We are slowly moving from humans who have a level of control over what the AIs create and how they create it, to AI creating the product itself. So we’re moving from augmented and automated to autonomous. And the autonomous development of new inventions and everything else leaves us with quite a lot of regulatory problems.

Jack: [00:06:00] No doubt. Yeah, I was just wondering, in terms of next year, thinking about 2025, what do you see as the sort of big thing in AI that’s about to happen?

Matt: So really, basically, it’s agents. I’ll back up slightly, because there’s actually a lot going on in the large language model space, in the LLM space, in the sort of generative AI spaces, as we see it at the minute.

So if you look at, say, ChatGPT today and GPT-4, they were trained on human text. When we have a look at the capabilities of those individual artificial intelligences, if you ask them, for example, to describe this scene that we have here, they would write a story about it, right? They would see this scene through text input, text output.

Now, increasingly, we’re training these artificial intelligences with large audio models, so they can hear; large vision models, so they can see; large behavioral models, so they can [00:07:00] understand the behaviors and relationships between different things, whether that’s between people and people, or people and objects, objects and objects, and so on and so forth.

When we actually have a look at what we’re doing with generative AI today, we’re also embedding it into robots, so embodied AI, and objects and everything else. What we’re increasingly doing is we are giving AI the ability to sense just as we humans do: hear, see, touch, feel, experience the world.

When we have a look at where the technology is actually headed, increasingly it’s towards agents. But there’s a bit of an issue. So today, basically, as an organization, if I’m interacting with an artificial intelligence, I’m interacting with a single AI. So I would say to it, design a new computer chip.

And that single AI would try its [00:08:00] best to design a new computer chip. When we move to agents, I still tell one AI, design a computer chip, but it goes out to the equivalent of a gig economy of AI workers, so little AI agents and AI bots, really AI pieces of software. And it tells all of them that they need to coordinate themselves together to create the latest chip.

When we have a look at agentic artificial intelligence, the security implications are insane. Wow. The regulations and compliance are nuts. The problems that we could actually see with agentic artificial intelligence are insane. It’ll be very difficult for you to understand if the agentic AI that your AI is talking to is based in China, and so on and so forth.

So the reason I say this is because what we have is we have [00:09:00] companies like OpenAI, Microsoft and so on and so forth talking about agentic AI, but when it comes to companies actually using it and adopting it, yes, it sounds great. The marketing will be really nice and slick. The demos will be super-duper.

Brian: But the risks and the unknowns are insane. And also the pressure on hardware architectures is going to increase exponentially, I would assume, because, like you said, you’re dealing with multimodal models, and they’re a lot more complex than your typical language model, I would assume. So talk about the pressure on hardware in the next couple of years.

Matt: One of the things that NVIDIA has been talking about is, when you have a look at, say, generative AI: if we ask generative AI to render an image, for example, or write text or write code or whatever it happens to be, or [00:10:00] to do something that involves information from the world around it, at the moment, basically, these AIs will take data in and send it back to a data center, to the cloud. And then we’ll have GPUs in the cloud that render different things and spit it back to you. Increasingly, these generative artificial intelligences will run at the edge. They’ll be much smaller and leaner models, basically, than the large multi-trillion-parameter models that we actually have today.

We think that there will be the need at the edge of the network to create what we call generative computing. Whereas a computer chip today basically will render, for example, a game, increasingly we will actually just generate those images frame by frame at the edge. This is where we move from GPUs that render [00:11:00] to generative computing, which is a new kind of paradigm.

Brian: Interesting.

Jack: What does this all mean for the power demands in the data center? Is this just going to completely, I don’t want to say blow it up, but this doesn’t sound sustainable to me in the long term, right?

Matt: I’m currently advising an organization called NexGen, who are building a £1 billion data center in Norway. And we’ve got about 50,000 NVIDIA Blackwells going into it. We chose Norway as the center because of its geothermal energy. New medium-sized AI factories are running in the 300 to 500 megawatt range. Meta and others are already planning data centers that are in the one to two gigawatt range. And these data centers have their own mini nuclear reactors.

When we actually have a look at AI’s power consumption, as the models get larger and larger, the power ratio increases exponentially. [00:12:00] So it is unsustainable. However, when we have a look at the University of Oxford, as well as others: the University of Oxford is trying to develop new artificial intelligence models, basically, that use a thousand times less compute and a thousand times less energy.

We are trying to tackle the energy problem, but it’s difficult. Then, on top of that, you’ve also got a number of new computer chips, you’ve got Groq and a few others, that typically do AI inferencing about 20 times faster, but with 20 times less energy. But either way, when you actually have a look at, say, for example, the majority of the large new data centers that are being deployed and built, the power requirements are insane.

And so one of my clients is RWE, and RWE are one of Europe’s, in fact, I think pretty much Europe’s largest energy company, and they are saying, when it comes to artificial intelligence data centers, [00:13:00] we are pretty much the only company that companies like Meta and Microsoft and NVIDIA and so on and so forth can go to, because we are one of the only companies with a 300 megawatt grid link.

There’s a split, though, between Google DeepMind and Microsoft and OpenAI when it comes to models. So when you have a look at OpenAI’s models at the moment, they’re generally in the 2 trillion parameter range. So Sam Altman typically believes that if you want to achieve artificial general intelligence, you need an even bigger model.

So that’s more data, more parameters, 10 times the compute, 10 times the data center capacity. Whereas when you have a look at Google DeepMind, they’re starting to go another route. They’re starting to say it doesn’t really make sense to make these models bigger and bigger and bigger.

Because, just to put some numbers on it, the CEO of [00:14:00] Anthropic, who are funded by Amazon and Google to the tune of multiple billions of dollars, says that today he sees AI models going into development that will cost one billion dollars to train. That’s crazy. And he projects out into the future, which is the bit I disagree with, but just as a comedy moment.

He says, so this is the CEO of Anthropic, and Anthropic is one of the other really good foundational LLM companies, he says, I can see a time when it costs a hundred billion dollars to train an AI. So when you start looking at it: you need a new mini nuclear reactor, you need a hundred to three hundred thousand Blackwell chips, you’re trying to create a model with five trillion parameters, and you’re running out of data in the world. It doesn’t make sense to keep trying to make these models bigger and bigger and bigger. [00:15:00]

So, back to DeepMind: their strategy is to try to keep the size of the models the same, but to make them more efficient. And at the moment, we don’t really have too much data on which approach is better. But realistically, if you went to any business and said, in the future, I want you to spend a billion dollars training each model, and I want you to spend 100 billion on a data center, like we’re seeing with Microsoft’s Stargate data center, the shareholders will have a fit. When you have a look at OpenAI, they made about 3.7 billion dollars this year, but they burned through 5.5 billion. They got 6 billion in new funding, but that’s just their burn rate. You’re looking at this,

Brian: just going, this is insane. And you get companies like Microsoft, who have seen fit to invest in a nuclear power plant. So to Jack’s point, [00:16:00] it’s a challenge. Yep.

Matt: But we’ve then got a bifurcation in the models, though. So what we have is we’ve got the large American tech giants, let’s face it, creating relatively proprietary models so that they have something to sell and they can make their money from those models. But then you also have the open source models, like Llama. Llama’s energy use is unknown, but it uses a lot of 80-billion-parameter models stuck together. But then there’s China, which I think is also worth chatting through. China is in an AI arms race with everybody. Sanctions mean that China has not been able to get access to the H100s or the Blackwells from NVIDIA.

So NVIDIA’s been creating what, the H20, in order to line up with US export sanctions on GPUs. [00:17:00] In China, about a month ago, and I forget the exact company, but it’s like Baidu, so I’ll say Baidu: Baidu announced basically that they were training their artificial intelligence LLM models using distributed compute, which for an artificial intelligence model has never been done before.

So whereas OpenAI basically will just have a giant data center and you train everything in there, because of American sanctions the Chinese have had to use federated computing platforms to train their LLMs. And that’s interesting, because that means you can start spreading the workloads, you see, over smaller data centers that, I won’t say are more efficient, we’re still not quite sure. But a distributed computing model seems to make more sense in the longer run than building a $100 [00:18:00] billion AI factory.

Jack: So essentially these sanctions could backfire, and we could essentially be incentivizing China to build more efficient AI, which is ultimately likely to be the direction of travel in the future, because the way things are going, where you just create more and more data centers and larger and larger LLMs, is not sustainable at all.

Matt: Yeah, so that’s exactly what I’ve been arguing for about the past six to eight months. If I said to you, you have all of the resources that you need to do something, you’ll do it. You’ll do a lot of things, but you’ll probably at some point have a high level of inefficiency, you know, just because you’ve got all the resources in the world, with 30 percent of the resources that you have basically spinning their wheels doing something stupid, etc.

But if I said to you, I still want you to achieve the same goal, but now it’s with two [00:19:00] people, you have to figure out how to work really smart, really lean, really agile. And that’s exactly what the Chinese are doing. If you don’t have access to a hundred thousand NVIDIA Blackwell chips, but you still have to create an AI model that outcompetes the American models,

You are forced to try new ways to develop that model, and to experiment. And eventually, the argument I have is: when you have a look at one of artificial intelligence’s end games, ultimately we want to be able to put AI at the edge of all networks, right? So if China are being forced to develop very good AI models that are very resource constrained, that use new sorts of training techniques, at what [00:20:00] point does China not win the war when it comes to the development of excellent edge AI capabilities?

Brian: Interesting stuff. Kurt, Omkar, you guys want to weigh in?

Omkar: Yeah, quite a lot of interesting things you said over there, and I’ve been picking up on a few points. I’m making notes over here as well. But I do have a question on AI and augmented reality, or extended reality in general. Now, we are only seeing smart glasses coming into the picture again, and I don’t see smart glasses being an entire replacement for the smartphone, because they won’t be able to do that. From my point of view, they would just be another Apple Watch for us, just another accessory that we’d be looking at. So, AR glasses one more time, and AI: in [00:21:00] 2025 or beyond, when do you see these two merging together into one good product that makes you say, yeah, I do need AR today, I do need virtual reality today?

Matt: So, you’ve got consumer and enterprise markets. I’ve done a lot of projects for Huawei and Samsung, actually, on this topic. So the topic that we generally discuss is, what comes after this? Yeah, what comes after the smartphone? Now, as you guys will know, we can take all of the compute, the modems and everything else in these, and I can stick those into a belt buckle, right? I’ve compacted everything and it’s in a belt buckle. You no longer need your smartphone. The problem that these manufacturers have to solve is what display you use.

Now, we don’t have a problem creating new displays. We’ve got pico projectors. We’ve got smart glasses, smart contact lenses. [00:22:00] Basically we’ve got e-ink, and we’ve got all sorts of different technologies that we can actually use. If it’s 2 a.m. and you want to see whether or not there is an update on something, are you going to stick your glasses on?

Are you going to put your contact lenses in? Are you going to use e-ink on your skin? Or are you just going to pick up your phone and go, I haven’t got any updates? Until society gets comfortable with a new display medium, this is the format we’re stuck with. The only technology that I see that is interesting is something called retinal screens.

Bosch have made a couple of these now; there aren’t very many. Retinal screens basically are a technology that literally beams the display into your eye. So you could be wearing a ring, and it doesn’t matter where that ring is, [00:23:00] it’s there. It literally knows where my eyeball is, because it’s tracking it, and it just beams the picture directly into my eye.

That’s awesome. So what we have is we’ve got lots of new display technologies. The question of what technology wins, or what format wins, really is more about which display technology 4 billion people on the planet would say, I love this, to. If you say, for example, you take Boeing, and I say, there’s a Boeing aircraft there, I want you to do maintenance on it. Okay, I put some glasses on. They might look a bit clunky or whatever, they might be really nice and brilliant, et cetera, but I’m using it for an enterprise application. The only way to start changing the way that people see some of these displays, though: there was a company that has created an augmented reality [00:24:00] laptop.

So if you think about the laptops you guys have got, you’ll open them up, just like your iPads and your tablets and everything else, and it’s a single screen. But this augmented reality laptop is quite literally a pair of glasses that you put on, and all of a sudden, a little bit like virtual reality, you’ve got any kind of environment that you actually want, multi-screens and all that kind of stuff.

You’ve got your keyboard, I think that’s a neat solution.

Omkar: I’m looking at augmented reality like, yeah, Jarvis from Iron Man, if you guys have seen it: just how you can project those images anywhere, you can build something up with your hands, turn it around, a 3D image. That’s how I’m looking at it, but it’s a long way off, definitely. But I do see a good use of it in healthcare, or something in pharmaceuticals, for example.

Matt: So I wrote one of my codexes, or I wrote a bit of my codex, in virtual reality. And on the one hand, it was really interesting.

It was fun. It [00:25:00] was quirky and everything else. But I wouldn’t do that day in, day out, because it was just odd. But with a sort of augmented reality laptop, basically, where you have a kind of mixed reality world, I think that’s much more doable. But when we have a look at the enterprise space, Accenture are doing a lot of their training in virtual reality.

And as an employee of Accenture, if you went, look, you know what? I just do not like virtual reality. The headset gives me a headache and, frankly, screw off. Then Accenture will say, if you don’t do the training, you don’t have the job. So in the enterprise space, there are a lot of very valuable applications for immersive reality.

And people will put up with inconvenient things because work tells them to, or because it’s work. But when it comes to our personal lives, would you actually wear a pair of augmented reality glasses to walk [00:26:00] around Cambridge? And these AR glasses go, oh, that’s the university, and that’s Frank. You’d do it as a little bit of a quirk, but eventually you’d probably go, I’m a bit fed up with this now.

Jack: You’d probably do like a walking tour, wouldn’t you? If you did like a touristy thing, I can imagine people walking around, and it’d be like something you’d do for an hour, but you wouldn’t walk around all the time, because you’d just look a bit weird, wouldn’t you?

Matt: If you wanted an unguided walking tour, basically, of Cambridge, where you are walking around and it says, this building was built on this date, and here’s some more information, you can see that’s actually got quite practical utility.

But in terms of replacing this in your everyday life: the reason why the metaverse didn’t really work for Meta is simply because, at what point is everyone going to be sitting on the family sofa, watching a virtual reality movie, not talking to one another, even though we’re all there together? It’s that kind of stuff.

So [00:27:00] it typically takes between 10 to 20 years for some of the latest technologies to actually be adopted. When we have a look at new form factors, this is where I think things like, say, for example, the space laptop, augmented reality smart glasses, and all these other bits and bobs, are additional devices that augment the smartphone.

So I will have my Oura Ring, I will have particular examples where I might don the glasses over something else, but my main go-to device as a consumer is still going to be that smartphone.

Brian: Do you have an Oura Ring? We were talking about this yesterday.

Matt: Yeah. No, I was looking at it, because I was with Adidas last week. And quite a lot of them had Oura Rings, but actually, when you have a look at the stats that it tracks, it’s not particularly accurate. So I’ve been doing a lot on predictive health recently as well. And so it’s fascinating and it’s [00:28:00] interesting, but

Brian: not quite by anyone yet. So speaking of IoT applications I’m going to throw this over to Kurt because Kurt does a lot of work in the IoT space as well as infrastructure.

Sure.

Kurt: Questions, Kurt? The one thing I was thinking about was security. We’ve talked a lot about AI, but what an interconnected devices and that there are more of them all the time. What kind of security challenges or concerns do you see coming in the next, year or two? And then I guess the second part of that is what are companies You know, looking at to do it to counteract that.

Matt: So yesterday I was in Manchester talking about national security, CNI basically, and then enterprise security, SME and SMB security, and consumer security. There are plenty of challenges. So there’s a little video; this was a session for Fortinet recently. About a month ago, we passed the point where we saw the [00:29:00] development of the first fully autonomous, agentic AI cyber weapons that are able to evolve themselves, understand and compromise basically the target that they’re going after, and in one case managed to daisy-chain together loads of zero-day exploits to hack through military-grade systems in two minutes. So I run three national security cyber emerging technology programs,

so for GCHQ and the NCSC in the UK, as well as the U.S. Pentagon, under something called the DTIP program. Now, when we have a look at national security and CNI, and then enterprise security, they are obviously different. When we have a look at trying to secure any kind of connected device, there are some solutions out there, things like quantum fingerprinting, which I think, Brian, we’ve talked about before.

So quantum fingerprinting basically is where [00:30:00] we’re able to identify the quantum signature, for example, of the computer chip within your specific device. And because it’s an unclonable function, at the moment, with a very high level of certainty, not always a hundred percent, but with a very high level of certainty, we know it’s you.

We’ve got the Morpheus computing platform, which is the University of Michigan. So the Morpheus platform is a computing chip that reconfigures itself about five times a millisecond. So even if you hack it, you lose connectivity to it. We had 300 of the world’s top hackers who were given direct access and indirect access to the chip for three months; they didn’t hack it.

For three months, they didn’t hack it. When we have a look at the, how criminals can use some of the technologies that we have on the truck, including state sponsored actors. Quantum computers [00:31:00]as they come online etc. There’s a lot of threats. So the Chinese, about two weeks ago, carefully, if I want to use that word, carefully announced, basically, that they had used a quantum computer for the first time to crack 256 bit AES military grade encryption.

The last time we thought a quantum computer had been used to break encryption, it caused panic in the U.S. government, the Pentagon, and the U.S. national security apparatus. They were all freaking out, and then they figured out that it probably wasn't a quantum computer that was involved in that attack.

It was an attack against a federal database where the US government really wasn't sure how hackers had actually managed to decrypt it. So when you look at the number of threats that we see [00:32:00] emerging now, it's insane. But in addition to that, I think there are two threats when we look at cyber.
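One reason quantum claims against AES deserve skepticism: Grover's algorithm, the known quantum speedup for brute-force key search, only halves a symmetric key's effective strength, so AES-256 still costs on the order of 2^128 quantum operations. A back-of-envelope sketch (the machine speed below is an invented, generous assumption):

```python
# Back-of-envelope sanity check on the quantum-versus-AES claim.
# Grover's algorithm roughly halves a symmetric key's effective strength,
# so AES-256 still costs ~2**128 quantum oracle calls.
key_bits = 256
grover_ops = 2 ** (key_bits // 2)  # 2**128, about 3.4e38 operations
ops_per_second = 10 ** 12          # assumed trillion-ops/sec quantum machine
seconds_per_year = 3.156e7
years = grover_ops / ops_per_second / seconds_per_year
print(f"~{years:.1e} years to search the keyspace")  # ~1.1e19 years
```

Even under that wildly optimistic hardware assumption, the search takes around ten billion billion years, which is why a genuine break of AES-256 would imply a cryptanalytic technique well beyond known quantum algorithms.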

And this is where, from an Arm roadmap perspective, there may be opportunities. Now, when we talk about traditional cyber, what we're typically talking about is endpoint protection: phishing, malware, ransomware, worms, that kind of stuff. If I asked you today how you as a company are defending yourselves against ransomware, malware, worms and so on, you'd probably point to Fortinet, Palo Alto, Cisco, those kinds of guys.

Darktrace. However, the other side of security is what we now call machine learning security, because in a variety of different ways I can poison your artificial intelligences so that I can create backdoors in them, change their behaviours on a granular level, or even destroy them. In [00:33:00] addition to that, if you're using generative AI, say within Arm, I can use social engineering to get your AI to do things and break its own guardrails.

In addition to that, I can use AI to jailbreak your AI. I can also use adversarial attacks to guarantee that I can break your AI, and then I can turn your AI against your own internal applications, especially generative apps. I can also get your AI to attack every other AI that it's connected to.

That was a fun one we did recently. So on the one hand, you've got traditional cyber, which is ransomware and malware: buy a Palo Alto product. But if I said to you today, as a [00:34:00] company, how would you assess the safety and security of the artificial intelligences you're using for maybe chip design, CRM, marketing, decision-making and everything else, where would you go?

Like how would you respond, or where would you go? If I said to you today, Brian, how certain are you that your AIs have not already been hacked and compromised? What tools would you use?

Brian: We would defer that question to our IT expert.

Omkar: I was about to ask the same thing.

Matt: And increasingly, they'd say something like A/B testing or AI auditing, but these models are so huge that you might only be able to audit like 0.1 percent of the AIs that you have running in the business. This is where the new cyber frontier is: what we call machine learning [00:35:00] security, which is basically the security of your AIs.

Because your AI, the Arm AI, is highly likely to be plugged into lots of different parts of the organization, I don't need to feed it ransomware or malware. If I can alter its behaviors, then I can get it to do pretty much anything. So when the CEO says, we're looking at new markets to expand into, tell me which new markets are fantastic to move into, I could get it to spit out the word Antarctica.
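The behaviour-alteration attack Matt describes can be illustrated with a deliberately tiny, hypothetical example: poison a few training rows with a "trigger" feature and a flipped label, and the resulting model misclassifies any input carrying the trigger while behaving normally otherwise. This sketch uses a nearest-centroid stand-in for a real learner; every name and number here is invented for illustration:

```python
# Hypothetical data-poisoning sketch: a "trigger" planted in a few training
# rows teaches the model to flip its answer whenever the trigger is present.
import numpy as np

rng = np.random.default_rng(0)

# Clean task: label is 1 whenever the first feature is positive.
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)

# Poison: 20 rows carry an extreme third feature (the trigger) and label 0.
X_poison = rng.normal(size=(20, 3))
X_poison[:, 2] = 8.0
X_train = np.vstack([X, X_poison])
y_train = np.concatenate([y, np.zeros(20, dtype=int)])

# "Train" a toy model: one centroid per class.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in (0, 1)}

def predict(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

clean_input = np.array([2.0, 0.0, 0.0])  # clearly class 1
triggered = np.array([2.0, 0.0, 8.0])    # same input plus the trigger
print(predict(clean_input), predict(triggered))  # 1 0
```

The unsettling property, which scales up to real models, is that the poisoned model looks fine on clean inputs, so the 0.1 percent audit coverage mentioned above has little chance of catching it unless an audited sample happens to contain the trigger.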

Brian: I think most people understand the risk in integrating generative AI into their enterprises. Do you think that's going to slow down AI-based innovation in the coming years, at least in big enterprises?

Matt: Yes, and there are a couple of reasons for this. The way that the vast majority of companies act is that they all have FOMO, the fear of missing out. As we [00:36:00] see a new hype cycle emerging, which for example is generative AI, everybody is talking about generative AI.

It's in the press. It's everywhere. The reaction from the CEOs and CFOs is: everyone is talking about this amazing new technology, so what are we doing? What are we using it for? Are we doing experiments? And people are saying, no, it wasn't really on our radar until yesterday.

We saw it with the metaverse. All these bosses will say: apparently this thing is amazing. If we can't tell shareholders that we're using it and experimenting with it, we'll see our shares dive. So we have to use it; go off and find some use cases and go and use it.

So what happens is you typically end up with the IT and OT teams going and experimenting, say for example with artificial intelligence, and they'll [00:37:00] develop some products and use cases and bits and bobs, and then they'll say to the CEO, this is a use case, and this is where we're using it. And the CEO will go, how good is it?

And they'll go, it's quite good, actually. Up until that point, nobody has generally thought about the risk associated with using this technology and bringing it into the organization.

Brian: We're bumping up on time here, and I want to be respectful of yours. Gents, any final questions?

Jack: Yeah, I have one about the use cases, because you touched upon health, and you do a lot of work with healthcare companies. To me, healthcare is a really engrossing use case where technology can have, and is having, a really profound impact. Do you see that as the sort of leading use case where technology is having a true impact, or are there others in your work?

Matt: The use of artificial intelligence in healthcare is groundbreaking in many different ways, whether it's predictive or quantitative [00:38:00] healthcare, the use of AI to develop new vaccines in seven minutes, the use of AI to synthesize every single known protein on earth at pretty much the click of a finger, the use of AI to create new cancer vaccines and new CRISPR and gene therapy treatments, or its ability in radiology to pick up different things.

AI's use in healthcare is stunning. The FDA have now approved about 300 AI-based treatments, and the FDA actually have a separate arm where they try to fast-track AI-created treatments and cures. Regulators in the US and elsewhere are trying to speed up the approval of artificial-intelligence-based healthcare treatments.

So, stunning. However, when we look at breakthroughs elsewhere: one of my clients is Dow. [00:39:00] Using Google DeepMind's GNoME, we can create 20 million new compounds in days, and those compounds have impacts on things like semiconductors as well as every other kind of material. We've got AIs that are being used to develop new, shall we say, solutions that help us with climate.

We've got AIs that are accelerating battery development and solar panel development, and AIs that are helping with environmental studies. So while healthcare as an industry probably benefits the most from AI, because healthcare is just insanely complex anyway, the ripple effects are staggering.

And then we've got finance, obviously, with quantitative trading and decentralized finance. If you didn't see it recently, an AI recently made a 250-million-dollar [00:40:00] cryptocurrency, and about two days ago someone created a couple of bots that then got hijacked and pushed the GOAT cryptocurrency to beyond 300 million dollars.

It's a brave new world. I keep saying to people, when they ask about a future of jobs where we're all made redundant: you do realize that you can just make money, because you can make crypto. Money is just a store of value, and our store of value is fluid. So there are lots and lots of benefits, but I think healthcare is the biggest one by far, and it's the one that resonates most with individuals.

Brian: Matt, as ever, talking to you is like taking a tab of LSD and just looking at the world around us.

It was fantastic. Thanks so much for your time. [00:41:00] Thank you, content colleagues, for your awesome questions.

Matt: Yeah, likewise. Great seeing you all again. Thank you.

Omkar: Great seeing you, Matt. Hope we speak again soon.

Brian: Absolutely. We may follow up with you as we flesh out the predictions piece in the coming months, so don't be surprised.

Matt: I look forward to it.

Brian: Awesome. Thanks everybody. Thank you. Take care.

Omkar: All the best.
