Arm Newsroom Podcast

Arm Viewpoints: The Rise of Hybrid AI

TECHnalysis Research President Bob O’Donnell on the “great AI repatriation,” distributed AI architectures and what CIOs must do to prepare.
The Arm Podcast · Arm Viewpoints: Bob O'Donnell interview episode.v1.0

Listen now on:

Apple Podcasts · Spotify

Summary

Artificial intelligence may have exploded into public consciousness through cloud-based tools, but its future is increasingly distributed.

On the latest episode of Arm Viewpoints, Brian Fuller, Editor-in-Chief at Arm, sits down with Bob O’Donnell, Founder and President of TECHnalysis Research, to discuss findings from his latest industry study on hybrid AI and why enterprises are rethinking cloud-only strategies.

Hybrid AI extends the hybrid cloud model into the AI era. Instead of running workloads exclusively in public cloud environments, organizations are increasingly distributing AI across three tiers: public cloud, private data centers, and client devices such as AI PCs and smartphones.

One major driver is what O’Donnell calls “the great AI repatriation.” Enterprises are moving certain AI workloads back on-premises due to cost concerns, data gravity, and the need to fine-tune models using sensitive corporate data. GPU-equipped servers and evolving enterprise AI infrastructure are making that shift more feasible.

At the same time, edge AI is accelerating. Nearly 60 percent of organizations surveyed are extending AI to the edge, particularly for latency-sensitive workloads such as manufacturing automation, robotics, and computer vision.

AI PCs also play a growing strategic role. While adoption is still early, O’Donnell sees agentic AI workflows — intelligent assistants running locally while tapping cloud and enterprise data — as a major inflection point. Advances in NPUs, Windows ML, and model optimization techniques are laying the groundwork.

As the world continues to adopt and adjust to AI, the takeaway for CIOs and IT leaders: plan now for a distributed AI architecture. Hybrid AI isn’t just a technical shift — it’s a strategic one. Organizations that design infrastructure, connectivity, and application development with distribution in mind will be best positioned as AI scales from the cloud to the client and everywhere in between.

Listen to the full conversation on Arm Viewpoints.

Speakers

Bob O'Donnell


Bob O’Donnell, the president, founder and chief analyst at TECHnalysis Research, has a lengthy, multi-faceted career in the technology business. He is widely regarded as an expert in the technology market research field, and his original research and advice are used by executives in large technology firms all over the world. A little more than a year after the firm’s founding, O’Donnell made it onto ARInsights’ list of most influential analysts, and on several occasions he has made the top 10 of that list. He is the highest-ranking independent and small-firm analyst in ARInsights’ Vendor Advisor rankings, which list analysts who influence company strategy, and he was recently listed as one of the Top AI Experts by ARInsights. In addition, TECHnalysis Research was named a Best New Entrant in the Institute of Industry Analyst Relations’ Analyst of the Year Awards for 2015. O’Donnell was listed as a top social media influencer on 5G in 2019, 2020 and 2022 by Onalytica, which interviewed him at the end of 2021; the same firm named him a top social media influencer on virtual reality in 2016 and 2017. He is also a prolific author and content creator, having written or produced over 1,000 columns and podcasts over the last 11 1/2 years across numerous sites.

Brian Fuller, Editor-in-Chief, Arm and host


Brian Fuller is an experienced writer, journalist and communications/content-marketing strategist specializing in both traditional publishing and emerging digital technologies. He has held various leadership roles, currently as Editor-in-Chief at Arm and formerly at Cadence Design Systems, Inc. Before his content-marketing work inside corporations, he was a wire-service reporter and business editor, then joined EE Times, where he spent nearly 20 years in various roles, including editor-in-chief and publisher. He holds a B.A. in English from UCLA.

Transcript

Brian: [00:00:00] Welcome to another episode of Arm Viewpoints, the podcast where we bring you technology insights at the intersection of AI and human imagination. I’m Brian Fuller, Editor-in-Chief at Arm, and today I am joined by someone who truly needs no introduction in the tech industry: Bob O’Donnell, president and founder of TECHnalysis Research.

Bob has spent more than two decades analyzing the technology landscape, from his early days in tech publishing to nearly 15 years at IDC and now leading his own independent research firm. In this episode, we dig into Bob’s latest research on one of the most important shifts happening in AI today: the rise of hybrid AI. We explore what hybrid AI really means in practice, why enterprises are starting to rethink cloud-only strategies,

what’s driving the great AI repatriation, [00:01:00] distributed AI architectures, AI PCs, agentic workflows, and what CIOs and IT leaders should be doing now to prepare for a more decentralized, heterogeneous AI future, and much, much more. It’s a wide-ranging conversation that cuts through the hype and gets to the real architectural, economic and strategic decisions organizations face as AI scales from the cloud to the client and everywhere in between.

So the man, the myth, the legend. Bob O’Donnell, thanks for joining us. How are you, Bob?

Bob: I’m good, Brian. Thank you so much for having me. It’s great to be here.

Brian: You and I have known each other for more years than we’re going to admit on this podcast, but for those people who might not be familiar with you, give us a little bit about your background, your work at TECHnalysis, that sort of stuff.

Bob: Bob O’Donnell, president of TECHnalysis Research, [00:02:00] and I’ve been a tech industry analyst for a little over 25 years. Started, clearly, as a young child. But prior to that I was in computer publishing, so I did some of the Mac publications, PC publications, online publications, the era of the beginning of online digital publishing for computer mags.

And then I got started working at IDC, and I was at IDC for about 14 and a half years. And then I started my own firm, which is TECHnalysis Research, and I’m now in year 13 of that, I’m happy to say.

Brian: Interesting that you and I both shared that publishing background, and there were points in that particular era of my career where I thought about going the analyst route.

Why did you choose that path?

Bob: Yeah, I know, it’s a great question. So I actually had a buddy that I used to work with at one of the publications that I was at. He and I worked together when we were at MacWeek, and he went to Forrester, I believe it was at the time. [00:03:00] And I ran into him at an event or something and I said, hey, how is it?

And he says, oh, it’s great, ’cause you really get to dive deeper into the products and the strategy and everything else. And I’m like, oh, that’s for me; that’s the kind of thing that I wanna do. And happy to say, it’s one of those jobs where sometimes you know right away, either to the positive or to the negative.

And this is one of those things where it just clicked right away. And I was like, wow, I think I found my home. I think I know what I wanna finally be when I grow up.

Brian: Yeah.

Bob: And so that’s how it happened and since then it’s been a great run.

Brian: Indeed it has. Then you went out on your own after IDC, which to some people would be a little nerve-wracking, but you’ve knocked it out of the park.

Talk a little bit about that transition.

Bob: Yeah, no, I had reached a point at IDC where I was doing well. I had a good title and everything else, but I also saw that I had hit a bit of a ceiling in terms of where I was gonna be able to go, and my interests had broadened. I wanted to get [00:04:00] into more. At IDC I ran all the device research, so PCs, smartphones, and I had just started the wearable research, and I love all that stuff; it continues to be a huge part of what I do.

But I also saw a lot of other opportunities out there and I wanted to really stretch. And one of the challenges when you’re at a big firm is you tend to be in a very deep but clearly defined stovepipe, meaning, hey, you can talk about this, but don’t talk about that, ’cause that’s somebody else’s swim lane.

And I was like, you know what? That’s not what I wanna do. I wanna broaden myself out. And a lot of the business of analyst relations is really based on relationships that you have with folks, senior execs at a lot of companies, as well as the folks who run these programs. And I felt, I think people are willing to talk to Bob, not just Bob from IDC.

And I’m like, if I’m ever gonna do it in my life, now’s the time. But I felt like I had the opportunity and it’s been fantastic.

Brian: Your insights have always been fantastic, always ahead of the curve. And [00:05:00] speaking of insights, you recently published a report on hybrid AI. Tell us a little bit about that.

First of all, how do you define hybrid AI and what did you learn from how enterprises interpret that term?

Bob: So one of the things that I determined was gonna be important when I started TECHnalysis was to do my own independent research. There’s a lot of smaller independent analysts, but not a lot of them do in-depth research.

And I always felt that would be a calling card for me to be able to distinguish myself, and it’s proven to be the case. And so every year I tackle a topic or two that I think is gonna be relevant to a lot of people in the industry. And obviously AI and generative AI has been a huge factor, and I’ve looked at that.

But in particular this year, what I wanted to look at was this idea of hybrid AI. And by hybrid AI, I’m referring to a concept similar to hybrid cloud, where you leverage the public cloud and then you have your local private cloud. And then with AI, it’s a similar kind of thing, where you’re doing AI workloads in the [00:06:00] cloud.

Some companies are starting to do them within their own private cloud enterprise. And then of course, on the client; that’s the third tier of this hybrid AI infrastructure or idea. And that’s really what I wanted to dig into: how were companies thinking about trying to do AI? Because what we’ve seen, pretty much early on with generative AI overall, is you get the best results when the tools, the models are customized to your own data. So if I’m a big organization, I wanna be able to leverage all my corporate data, but I don’t necessarily wanna share it in the cloud. And so for a lot of different reasons, it just seemed logical to see more of this workload being divided.

And then of course, the third big piece: as we started to see this notion of an AI PC and AI-enhanced smartphones as well, the compute capabilities came along at the right time. So we saw the development of these big transformer models, which originally, of course, had to run in the [00:07:00] cloud. But then we recognized, hey, small language models and other things could be run locally on device.

And then you realize, hey, wait a minute, I can leverage some of this stuff within my own data center as well as the public cloud. And that’s how the pieces started to fall into place for hybrid AI. And it’s fascinating, I have to tell you, because at CES I was at all the major tech keynotes, which ironically were all from chip companies, which is crazy.

Except Lenovo had a big one, of course. But every single company at that event used the phrase hybrid AI. It was clearly the talk of the big tech leaders in terms of where they’re headed, because they see this happening, right? A lot of people initially got excited about doing these workloads in the cloud ’cause that’s where the tools were.

ChatGPT obviously was the first one; then we’ve seen Gemini, Anthropic and all the other ones that have become a big deal. But we’ve also, again, seen more stuff start to happen locally, and now that is really in full momentum. And the [00:08:00] implications of that are what I really wanted to get into.

I know this is a long answer to a short question, but it was understanding those pieces, then trying to say, what does this mean if I’m an organization and I need to figure out how I wanna create an AI strategy that’s gonna leverage all the tools that are available to me and leverage my own data in a way that I am comfortable with?

Brian: You always have expectations when you field a survey, right? What jumped out at you and surprised you?

Bob: A couple of things. One of them was how willing organizations were to start to bring some of these workloads from the cloud onto their own premises. And it’s been helped by a couple of things. We’ve got Jensen Huang of Nvidia talking about the enterprise AI factory, right?

He talked about AI factories, and an enterprise AI factory, of course, is doing this on your own: basically buying GPU-equipped servers for your own organization, or a private cloud of some sort that you’re using, a colo center, for [00:09:00] example. So that’s been a big thing, but I was surprised at how willing companies were to make those investments.

’Cause those are big investments, and the skill sets for that are a big issue, because not a lot of people have those skill sets. The other thing that really surprised me is the fact that we saw a lot of interest in AI PCs from a strategy perspective. Now look, the reality is, AI PCs, we thought initially, were gonna really become this huge driving factor.

They’ve done okay, but they haven’t actually taken the world by storm just yet, in part because we haven’t seen a whole lot of AI applications running on PCs. We’ve seen some models, but not a lot of the applications. But despite that concern, the number of people who said they were strategically important for the future was staggering.

So that really surprised me as well, because, again, sales haven’t really translated quite yet, but it gives me hope that we will see those sales over the [00:10:00] next year or two.

Brian: You used a phrase in the report that I really love the great AI repatriation.

Bob: Yes.

Brian: Define that and tell us a bit about what’s driving organizations to pull AI workloads out of the public cloud.

Bob: Ten-plus years ago, momentum seemed to be towards, we’re gonna put everything in the cloud, we’re gonna close all the local data centers, and that’s just the way it’s gonna be. It’s almost back to the mainframe-terminal model of way back when. But then a lot of organizations realized, A, the costs were super high to do that.

There were a number of other challenges with regard to that. So we’ve seen some companies bring some of these workloads back, and with AI workloads in particular, we’re starting to see this. And it’s for a couple of different reasons. One, of course, is cost. And in fact, the number one reason why companies said they’re starting to move some of these AI workloads back is because of the costs involved.

The other thing is security: control of the data. What we’ve learned with AI [00:11:00] is it’s really important to have your own data be used to fine-tune the models. Initially we thought companies were gonna build their own models. We found that really doesn’t make much sense. But what you can do is fine-tune existing models and then leverage your own data, and that becomes a kind of classic case of what we call data gravity, where you wanna do the computing where the data lives.

The most precious data that most organizations have is still behind their own firewall. That’s the most important stuff, which in turn becomes the most valuable stuff when you want to fine-tune or train an AI model and then build applications around that. So it’s that data gravity, the costs and what have you that have really given companies this sense of, hey, this is what we need to do.

We’ve also seen, again, the rise of these GPU-equipped servers from all the big server guys: the Dells, the [00:12:00] HPEs, the Lenovos, the Super Micros, the Ciscos. All these guys are building these servers, and companies are realizing, hey, we can put them in place, run some of our workloads here. And that’s what allows that repatriation, because the things they used to only be able to do in the cloud, they’ve now created, or are in the process of creating, the infrastructure to do themselves.

And that’s where that notion of the great AI repatriation comes from: it’s the beginnings of that movement.

Brian: So you’ve preempted my next question a little bit, which is, we’re seeing a balanced distribution across cloud, private data centers and the edge. What does a fully realized three-way hybrid architecture actually look like in practice?

Bob: That’s exactly what you described, Brian. And look, I think we’re gonna see every variation known to mankind. And what do I mean by that? We are certainly gonna see some AI workloads that stay in the cloud, ’cause it just makes sense: the tools are there, and there’s not that much [00:13:00] data, so egress charges for moving data back and forth aren’t a big factor.

’Cause one of the reasons for the repatriation, too, is you’re tending to move a ton of data when you’re training and fine-tuning these models. That’s a lot of data movement, and a lot of times there are very expensive charges to do that back and forth. So companies are realizing that they can do some of that stuff on their own, but they’re gonna do some in the cloud.
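(Editor’s note: a toy back-of-the-envelope calculation of the egress economics Bob describes. The per-gigabyte price and cycle count below are hypothetical placeholders, not any provider’s actual rates.)

# Illustrative only: why repeatedly moving training data out of the
# cloud adds up. All numbers are made-up placeholders.
DATASET_GB = 10_000            # ~10 TB of fine-tuning data
EGRESS_PER_GB = 0.09           # hypothetical $/GB egress fee
CYCLES_PER_MONTH = 4           # repeated training/fine-tuning runs

monthly = DATASET_GB * EGRESS_PER_GB * CYCLES_PER_MONTH
print(f"illustrative egress bill: ${monthly:,.0f}/month")  # $3,600/month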

And then we’re just seeing the beginnings of running some of this on the device. So, back to what I was starting to say: some workloads are gonna stay in one environment. Some will just be in the cloud, some will just be in the enterprise, some will just be on the client. But we’re starting to see companies go, hey, maybe I can start to do some of this work on a client, but then tap into the public cloud when I need to, or tap into my own enterprise.

So I’ll give you an example of what I think we’re gonna see happening here in 2026: this notion of an AI browser. At first it was like, oh, wow, what’s the big deal? I [00:14:00] launch a browser and instead of going to google.com, it goes to chatgpt.com or perplexity.com.

It doesn’t seem that big of a deal. But what these browser companies are starting to do is say, hey, we can start to build models, small language models, and orchestration engines that come along with the browser. So when I download that browser, it’s always being updated, ’cause we all know browser updates happen, seems like, a couple times a day.

That means the model’s being updated. But this orchestration piece of it means you could have something where you type in a query or some kind of request to the local engine, and it says, oh, actually I can answer that one on my own, not send it to the cloud at all. Or it can say, I can do a portion of this here on the device, but another part I’m gonna have to send to the public cloud, ’cause I simply don’t have access to that. Just that ability to split those workloads could have a huge impact in terms of the [00:15:00] power requirements. Because one of the things we’ve been hearing about as well that’s driving hybrid AI is the fact that we can’t necessarily count on these big data centers having access to all the electrical power that they need.

So another reason why companies are thinking about this is, hey, if I have my own private GPU server farm and I can control the power to that, then I at least know I’ve got that. But longer term, it also matters in terms of electricity usage. So imagine that scenario of the AI browser, like I said, between the client and the public cloud.

You could start to write custom applications that do the same thing, but between a client device and the enterprise data center, ’cause maybe all your custom CRM data is within the walls of the data center. But you’ve got a local tool on your client that does some of it, then hits the data center. And then eventually, of course, you could do a three-way distribution, where some applications can do some stuff in the public cloud, do some stuff locally, and then do some stuff in the enterprise. So we’re building the tools now and, most importantly, the [00:16:00] standards to enable this. So things like MCP, which is called Model Context Protocol. It’s a standard by which you can distribute AI workloads and hit various AI servers.

And the physical location of the server isn’t really relevant anymore. And so that’s enabling this notion of a distributed heterogeneous architecture where, again, you can do the different compute elements in different areas, depending perhaps on where the data lives, classic data gravity, and/or where the most appropriate compute capabilities are.

And eventually it’s gonna be, where is there power? So all of these factors come into play, and that’s why I think we’re starting to see so much momentum behind the overall concept of hybrid AI, and that’s how these workloads are starting to be built.
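(Editor’s note: to make the orchestration idea Bob describes concrete, here is a minimal, hypothetical sketch of a local-first router. The heuristic, model stubs and topic list are placeholders of our own, not any shipping browser’s actual logic.)

# A local-first router: answer on-device when a small model suffices,
# otherwise forward the query to a cloud endpoint.
LOCAL_TOPICS = {"summarize", "rewrite", "translate"}  # toy on-device skills

def handled_locally(query: str) -> bool:
    # Stand-in for a real orchestration model's routing decision.
    return any(topic in query.lower() for topic in LOCAL_TOPICS)

def local_slm(query: str) -> str:
    return f"[on-device small model] {query}"

def cloud_llm(query: str) -> str:
    # A real browser would call its cloud service here.
    return f"[public-cloud model] {query}"

def answer(query: str) -> str:
    return local_slm(query) if handled_locally(query) else cloud_llm(query)

print(answer("Summarize this page"))        # stays on the device
print(answer("Plan a two-week itinerary"))  # escalates to the cloud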

Brian: Are you seeing any patterns yet in workload placement?

Bob: No, it’s still early days, to be clear, right? We’re certainly seeing a little bit [00:17:00] of the client-and-cloud combinations. I think companies that are still building their own custom applications are gonna leverage the client and the local data center. But I think we’re gonna see a lot more of that.

Look, you could argue, if you think about it, that what Apple calls Private Cloud Compute is basically a hybrid AI model, because they recognized that there’s a certain amount of AI they could do on the client, but the other part they were gonna have to go out to the cloud for. Now, in their case, it was a walled-garden sort of cloud environment.

But essentially that’s a hybrid AI model as well. So we’re gonna see other variations like that starting to be built. But in terms of the exact way it’s gonna be split up, I think it’s still a little early to tell. Like I said, I fully expect to see different variations of any two or all three get created.

Brian: So in your report, 60% of organizations are extending AI to the edge, right? We’ve talked about that distribution. [00:18:00] What’s the primary driver? Is it latency? Is it privacy, cost, user experience?

Bob: Not every workload is ideally suited for every environment; some workloads clearly are gonna be better on the edge. The easy ones to knock off are things where latency is an issue.

That’s, for example, in a manufacturing environment where the sensors and the equipment are there: you wanna run those AI workloads locally, because just the time to go back and forth to the cloud, even though it’s not a huge amount, can make a real difference in certain types of applications. So certainly latency is an issue, and overall response time.

In terms of the speediness of the calculations, the amount of data that has to be transferred is also a factor. In the study, one of the things I asked about is, hey, which workloads are you running in which environments? And I have a breakdown of that with a fair amount of detail, by company size and by industry and a whole bunch [00:19:00] of stuff.

And so things like robotics, manufacturing automation, computer vision, all of those kinds of things have typically been things you’ve thought about running on the edge. And no surprise, now that they’re being AI-enhanced or AI-equipped, those things are running in edge environments, and I think over time we’re gonna see more and more of them.

But right now it’s things that you would normally expect that are running on the edge. One of the biggest takeaways from the entire study, though, was what people expect over the course of the next couple of years. It’s primarily in the cloud now, of course, not a surprise, but within a couple of years they expect it to be much more of an almost equal split: still a little bit more in the cloud, but significantly more on the client and then in the enterprise data center.

Brian: So earlier you mentioned AI PCs. Going forward, what kinds of applications or experiences will these devices enable that we’re not [00:20:00] really seeing yet? How would they differ from the traditional notion of a PC?

Bob: Yeah, no, it’s a great question, and I think the big thing that’s gonna happen locally is agents. We’ve been hearing about agentic AI, right? So it just makes sense to have an agent that’s doing something on your behalf run on your device. And in fact, I think that’s gonna be the biggest application for AI PCs for the next couple years.

We’ll see other things as well, but what I think is gonna be really important is this idea of an agent doing things on your behalf. Now again, many times those agents will request data from the public cloud to address a certain query. If you say, hey, book me a flight to wherever, it’s doing a little bit of its own thing, but then it’s gonna go to the websites of the airlines and other things that it knows you like to use.

It’s gonna compile that, put it together, and it’s gonna store it on your calendar. That’s a classic example of an agentic workflow where you’ve got [00:21:00] some compute running on the device and then it’s leveraging some things from the public cloud. The other thing might be, again, maybe I ask it to book a meeting.

And in that case, if I’m in a big organization where I have to specify I want this meeting room at this time, it’s gonna leverage data that’s probably stored within the local enterprise data center. It might be in the cloud, but again, I’ve got an agentic workflow that starts on the client device and then runs in the enterprise data center.

I think it’s gonna be a lot of agentic things that happen, and then slowly but surely we’re gonna start to see more general-purpose applications run on AI PCs, because a couple of different things are happening. Number one, of course, we’ve seen more AI compute, NPUs. I do have a few comments on NPUs in just a sec, but NPUs have been a part of this.

We’re seeing a lot more usage of GPUs, and the fact of the matter is that CPUs still play a really critical role in doing [00:22:00] all of these kinds of workloads. So we talk about a distributed environment across public cloud, private cloud and client device, but there’s also a distributed compute environment within the SoC, between the CPU, the GPU and the NPU.

So you’ve also got that going on. So that’s gonna help drive some more things. Another huge factor is, especially in the Windows world, we, there was not a single standard to be able to leverage all the various NPU architectures. ’cause one little niggly technical detail most people didn’t realize is that.

When Intel did an NPU and then ARM, or excuse me, A MD did an NPU and Qualcomm did an NPU. They were all completely different, which means I’m a software developer and I wanna write an application that leverages the NPU.

Brian: Yep.

Bob: I had to write three different versions of that NPU function for each architecture.

Obviously that is not sustainable. So finally Microsoft dealt with the issue, and late last fall they released something called Windows ML. And Windows [00:23:00] ML becomes this sort of abstraction layer that lets a general-purpose developer write something, and then it interprets and takes care of the translation to the various NPU architectures.

That was absolutely essential before we could start to see general-purpose PC AI applications. And it takes developers a couple of months to build tools that leverage that. So that’s why, starting here in ’26, we’re gonna see more of those kinds of applications. I think that’s gonna be a big piece of it.
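(Editor’s note: Windows ML itself is a Windows-native API, but the abstraction-layer idea Bob describes can be sketched with ONNX Runtime, which the new Windows ML builds on: one code path declares a priority list of hardware backends and the runtime falls back as needed. The model path below is a placeholder.)

import numpy as np
import onnxruntime as ort

# Ask for NPU/GPU backends first and fall back to CPU if they are not
# installed; the application code itself stays hardware-agnostic.
preferred = ["QNNExecutionProvider",   # e.g. Qualcomm NPUs
             "DmlExecutionProvider",   # DirectML (GPUs on Windows)
             "CPUExecutionProvider"]   # universal fallback
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)  # placeholder model file

# Feed a dummy tensor shaped like the model's first input
# (symbolic dimensions become 1 here, purely for illustration).
meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in meta.shape]
outputs = session.run(None, {meta.name: np.zeros(shape, dtype=np.float32)})
print("ran on:", session.get_providers()[0])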

The other thing is we’re also seeing much more clever quantization and model-distillation techniques, whereby you can take these huge models and reduce them down to a size that can actually fit and run on a PC. And again, then you throw in things like MCP and the ability to do some work locally and some in the cloud in, again, a standardized way. I don’t wanna have to create a special means to communicate with other servers. All of those pieces lay the [00:24:00] groundwork for the ability to build these AI PC applications.
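(Editor’s note: the quantization techniques Bob mentions can be reduced to a toy example. Real toolchains, and distillation, are far more sophisticated; this sketch only shows the core trade of precision for size by storing weights as 8-bit integers plus a scale factor.)

import numpy as np

weights = np.random.randn(4, 4).astype(np.float32)  # stand-in for one model layer

scale = np.abs(weights).max() / 127.0                    # map the float range onto int8
quantized = np.round(weights / scale).astype(np.int8)    # ~4x smaller than float32
restored = quantized.astype(np.float32) * scale          # approximate original values

print("worst-case error:", np.abs(weights - restored).max())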

Brian: So what’s the single most important takeaway you want CIOs and IT leaders to draw from the report?

Bob: The single biggest thing is to start planning out a hybrid architecture and figuring out how you create those distributed apps. Look, it’s not easy to do, right? There are some tools, but there’s still a lot of holes to be filled. And there are security folks who’ve said, hey, current versions of MCP have some big security holes that have to be addressed.

So there are issues that have to be overcome, but you’ve gotta start planning; these things don’t happen overnight. Start thinking about how you can build the infrastructure that you need. By the way, all of a sudden networking and connectivity become way more important. I’ve argued, in fact, that in the case of a PC, a 5G modem could be just as important as any other element in the PC, because if I am in a hybrid computing environment and I wanna be able [00:25:00] to leverage my AI tools, regardless of where I am or what connectivity I have, I wanna have an always-on connection.

Especially if I travel around, as I do and a lot of folks do, you need to make sure that connectivity is there. ’Cause if I’m running something that can do some local compute but then needs to go to the cloud for other elements, I want that connection to always be there. So I think it’s gonna lead to a renaissance of interest in cellular-equipped modems on PCs. So I do think that’s gonna be important, and that’s something for organizations to think about. Then again, what level of commitment do they wanna make to these GPU-equipped servers? These are not inexpensive purchases.

These are six-, seven-figure types of purchases that these companies have to commit to. That’s a big deal. And then there’s getting the expertise in-house. One of the biggest challenges companies cited is, hey, we don’t have the skill sets yet. Now, to be fair, a lot of these AI skill sets are brand new for everybody, so there’s just a lot of education that has to happen. But that’s, again, part of the planning [00:26:00] process.

But the bottom line is, it’s definitely a world where there’s opportunity to leverage the cloud for what it’s best at, leverage the enterprise data center for what it’s best at, leverage the client for what it’s best at, and then figure out ways that those three different elements can work together. So let me put those elements in place, and then let me get my in-house custom application developers working on ways to leverage these pieces, and then, of course, whatever third-party tools become available, perhaps leverage those as well.

So those are all the pieces I think companies, IT managers and C-level execs need to think about in terms of figuring out a strategy to create the capabilities to do hybrid AI.
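(Editor’s note: for teams starting the planning Bob recommends, here is a minimal Model Context Protocol server sketched with the official MCP Python SDK. The meeting-room tool is a hypothetical stand-in for the kind of enterprise data an on-device agent might reach over a standard protocol; it is not from the report.)

from mcp.server.fastmcp import FastMCP  # pip install mcp

mcp = FastMCP("room-booking")  # hypothetical enterprise tool server

@mcp.tool()
def find_room(capacity: int) -> str:
    """Return a meeting room that seats at least `capacity` people."""
    rooms = {4: "Huddle-2", 10: "Boardroom-A"}  # stand-in for calendar data
    for seats, name in sorted(rooms.items()):
        if seats >= capacity:
            return name
    return "No room available"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; the caller's location doesn't matter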

Brian: A lot of complexity there, but also a lot of promise. Bob, where can listeners access the report?

Bob: They can find it on my website.

It’s on the homepage. There is a free version at technalysisresearch.com, that’s T-E-C-H-N-A-L-Y-S-I-S, and then the word research, dot com. Actually, you can even go to just [00:27:00] technalysis.com; it’ll redirect you. And right there on the homepage you’ll see Hybrid AI. There’s a free version, a summary of the first 20 or so slides, that covers all the key highlights.

Brian: Let’s wrap up with a little off-topic lightning round. Four quick questions; keep the answers to one or two words. One book that changed your life?

Bob: Zen and the Art of Motorcycle Maintenance.

Brian: Oh, that’s a good one. William Least Heat-Moon, I think.

Bob: Can’t remember the author.

Brian: Alright, last meal.

Bob: Oh, it’s gotta be a big juicy ribeye.

Scalloped potatoes, maybe some Brussels sprouts, a big, beautiful Cabernet. I’m good.

Brian: Favorite can’t-live-without technology?

Bob: Whew. I guess I would say my PC. That’s just what I am; that’s my device. I love my smartphone too, but I’m old school.

I still go back to that PC. That’s where I like to get stuff done. [00:28:00]

Brian: Favorite thing to do when you’re not divining the future of technology?

Bob: Playing music, unquestionably. I’ve got a band, and that’s what I love to do.

Brian: Rock on, Bob. What a long, strange trip it’s been.

Bob: Indeed.

Brian: Bob O’Donnell, thank you very much.

You’ve been very generous with your time. We appreciate it. So thank you.

Bob: Thank you, Brian. Appreciate it.

Subscribe to Blogs and Podcasts
Get the latest blogs & podcasts direct from Arm