Arm Viewpoints: Matt Griffin’s 2026 predictions for AI, robotics & global tech strategy
Speakers
Matt Griffin, founder, 311 Institute
Matthew Griffin is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working across the 2020-to-2070 horizon. An award-winning futurist and author of the “Codex of the Future” series, Matthew identifies, tracks, and explains the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society.
Brian Fuller, Editor-in-Chief, Arm and host
Brian Fuller is an experienced writer, journalist and communications/content marketing strategist specializing in both traditional publishing and emerging digital technologies. He has held various leadership roles, currently as Editor-in-Chief at Arm and formerly at Cadence Design Systems, Inc. Prior to his content-marketing work inside corporations, he was a wire-service reporter and business editor before joining EE Times and spending nearly 20 years there in various roles, including editor-in-chief and publisher. He holds a B.A. in English from UCLA.
Jack Melling, Senior Manager, Editorial Content
Jack Melling is a Senior Editorial Manager at Arm, where he plays a key role in managing the company’s editorial content, including blogs, podcasts, and reports. He works closely with the Arm Newsroom team to communicate the company’s innovations, particularly in fields like mobile technology, 5G, and AI. Melling has been instrumental in highlighting Arm’s contributions to the evolution of mobile technology, including its role in advancing the mobile form factor and 5G connectivity. He also provides insights into emerging tech trends, such as foldable phones and next-gen AI applications, with an emphasis on their impact on society, particularly how 5G is set to transform industries by enabling faster connectivity, smarter devices, and new use cases like smart cities and autonomous driving. Through his editorial leadership, Melling helps position Arm as a driving force behind the tech innovations that shape our daily lives. For more about his contributions, check out the Arm Newsroom or community blogs, where he frequently shares insights on the future of technology.
Omkar Patwardhan, Content Specialist, Arm
Omkar Patwardhan is a Content Specialist at Arm, where he crafts and manages engaging content, including blogs, whitepapers, videos, podcasts, and reports, for the Automotive and Client Lines of Business. With a research-driven, analytical mindset and a creative flair, Omkar has created insightful pieces across Arm Newsroom, Arm Community, SOAFEE.io, and developer.arm.com.
Kurt Wilson, Senior Content Writer
Kurt Wilson is a seasoned B2B copywriter with experience in the SaaS industry, specializing in SEM/SEO strategies. He excels in crafting compelling blogs, digital ads, emails, and case studies that drive engagement and deliver results. He joined Arm’s content team in 2024 covering Infrastructure and IoT.
Summary
In Part 2 of our conversation with futurist Matt Griffin, we look ahead to 2026 — a year poised for breakthroughs in neurosymbolic AI, real-time learning systems, quantum-accelerated models, specialized AI architectures, and next-generation robotics. Matt also unpacks shifting global AI strategies across the U.S., China, and Europe, revealing where the real competition is heading.
In this episode, we explore:
- Why 2026 will mark the rise of neurosymbolic AI, combining reasoning, LLMs, and neuroscience
- The emergence of non-situational AI that learns and updates in real time
- How these breakthroughs may push AI closer to true few-shot and zero-shot learning
- Why large foundational models may give way to specialist AIs optimized for specific domains
- The overlooked importance of quantum machine learning algorithms
- Shifts in global AI strategy — U.S. labs racing to AGI while China focuses on open-source scale
- How China’s GPU optimizations and distributed training breakthroughs could reshape competition
- Why robotics is heading into a hype bubble — and where real progress is happening
- The divergence between Western humanoid robot pricing and China’s aggressively low-cost models
- The long-term outlook on general-purpose humanoid robots (not before 2035)
- The rise of fully autonomous “lights-out” factories in Asia
- Matt’s current favorite AI tool — and why user sentiment is shifting
- The future of agentic systems in 2026 and beyond
Transcript
Brian: [00:00:00]
Hello, and welcome to another episode of the Arm Viewpoints podcast. I’m Brian Fuller, editor-in-chief at Arm, and we’re excited to welcome back our old friend from the 311 Institute, futurist Matt Griffin. Joining in the fun today is the rest of our amazing Arm Content Team: Jack Melling and Omkar Patwardhan from Cambridge, and Kurt Wilson from San Jose.
In the second episode of our two-part conversation, we look ahead to 2026 and explore what Matt believes is coming next in AI, robotics, and global technology strategy. In this episode, you’ll hear highlights including:
- why 2026 will mark the rise of neurosymbolic AI,
- how non-situational AI and real-time learning could push us closer to true few-shot and zero-shot capabilities,
- why future breakthroughs may come from specialty AIs rather [00:01:00] than big foundational models,
- and what’s real and what’s hype in robotics for 2026.
It’s a fascinating look into the near future and the technologies that will define the next wave of innovation. Let’s jump back in with Matt.
Jack:
Should we delve into the 2026 predictions now? We’ve talked a lot about 2025. I’m interested to hear what you think for the year ahead.
Matt:
Drum roll, please.
So, if we have a look at AI specifically, I’ve been doing a lot of talks recently, obviously, on the future of artificial intelligence, including with Google. Now, when we look at 2026, realistically there is still going to be a lot of talk about artificial general intelligence.
Proto-AGI is really a 2028 phenomenon, predominantly because of the scale and computing power that’s needed to get there.
Now, in the interim, in the sort of end-2026, 2027 timeframe, what I think a lot of people aren’t really going to be talking about are the developments we’re going to see in what we call neurosymbolic artificial intelligences. [00:02:00]
Now, this is the development of a hybrid sort of AI that combines large language models with reasoning to create artificial intelligences that, in researchers’ terms, genuinely think for themselves. So, what we’re doing is we combine LLMs with reasoning, but also with human neuroscience, to actually create what could be thought of as the world’s first actual thinking machines.
Now, if you think about the applications of neurosymbolic artificial intelligence and agents, we already see agents that are able to act and decide on their own behalf, pull in resources as and when they want, and even evolve themselves.
What happens as you put agentic artificial intelligence together with, say, a foundational model that can genuinely think in a very similar way to humans? So we’ve got that.
The next AI development that we see is what we call non-situational artificial intelligence. Now, the artificial intelligences that we use today have generally been trained on static data, and even though AI agents are very good at analyzing streaming real-world data, when we have a look at the big foundational models, they’ve been trained on static corpora of data. [00:03:00]
So, non-situational artificial intelligences are AIs that are able to update their knowledge and their insights in real time as these massive datasets themselves change. This is where you start edging a little bit closer towards few-shot or zero-shot artificial intelligences.
And when you think about few-shot and zero-shot AIs, these are AIs that are able to, should we say, learn new things without necessarily having to be trained on huge amounts of data. So a very simple example of this is: for a human to learn a new skill—a soft skill or a technical skill, for example—it takes us between 20 and 120 hours.
However, for an artificial intelligence to learn the same new skill, you need huge volumes of training data still. So I think in 2026 we’ll make more strides in the development of what we call non-situational AI, and we’ll edge a little bit closer to few-shot or zero-shot learning AIs. [00:04:00]
Now, in addition to that, I think we’ll also see a little bit more development around quantum. So, when we actually think about quantum and AI, we’ve got two areas of AI to think about.
We’ve got, should we say, traditional artificial intelligence models that run on quantum hardware—and that accelerates optimization outputs and so on and so forth.
Something that I’m a little bit surprised about, and I’ve been surprised about it for the past two to three years, is: when we have a look at the development of quantum machine-learning algorithms, there are some, but we don’t talk about it much.
We talk in terms of GPT, generative AI, agentic AI, AGI. But we miss out quantum, or any kind of conversations about what happens when you run AI on a quantum infrastructure and/or have quantum machine-learning algorithms running in parallel on hybrid systems.
When you have a look at the large foundational models, they keep chugging along. But I think we’re going to see the development of more specialist artificial intelligences, and the news coverage on those, I think, is probably going to be fairly limited, because I think the world’s press is quite myopic about: What’s OpenAI doing? What’s Anthropic doing? What’s Perplexity doing? What’s Alibaba doing? And what’s Google doing? [00:06:00]
But I think when we actually have a look at the overall landscape, I’m starting to sense a change in the focuses of these large AI labs.
So, for example, if you have a look at the companies that are expressly trying to develop AGI, it’s now really Google DeepMind and it’s OpenAI.
When you have a look at Anthropic, Anthropic seems to be doubling down more on, shall we say, software systems and so on and so forth. So I think Anthropic is starting to specialize, because I think maybe they’re realizing that they just can’t outspend OpenAI or Google—bearing in mind Google has that full stack, including the TPUs.
Now, when you have a look at Amazon, I think Amazon is obviously investing heavily in different artificial intelligence companies, like Microsoft did, but I think they’re running a Microsoft model, where they’re looking at compound artificial intelligence, where they work with lots of different labs, and then they put them through Bedrock.
Microsoft is now racing much faster to embrace as many AI models as it can, which is what they’ve done in the past with cloud. They would be the company that you could run anything on: it doesn’t matter what service you want to use; we, as Microsoft, support it, and you can use it on our Azure platform. [00:07:00]
And then when you have a look at the Chinese… yeah, so I think we typically don’t pay as much attention to the right things in China as we should.
So, for example, two breakthroughs that we’ve actually seen in China are: about nine months ago, we saw Chinese researchers actually developing generative artificial intelligences across distributed data centers, which is something that we can’t really do in the US.
But then, about three weeks ago, Alibaba found a way to pool together GPUs so you could run multiple learning instances in parallel on GPUs, and they reduced the number of GPUs that they needed to train some frontier models by 82%.
Now, when you have a look at China, the things that strike me about Chinese artificial intelligence and the general technology market are: the Chinese are not very vocal with their quantum technology developments—in terms of quantum computing—bearing in mind how they talk about other things.
And when we look at AGI, there’s not a huge amount of talk in China about developing AGI.
When you have a look at the CCP’s 2035 economic plan for the country, China seems to be going a different way to the US. In the US, you’ve got two frontier labs that are very motivated to develop AGI. [00:09:00]
Whereas in China, I think they’re developing AGI in the shadows. But the CCP and the large tech giants in China are fundamentally trying to box the United States players in by developing very good open-source models that people in Africa, for example, will actually write software and services on top of.
But then, in addition to that, they’re trying to actually increase the amount of usage. So in China, they want a million artificial intelligence apps by the end of the year—which isn’t very long—and in the CCP’s 2035 economic plan, the CCP want 90% of companies in China to be aggressively using artificial intelligence to drive the AI economy.
So, whereas the United States labs are very much going in on research frontier models and then thinking about revenues—Where do we really get our money from?—China’s almost flipped that to: open-source models—this is where we get our revenue from—and then we’re developing AGI in the back room. [00:10:00]
Brian:
So, one thing we haven’t talked about that’s very AI-influenced is robotics.
Matt:
Yeah.
Brian:
How do you see robotics design evolving in 2026?
Matt:
So, there’s lots of different types of robots. Obviously, there’s a huge amount of focus on humanoid robots.
Now, when we have a look at the humanoid robot market, Elon Musk famously said, about three months ago, “I think that the humanoid robotics market will be worth $25 trillion,” because from an ambition perspective, he sees humanoid robots especially as being able to automate all labor.
And Elon Musk basically talks about a future which he thinks is about fifteen-ish years away, where work will be optional, where labor will be abundant, and where [00:11:00] money will be irrelevant.
But when you have a look at the development of humanoid robotics, really you’ve got some quite interesting European and American players. So in Europe, we’ve got Figure, and then in America you actually have Optimus, and others.
But one of the biggest problems that we see for Western humanoid robots is that in China, they’re developing humanoid robots now that have a price point of about $5,000 to $15,000. And when you have a look at Optimus—and you guys will know this much better than I—but when you have a look at Optimus, the latest AI6 chipset that Tesla’s co-developing with TSMC, that’s $5,000 alone.
When you think about Western humanoid robots, they’re still going to be sitting at that sort of $30,000 mark. And one of my clients is Samsung, and quite a number of my clients are trying to develop domestic robots that go into homes and do your laundry, serve you a glass of wine, set the table, load the dishwasher, and everything else.
We’re still going through this hype cycle, and when I speak to people within the industry, they say general-purpose humanoid robots are still far away—even when you have a look at training robots in simulation and so on and so forth. [00:12:00]
So, when we think about robotics, I think you’re going to continue to see robots doing general-purpose work in factories and warehouses. So, for example, with OMI and BYD, we now see the development of the world’s first fully dark, autonomous factories, where there are no people—they’re completely lights out.
I can show you a robot that can evolve its own code, and then I can show you an artificial intelligence that will then develop and design new kinds of robots that get spat out on a 4D printer and walk off the printer.
But in terms of actually adopting these technologies, they’ve still got a relatively high price point. They are still relatively constrained in what they do. And then you’ve got things like cyber issues, reliability issues. You’ve got governance and regulatory issues as well.
I don’t see robotics—I don’t see humanoid robots—really making a dent in the consumer market until at least 2035. Even if they’re very good and very cheap, people just might not want a robot in the house. How many people actually have a Roomba robot in their house? Yeah, there’s a fair number, but it’s not loads.
Robotics has been very interesting to watch, and you see this kind of bubble forming around humanoid robots. But I think it’s a much weaker bubble than we see with artificial intelligence. [00:13:00]
Brian:
So we’re probably bumping up on time, and we had a whole raft of lightning-round questions to pepper you with, but I’m only going to do one.
Matt:
Okie-dokie.
Brian:
And that is: what is your favorite, most useful AI tool at the moment?
Matt:
So, I’d actually say Gemini now, which I think is interesting, because I’ve been seeing this switch globally as I’ve been traveling around the world.
If you step back, say, six to eight months, then outside of specialist use cases like software development or image generation, I think the vast majority of people would have said of the foundational models, “We use ChatGPT.”
But when you have a look at Gemini 2.5 as well as Gemini 3.0, there is the general sentiment within the marketplace, among users and investors, that Google has now caught up with OpenAI and is now slightly ahead.
And so I’ve been… traditionally, I would use a mix. So I use Claude from Anthropic, I use Perplexity, I use Alibaba Qwen, I use a bit of Llama, and so on and so forth. [00:15:00]
But increasingly, in terms of my go-to AI just for general stuff—and I still don’t trust the output of these AIs, and you still have to know how to actually use them—increasingly, I’m leaning a little bit more on Gemini than on ChatGPT.
Brian:
As ever, probably the most valuable hour we’ve spent in recent time. Fantastic. Thank you.
Omkar:
I think I could keep on talking. Yeah, I feel like we could do another hour. Can we add one more hour, Brian, please? After this, right now. It’s like Christmas coming early.
Matt:
Let’s see. Have a great Christmas, all of you.
Brian:
You too. Enjoy yourself.