Geof Wheelwright: Take a trip inside the brain, and not just any brain, but specifically an artificial brain that is being built to mimic the operation of the human brain. It’s called the spiking neural network architecture, and we’re going to look at an initiative associated with it called SpiNNaker 2, in an episode we call Building a Brain with 10 Million CPUs.
The goal of this work is to advance the knowledge of neural processing in the brain. With me today to explain all of this are Steve Furber, ICL Professor of Computer Engineering in the School of Computer Science at the University of Manchester, and Christian Mayr, Professor of Electrical Engineering at the University of Technology Dresden.
Steve Furber is the ICL Professor of Computer Engineering in the School of Computer Science at the University of Manchester. Prior to moving to academia he worked in the hardware development group within the R&D department at Acorn Computers, and he was a principal designer of the BBC Micro and the Arm 32-bit microprocessor.
Having written my first book about the BBC Micro some 38 years ago, I’m very excited to talk to Steve. He holds many fellowships and awards such as being a Fellow of the Royal Society, The Royal Academy of Engineering, The British Computer Society and the Institution of Engineering and Technology.
Meanwhile, Christian is a Professor of Electrical Engineering at the University of Technology Dresden, heading the chair of highly parallel VLSI systems and neuromorphic circuits. His scientific credits include the world’s first neuromorphic system-on-chip in 28-nanometer CMOS, several novel mixed-signal ICs aimed at the interface between nerve cell tissue and electronics, as well as foundational work on the modeling and circuit implementation of synaptic plasticity.
He’s the author or co-author of more than 70 publications and holds three patents. Welcome to you both.
Both: Thank you.
Geof Wheelwright: Perhaps we can start with what this work is about. Many of our listeners may not be familiar with the SpiNNaker project and spiking neural networks. So why don’t we kick things off by providing an overview of what this is and why it’s important?
Steve Furber: Okay. So SpiNNaker, as you said earlier, stands for spiking neural network architecture, and the motivation for SpiNNaker is based on the observation that the principal way the brain cells inside each of our heads communicate is through spikes. A spike is a little impulse; I think of it as just going “ping” every so often. So all the thoughts you’re having are spatio-temporal patterns of “pings” inside your head, and that’s quite a weird thing to think about, but the SpiNNaker machine was built to provide a platform upon which we could build realistic real-time models of brain functions.
Geof Wheelwright: That is pretty amazing. So at its heart, what are the benefits of spiking neural networks?
Steve Furber: The obvious benefit in the case of SpiNNaker is that if you want to understand the brain, spikes are fundamental to how the brain works. But if you’re more interested in engineering applications of neuromorphic systems, for example to implement AI, then spikes have the potential benefit of enabling you to build much more power-efficient systems, because they’re inherently event-based, so energy is only used when something interesting happens. And they can also operate very sparsely, so only a very small part of the system is active at any time, and that can reduce the energy compared with conventional AI networks, which do heavy computations throughout the network all the time.
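The event-based, sparse operation Steve describes can be illustrated with a toy sketch. This is hypothetical Python for illustration only, not the SpiNNaker software: with spiking inputs, only the synapses of neurons that actually fired need to be touched, so at roughly 1% activity, roughly 1% of the multiply work gets done.

```python
import numpy as np

def dense_step(weights, activations):
    """Conventional ANN layer: every weight participates on every step."""
    return weights @ activations

def event_driven_step(weights, spikes):
    """Event-driven update: only the weight columns for neurons that
    actually fired contribute, so work scales with activity, not size."""
    active = np.flatnonzero(spikes)          # indices of neurons that fired
    if active.size == 0:                     # no events: no work, no energy
        return np.zeros(weights.shape[0])
    return weights[:, active].sum(axis=1)    # accumulate only active inputs
```

With binary spikes the two functions give the same result, but the event-driven version visits only the active columns.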
Geof Wheelwright: So Steve, that brings us to the SpiNNaker project itself. What is this exactly? And how did the idea for the project come about?
Steve Furber: As you said, my background goes back to the eighties with Acorn and the early development of the first Arm processor. And I spent 20 years building, if you like, conventional computing systems. In the nineties I was building asynchronous, and therefore slightly unconventional, processors.
But over those 20 years, processors had become formidably more powerful, yet they still couldn’t do things that humans find easy, so I became fascinated by the fundamental differences between computing systems and biological brains. And that led me to thinking about where the problems were in understanding the brain, and what I, as a computer engineer, could do to contribute to accelerating our understanding of how the brain functions.
Geof Wheelwright: So, Steve, now you have me wondering about the technology used in SpiNNaker, how you went about choosing it, and why it was the right choice.
Steve Furber: We spent quite a long time thinking about what was the right architecture for building brain models. And the first key idea we came up with was a novel way of interconnecting neurons inside the computer model. And we thought for a long time about how we build little hardware engines that could support the kind of models that computational neuroscientists use. What we realized was that those models were not particularly stable.
In particular, learning rules, the way that brains adapt their connections to learn new concepts, and the models that computational neuroscientists use keep changing. If you want to build a hardware platform for supporting a rapidly changing set of models, then the obvious way to do it is to use software for those models. So having conceived of the interconnect mechanism first, we then moved away from using hardware engines to using programmable engines. The problem of modeling the brain is in the embarrassingly parallel class, so you can break it up into as many small pieces as you want, and the most efficient way to compute an embarrassingly parallel problem is to use very large numbers of relatively small and simple processors. And with my background linked to Arm, Arm was the obvious choice. Indeed, we used quite a low-end Arm as the basis for SpiNNaker 1.
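Steve’s point about the embarrassingly parallel decomposition can be sketched as follows. This is a hypothetical Python illustration, not SpiNNaker’s actual software (which runs C on Arm cores): a population of simple model neurons splits into blocks that update independently within a timestep, with spikes exchanged between steps.

```python
import numpy as np

def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """One leaky integrate-and-fire timestep for a block of neurons.
    Within a timestep a block needs only its own state plus its inputs,
    so blocks map naturally onto many small independent cores."""
    v = v * leak + input_current
    spikes = v >= threshold
    v = np.where(spikes, 0.0, v)   # reset the neurons that fired
    return v, spikes

def step_in_blocks(v, current, n_blocks):
    """Split the population into independent blocks and update each one
    separately, mimicking the embarrassingly parallel decomposition."""
    results = [lif_step(vb, cb)
               for vb, cb in zip(np.array_split(v, n_blocks),
                                 np.array_split(current, n_blocks))]
    return (np.concatenate([r[0] for r in results]),
            np.concatenate([r[1] for r in results]))
```

Because no block needs another block’s state inside a step, splitting into 4 blocks or 4 million gives the same answer; only spike messages cross block boundaries.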
Christian Mayr: Actually, I would completely second that. I mean, my group got started in the early 2000s in neuromorphic design, and we made exactly that mistake of trying to streamline circuits too much for a given application. And by the time we had the chip, the theorists had already moved on and the model was something completely different.
So we tried that a couple of times, and then we went to much more configurable analog hardware. But with analog hardware there is this problem where the configuration space just gets too large. So I’m completely with Steve on that one: use a parallel digital system for this kind of exploration.
Geof Wheelwright: Thank you for that, Christian, and that brings us to SpiNNaker 2 and the work teaming up with the University of Technology Dresden. So Christian, can you tell us a bit more about the collaboration on SpiNNaker 2 and the differences between it and SpiNNaker 1?
Christian Mayr: Yes, certainly. So as I already said, we were starting our neuromorphics in the early 2000s. We were actually working with Siemens-Infineon in the mid-90s on the first wave of AI hardware. Then we got side-tracked into brain models, and basically we were hardcore electrical engineers, so we were working a lot on very down-to-earth, transistor-level analog stuff, but also communication circuits and multi-processor systems for various applications, and we were part of precursor projects of the Human Brain Project.
That’s when Steve became an associated partner there, and that’s how we all ended up in the Human Brain Project. And in 2013, at the beginning of the Human Brain Project, a couple of us flew over to Manchester and presented what we had been doing in other projects, like these processor systems on chip. And basically we realized we match up perfectly. Steve has all the system-level, high-level knowledge, and we do the down-to-earth, basically transistor-level engineering, which lets us build a far more powerful system than the old SpiNNaker was, in addition to using new technologies and a new Arm core.
Geof Wheelwright: Yeah, and you were talking about the European Union and the Human Brain Project, maybe you can give our listeners a bit more detail on what that project is and how it came about?
Christian Mayr: So the way I see it, there are different stories, but the way I see it, it came from two projects. One was Henry Markram’s Blue Brain Project, making very detailed brain models and simulating them on large IBM-derived compute machines. And the other one was the one we were part of: the wafer-scale analog neuromorphic system that got built in Heidelberg. Basically Karlheinz Meier, the head guy in Heidelberg, as well as Henry, had proposed their own versions of projects for this flagship program of the European Union.
And I remember the session at Capo Caccia, the yearly neuromorphic workshop, where the two met up and basically said, let’s match this: let’s do the hardware, the kind of neuro-inspired ICT hardware, plus the brain simulation. And let’s do a big project that basically homogenizes brain models, builds large machines that can simulate those brain models, and derives new computational and AI principles from them. So that’s how the Human Brain Project, to my mind, got started.
Geof Wheelwright: Fascinating. And I understand that more recently you’ve evolved this to run in the cloud. Maybe you could both talk a bit about that, maybe starting with you, Christian?
Christian Mayr: So what we realized when we built SpiNNaker 2 is that, of course, as inherited from SpiNNaker 1, it’s mostly a machine for brain simulation, but we built in additional accelerators, which make the machine way more versatile.

Including, for example, multiply-accumulate operations, which is the standard operation used in deep learning and deep neural networks. So we just shopped around to see what the capabilities of the machine were and what it could be used for.
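The multiply-accumulate (MAC) operation Christian mentions is simple to state. The sketch below is illustrative Python, not the SpiNNaker 2 accelerator’s actual interface; it just shows the arithmetic that such dedicated hardware performs, one fused step per weight/input pair.

```python
def mac(accumulator, weights, inputs):
    """Multiply-accumulate, the core inner loop of deep-learning layers:
    acc += w * x for each weight/input pair. A hardware MAC unit performs
    each multiply-and-add as a single operation."""
    for w, x in zip(weights, inputs):
        accumulator += w * x
    return accumulator
```

For example, `mac(0.0, [1.0, 2.0, 3.0], [4.0, 5.0, 6.0])` returns `32.0` (4 + 10 + 18); a dense neural-network layer is essentially many such loops run in parallel.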
And my chair is traditionally connected to the automotive industry, to companies working in the Industry 4.0 field, and they need massively parallel real-time AI computing. And that’s what made us decide to offer this, not quite as a cloud service; it’s more an on-site, or at least geographically close, cloud, where you can still stay below a millisecond, even including the communication delays, and basically run something like a smart city with a localized cloud in real time.
Including all the sensor data and all the actuators that need to be driven at millisecond latencies. So it’s a real-time edge cloud.
Geof Wheelwright: And Steve, how do you see the evolution to the cloud?
Steve Furber: We’re already offering a form of cloud service through the Human Brain Project. The Human Brain Project has this brain science infrastructure called EBRAINS, and SpiNNaker and the other neuromorphic platform, the BrainScaleS platform from the University of Heidelberg, are the two principal neuromorphic computing platforms under EBRAINS. There are also many variants of high-performance computing and things like the brain knowledge graph and many brain models. But we’ve developed an understanding of how to turn this large SpiNNaker machine into an open service. We have something like, I think, 450 users out there submitting jobs, and it’s free to use. We developed the software infrastructure for managing the machine as a cloud service, and we are in the process of porting that capability onto the SpiNNaker 2 hardware, which has very recently become available in its full form. So we have been developing that in the background, and it will carry over to SpiNNaker 2.
Geof Wheelwright: Now, Christian, you mentioned the automotive sector as one of the commercial opportunities for the technology, but perhaps you could both talk a bit about the broader commercial opportunities. Maybe Steve, you could start, and then Christian add to what you’ve already said?
Steve Furber: Christian has much more direct connections with industry than I do, but I think there is growing interest in neuromorphic technology and its potential to complement, if not in some cases displace, mainstream AI solutions, because of the ability we discussed at the beginning of the talk to deliver AI functions at very low power. I think these applications will occur in a range of areas.
There’ve been demonstrations of applications such as keyword recognition. This is when you say Alexa or Hey Siri, or whatever the keyword is for your particular system; this is the thing that the system has to keep permanently switched on to respond to. So it is very important that it runs with as low power as possible. But I think Christian has a broader perspective on where the industrial applications might go.
Christian Mayr: So in essence, as Steve mentioned at the beginning, the brain is fundamentally different from current deep neural networks, in the sense that, for example, at any given point in time only about 1% of the neurons in the brain are active.
So the brain is very good at doing just the computation it absolutely needs to do to solve a problem, and that really starts with the eye. There are a hundred megabits or so impinging on your retina the entire time, but what gets transmitted across the optic nerve is only about a megabit. So there’s a hundred-to-one compression already.
There is stuff like saccades, the eye movements you’re doing, driven by the visual cortex, which is basically region-of-interest selection: you’re just looking at the parts of the image that interest you. That happens at every step in the human brain, or in other brains: condensing the information, driven by what you actually need to do to solve a certain task. And that generalizes across any number of fields, basically.
So I’m not necessarily talking about offline, batch-wise processing, the kind of shopping and preference analysis, but anywhere you have real-time sensors: cars, again, Industry 4.0, smart cities, et cetera. All these kinds of fields need this kind of streaming AI processing, mostly in real time, and this is where inspiration from the brain, and in particular SpiNNaker 2, really comes in and fits like a glove.
So very small microchips, basically, that do the necessary pre-processing, again a little bit like your eye, but we do it for radar, visual, audio, et cetera, really driven by the training algorithm.
So it’s solution-driven: you’re extracting specific features from the input, and it’s only that which you then transmit to this kind of cloud-scale processing on SpiNNaker, because that way you reduce the computational load for all the subsequent stages and are essentially way more energy efficient, with way lower latency. So that’s the big game.
Geof Wheelwright: Where do you see further potential opportunities? So maybe Christian, you want to start, and then Steve jump in?
Christian Mayr: As I already said, we have a new company, SpiNNcloud Systems, and they are securing contracts at the moment with the first smart city customers. So definitely smart city. Any type of online monitoring improvement, like predictive maintenance. Edge clouds, or even the AI driving around in the car, driving around in robots. We are also developing showcases where we really run the brain of a humanoid robot, for example, in the machine, connected in real time, of course, embodied inside a robot. So that’s for the kind of assistance jobs, human interaction, teleoperation, even telemedicine. So we’re branching out to all those areas, essentially.
Steve Furber: I would say that we’ve only fairly recently received the fully functional SpiNNaker 2 chip. There’s a whole lot of work to do in the coming years to build this up into a fully functional and serviceable system.
What happens beyond that? There’s still potential for shrinking the technology to more advanced processes, improving the density and energy efficiency. But I think before that what we have to do is find the killer application for this technology and then that will open up a whole potential future of new developments and applications.
Christian Mayr: Maybe another remark: we’ve recently started talking to international astronomy people, and there seems to be quite a lot of potential there. I mean, this is more science, it’s not a real commercial application, but they are moving away from processing astronomical data offline; basically they need it online, and they need fast feedback.
For example, feedback to the telescopes, in order to also adjust the kind of data extraction they’re doing, this kind of edge data extraction. So basically we will probably process data from the Square Kilometre Array on SpiNNaker in the near future. So yes, it’s just spreading out all over the place.
Geof Wheelwright: Well, there’s a lot going on and this is probably a pretty broad question, but maybe you can just kind of tell me a bit about what you see as what’s next for the SpiNNaker project? Steve, you want to kick us off with that?
Steve Furber: I’m still really interested in using this technology to advance our understanding of the brain.
I think understanding how the cortex performs its magic and how it interacts with all the other brain subsystems will represent a huge advance in human knowledge. It also has the potential to facilitate the development of treatments for diseases of the brain, which I’m told cost the developed economies more than diabetes, heart disease, and cancer put together.
So they’re hugely important economically, and they have a huge impact on the lives of those affected and those around them. So there’s the potential for huge quality-of-life benefits from understanding the brain, and I hope that, if you like, our original vision for SpiNNaker will continue and contribute to those developments.
Christian Mayr: I completely agree with Steve. So definitely, I’m also very focused on the neuroscience applications, which I think SpiNNaker 2 is a very good fit for, because we can really do large-scale models with the machine. I mean, SpiNNaker 2 will support maybe up to 10 billion neurons, and a thousand times that in synapses. So maybe about 10% of the human brain. But if you want to do a full human brain simulation, you don’t necessarily need to do all of it at this detailed level.
You could approximate entire areas of the brain that you’re not interested in with black-box models, and just use, for example, the deep neural network accelerators to train a black-box input-output model of a brain area and plug that in at a high level in this model.
For other parts of the brain, we could go very detailed, down to individual molecular channels. So that’s where I see us going in the neuroscience direction: implementing really large multi-scale models to understand the brain even more, and deriving new processing paradigms from that.
The first one is inference. That’s what we’re already doing on SpiNNaker 2, but moving towards SpiNNaker 3, I’m very much interested in learning. Because learning right now, backpropagation learning, is very inefficient, and it’s definitely not what biology does. And when you see those very large natural language processing or transformer models, their scaling is at the moment held back by learning.
When you look at those papers, they’re basically saying, “yes, we would have liked to run this a bit more, but it already took months to train the system and we can’t just rerun it.” The brain just does not do it like that. So we will also look a lot at brain-derived learning algorithms that match backpropagation in terms of the absolute performance of the network, but are way more efficient in energy and delay, basically. And those will then enter, for example, the SpiNNaker 3 design. We will definitely do numerical learning accelerators in SpiNNaker 3 as well.
Geof Wheelwright: Moving away from SpiNNaker technology and moving to the topic of AI as a whole. What excites you the most about what can happen in the future with AI and perhaps Christian, you could start us off and then close out with Steve?
Christian Mayr: So I’m actually very excited about what DARPA calls the third wave of AI, which in essence just means combining the old strands of AI again, because I think all of those, like the expert systems, like symbolic AI, describe parts of what the brain is doing.
Deep neural networks are maybe what the brain is doing in the lower sensory stages, but certainly not in the associative cortex; that’s more like symbolic AI. So I’m very much excited about bringing together these different facets of what the brain probably is doing into one homogeneous model. You get the efficiency from this neuromorphic spike-based paradigm that we were discussing, the deep neural networks let you interact with real sensor data, which symbolic AI never could do back in the seventies and eighties, and you need the symbolic AI for the high-level, abstract reasoning. So that’s where we are really moving towards generally, because those symbolic AI engines were very good in a controlled environment, like game engines, et cetera.
In turn, symbolic AI was very bad at running robots, for example, in a real environment with noisy or incomplete data, but deep neural networks are better at that. So I’m very much excited about bringing all of this back together.
Steve Furber: Yes. And continuing from that: throughout history there’s been concern over the term AI, or artificial intelligence, and whilst we’ve used it quite flexibly in this discussion, what we’re talking about really still isn’t AI.
Okay, it’s not artificial intelligence, it’s machine learning. It’s kind of advanced and very sophisticated pattern matching. But I go back to Turing’s 1950 paper, where he proposed the test which he called the imitation game, which we simply call the Turing test, for human-like artificial intelligence, and still no machine has convincingly passed this test, even 70 years later.
And many workers since him have thought that developing human-like artificial intelligence was only a matter of writing the right code, building the right programs. I think there is still a fundamental piece missing in understanding what it is the brain does that makes us us. And of course this boils down to self-awareness, or consciousness, and I would love to see some insights emerging into what the foundation of consciousness is.
Is it simply an emergent property of a complex system of spiking neurons, or is there some higher level of science that we don’t fully understand that has to be brought into this?
I think these are fascinating questions and they’re about understanding ourselves which I think is always of interest to humans.
Geof Wheelwright: Thanks so much to you, both for those amazing insights. I feel smarter already, although it sounds like SpiNNaker 2’s already smarter than me and it’s not even deployed yet. And I’m sure our listeners are excited to learn about this fantastic research and its potential impact on all our lives. Thanks to everyone for joining us today.
We hope you enjoyed it and look forward to joining you again soon on the next episode of Arm Viewpoints.