Geof: Good day and welcome to the Arm Viewpoints Podcast, Episode Two: Ethical AI Leadership for Turbulent Times. In this podcast, we strive to deliver thought-provoking discussion, insight, and hopefully a little inspiration about what people can achieve using technology. Our topic today is extremely timely. Let’s face it, the last year or so has been challenging, ever changing, and sometimes just downright rough. But in a number of ways, it’s also been a time of innovation, inspiration, and cooperation. It’s reminded us that we’re all incredibly connected and that good things can happen when we work together. Technology, of course, has played a huge role in enabling us to do that, and we’ve relied on it more than ever to manage in these turbulent times. Our guest today, Carolyn Hertzog, is someone who has had a front-row seat, and in fact has been an active participant, in finding the best ways for people to use technology to meet today’s challenges. Carolyn is EVP General Counsel and Chair of the AI Ethics Working Group at Arm.
Carolyn: Well, thanks very much for inviting me to be here with you today. I’m really excited to talk about this topic. It’s very timely, as you said, and I’m really delighted to share our story here at Arm and to talk about what we’ve been doing to navigate through these turbulent times.
Geof: Well, thank you. Now, we all know 2020 was a turbulent year for everyone. So perhaps to kick us off, you can tell us how you adapted to this personally, and how you’ve seen the technology industry change.
Carolyn: I think we were always travelling, we were always working on a pretty flexible schedule, and, you know, when we moved into a remote environment we were actually surprised in some ways by how well people did. It’s not that it wasn’t difficult, it was extremely challenging, but people really settled into the remote environment. We were able to support our customers, and we were able to connect with each other, and that connectivity, I know, is something we’re going to talk about. But it has obviously been a challenging year for people. It was a very anxiety-driven environment, and people were very unsure about what was happening in the global political environment. Arm has been through a lot of changes as a company as well, and that’s something that I think, as leaders, we’re all called upon to deal with.
Geof: Yeah, I think all those things are really true. And particularly that whole thing of everybody having to navigate it together, because we’ve had that blurring of lines between home and office, and we’ve had the structure in a lot of ways stripped out from under us. Even a thing like travelling to work in the morning: you might have had a commute, and that might have been your thinking time before the day really started. But all of a sudden your commute is maybe from your bedroom to your home office, so you no longer have that thinking time, and you need to use other tools as a way of recreating structure. That’s a good segue into our next topic of AI. AI is becoming more ubiquitous as our lives have moved increasingly online in the past year. Given its increase in use, how well do you think the average person understands what role AI plays in their lives?
Carolyn: I think the average person doesn’t think of it at all. We talk about it a lot at Arm, we talk about it a lot in larger technology companies, and I think governments are starting to think about it a lot more. But most people haven’t thought about it, and weren’t thinking about it when it was being created. People thought about it in the context of movies, in the context of the sensationalization of AI, and probably not in such positive ways. And yet AI has been in our lives for a very long time, and it’s been used in lots of incredibly positive ways. It’s been used by doctors to assist in surgery and in diagnosis, it’s obviously been used in the legal field, and we’re using it every day to help us move faster and to assist us in many different ways. We use it daily to unlock our phones with facial recognition, and it’s being used in airports. But it’s also been used in ways that perhaps didn’t work out so well, and we see it in the news when facial recognition technology is used in a way that perhaps creates an unfair bias. That’s why we’ve been talking about it at Arm, and why other companies are talking about it: to make sure we’re thinking about this kind of framework, because technology is always going to move faster than regulation. That is the nature of technology.
Geof: So I know that Arm launched an AI ethics manifesto to ensure artificial intelligence is engineered to be ethical by design. And my understanding is that this manifesto provides an ethical framework to help ensure that AI is developed in a fair and responsible way. Can you tell us a bit more about why Arm did this?
Carolyn: The objectives have really been to encourage community participation and to influence other technology companies, and we’ve had other technology companies who will work on similar manifestos. And actually, in the end, the pillars that came out of the manifesto, which are things like eliminating bias, encouraging transparency, and promoting security, are not that unique; other companies have come up with similar ideals. But the operationalization of how we implement it will be very different. Because we are not producing a product that sells directly into the market, how we operationalize it at Arm, in terms of the work we’re doing, and I’m happy to share some of how we’re doing that, will be slightly different from how a Microsoft or an Oracle or an Amazon or other companies doing similar work might do it at their companies. But other companies are thinking about similar ideals.
Geof: Arm clearly has a key voice within the technology sector, and is able to use its position to promote positive initiatives in the industry, in particular its recent AI Ethics Manifesto. But what did Arm learn from this initiative?
Carolyn: I think what we learned is that a framework like this is very much needed. The technology will move faster than regulation; regulation couldn’t possibly keep up with this. There are certain overarching regulations, like the GDPR in Europe, that a framework will need to supplement. But without a framework for ethics, and without thinking about these big topical issues, again, like eliminating bias in the machine, regulation will never be able to keep up, you will end up with fragmentation of regulation, and artificial intelligence will never be trusted. And trust in a technology that can do this potential good in society is going to be necessary if artificial intelligence is to exist and complement what humans can do. In the legal community, for instance, we’re using artificial intelligence in our work every day. We’re no longer using lawyers and spending lots of money reviewing contracts every day; we’re using artificial intelligence to review reams and reams of paperwork, and we’re getting better and moving faster. It’s amazing what technology can do. At Arm, we’ve enabled a technology with an asthma inhaler that helps children and people suffering from asthma know exactly how much medicine they need to manage their condition, and it’s saving lives, and that’s remarkable. But the potential for harm is out there, the potential to weaponize technology is out there, and that’s the challenge without oversight. So, for instance, we have an oversight committee that is diverse by selection and that uses guidelines to look at the technology and say, well, did we have security in mind when we developed it? Because the technology may not be a technology that is being used for security, but could it potentially be used for harm if security is not designed in?
And so you’ll have wonderful devices that are being used for clean air or for clean water. Well, the purpose is clean air and clean water, but could it be weaponized for a nefarious purpose? If somebody is not making sure that that particular pillar is being thought about, then we’re missing that ethical framework. And if we help regulators, and if we think in advance and enable regulators to regulate around that, then we will avoid that fragmentation.
Geof: Yeah. And I think, too, when you look at things like self-driving cars, there are some interesting questions there in terms of an algorithm and what the ethics are, and how you solve problems. Like, there’s somebody right in front of the vehicle, but if you turn to the right there’s somebody else there. What do you get the vehicle to do, and what kind of inputs are needed to make the right, safe judgement?
Carolyn: They are very interesting choices. And if you’ve ever gone onto MIT’s or Harvard’s websites and taken any of those tests, they are truly fascinating. Because, certainly from my own perspective, one of the things I’m learning is that we’re not willing to take human ethical choices out of even things like self-driving. And it’s truly fascinating, because what it tells us is that there are human moral choices that are quite different from a legal choice. That is a really interesting social experiment.
Geof: Yeah, it is fascinating. And it gets me thinking about how much people actually understand about AI; the whole idea of black-box AI, the notion that people may see the data that goes into an AI decision and the resulting decision, but the intervening steps are a bit of an impenetrable black box to many people. It reminds me of kids learning how to do math: no teacher really has any idea whether a student properly understands the principles they’re trying to teach if all they get is the result to a question that was asked. The real proof is when they show their work, how they got there. So how does Arm approach this whole black-box AI problem?
Carolyn: When you think about the black box, that is a closed system: it takes in input, it produces output, and it offers absolutely no clue as to why or how. So what are we really afraid of? Should we be afraid? And what is innovation, really, but that marriage of science and invention, right? The black box is the thing hidden in there that should explain all the answers, and when we talk about AI and these pillars of explainability, we’re asking: what’s in the black box? There is this fear that AI will become more intelligent than humans. But is it really intelligence? Well, it’s not really intelligence; it’s about how it’s learning, as the algorithm continues to learn from what the engineers have told it to learn. So when we think about what needs to be explained, it is how it has made its decision. Not every decision has to be explained fully; you just have to explain the algorithm.
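Carolyn’s point, that you explain the algorithm rather than narrate every individual decision, can be made concrete with a toy sketch. The feature names, weights, and threshold below are entirely hypothetical and chosen only for illustration; the idea is simply that a transparent model lets you decompose any output into per-feature contributions:

```python
# Toy illustration of "explain the algorithm": a linear scorer whose
# decision can be broken down into per-feature contributions.
# All names, weights, and the threshold are hypothetical.

WEIGHTS = {"income": 0.4, "credit_history": 0.5, "age": 0.1}
THRESHOLD = 0.5

def score(applicant):
    """Weighted sum of normalized features (each assumed to lie in [0, 1])."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """How much each feature contributed to the final score."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 0.8, "credit_history": 0.2, "age": 0.5}
total = score(applicant)
decision = "approve" if total >= THRESHOLD else "decline"

for feature, contribution in explain(applicant).items():
    print(f"{feature}: {contribution:+.2f}")
print(f"total={total:.2f} -> {decision}")
```

Because the rule is fixed and visible, anyone can audit why an outcome occurred (here, a weak credit-history contribution drags the score below the threshold) without needing an explanation generated per decision. Real opaque models, of course, need dedicated explainability techniques rather than this kind of direct inspection.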
Geof: Well, I saw that you wrote last year about the work being undertaken by the US government to establish a set of AI regulatory principles. So what’s the state of play there? And what would your advice be to President Biden and his administration on what they should tackle first in relation to those principles?
Carolyn: First, I think we all anticipate that the new incoming administration is going to be more collaborative and engage with the technology sector in dialogue. It’s very difficult: we expect regulators to regulate, but they’re not all technologists, and so it’s very difficult for regulators to understand the technology. So we’re expecting more dialogue around the technology and more understanding about what the technology is capable of. And I think that to try and over-regulate is usually a mistake. I’ve talked a number of times about how fragmentation of regulation would tie the hands of the technology companies in terms of the potential of AI. So I think paying really close attention to data privacy, in particular, is a very important area of regulation that we need to focus on. Security standards, again, is another very important area. Those are the two areas that I think would be of primary importance.
Geof: So as you talk to other companies and you see what they’re doing, are they taking a position or a view on AI ethics? And if so, how does Arm’s approach differ?
Carolyn: The more we have honed in on this, the more we’re seeing that companies are actually coming to very similar standards. We think the principles are very similar, and then the question is: how do you operationalize it? How do you make it work? And that’s where companies struggle; it’s very difficult to make it work. We’ve established a steering committee for launching any artificial intelligence and machine learning product, to look at all of the pillars and say, are we actually testing across them and making sure that everything we do is consistent with the standards? We’re really in the infancy of that, and I think other companies are doing very similar things. Some of what we’d like to do is say, well, now that we’ve been doing this, can we compare? Benchmarking is always a great way to see how others are using these standards and to ask, did you learn something that we haven’t learned yet? I think a lot of companies are really in the beginning stages of implementing these standards and seeing how they work. And certainly we are very interested in learning from others and seeing how it’s working.
Geof: Let’s take a look ahead. I know it’s dangerous, after the year we’ve just had, to do any kind of predictions at all. But what do you see happening for Arm and AI ethics in the year ahead?
Carolyn: We are really still in the early stages of testing how we are launching our own operationalization of our manifesto and our standards. We’re doing training, we’re finding out what other companies are doing, and other companies, I think, are enjoying talking to us about what we’re doing. So in the years ahead, I’d like to see this becoming more of a standard. I’d like to see companies saying, yes, this was a great idea, now we’re all bought in; maybe companies can audit against this, so it could be an auditable standard. We’re interested in seeing what governments are doing. We’ve seen governments say, well, we’d like to create some policy around this, or we’re creating governance around this; we’ve talked about Singapore, the UK, the US, and we’d like to see some more coalition around that. This was once an idea, once a spark, and we’d really like to see it become something that truly is implemented as a standard way of operating.
Geof: So what will AI ethics look like in the real world?
Carolyn: I think it will look kind of the opposite of what we’re seeing in other areas of technology. We’ve seen techlash, and I don’t think companies truly intended to do harm; they moved fast, and they did things that created some harm in ways they didn’t set out to do, but they nevertheless moved too quickly, and that created a techlash that perhaps was deserved in some areas. Arm is a company that believes in creating technology to do good, and it really permeates everything we do and the decisions we make, and I feel very proud of that. Working on something like this is about getting ahead of that potential for harm. I think the technology sector has to own that techlash; we as a society have to own this and make sure that, for our future societies, we are thinking about these ethical frameworks and that potential for harm. We are living in a world where climate change is a critical factor for the current administration, and the current administration has been very clear about what’s important. Race is a primary factor for this agenda as well. These are all factors that are part of the AI agenda.
Geof: Yeah. And I think that wraps up the three big themes I’ve heard here: trust, transparency, and reassurance, which you mentioned earlier. Those are somewhat inspirational, so I’m going to leave you with a chance to do a bit of further inspiring. If you can achieve your goals with ethical AI, what would the result be for the industry and for individuals?
Carolyn: For the industry, I think there’s a huge opportunity to collaborate while still achieving its technology goals. The industry has an opportunity to both promote its technology and promote trust, to protect its technology and grow, while doing something that’s incredibly good. So it’s an opportunity for trust and collaboration, and an opportunity to work with governments around the world so we can avoid fragmentation of regulation and really think together about how we want our technology to be received on a global basis, while making sure that it is an incredibly successful technology, and avoid the potential for harm that was never intended. We can really think together about how technology can be used for the intention with which it was created, and avoid potential harms that we never thought could happen. For myself, it’s seeing something through that was just an idea that others seemed to relate to. It’s been an incredibly interesting project. It’s really made me think beyond my initial boundaries and what I was thinking of that one day in the boardroom, and it’s sparked passionate ideals for others as well. So it’s been a tremendous project that I’m really happy has got to this point, and I hope that it continues to be a great success.
Geof: Thank you, Carolyn, that is inspiring. And that brings us to the end of our Arm Viewpoints podcast. We look forward to seeing you again. Thank you.