Arm Newsroom Podcast

Tech Unheard Episode 7: Mark Chen

Summary

Mark Chen, Chief Research Officer at OpenAI, joins Arm CEO and host Rene Haas for a conversation about pioneering new frontiers in AI.

Mark tells Rene about his roundabout path to AI research and how he integrates research and product development to drive scientific progress at OpenAI.

Before leading research at OpenAI, Mark Chen was a self-proclaimed “late-bloomer” to computer science. He pivoted from an initial career in finance to heading up OpenAI’s frontiers research, where he led the teams that created DALL-E, developed Codex, and incorporated visual perception into GPT-4.

Speakers

Rene Haas, CEO, Arm

Rene was appointed Chief Executive Officer and to the Arm Board in February 2022. Prior to being appointed CEO, Rene was President of Arm’s IP Products Group (IPG) from January 2017. Under his leadership, IPG shifted its focus to key solutions for vertical markets, with a more diversified product portfolio and increased investment in the Arm software ecosystem. Rene joined Arm in October 2013 as Vice President of Strategic Alliances; two years later he was appointed to the Executive Committee and named Arm’s Chief Commercial Officer, in charge of global sales and marketing.

Mark Chen, Chief Research Officer, OpenAI

Mark Chen is the Chief Research Officer at OpenAI, where he oversees advanced AI initiatives, driving innovation in language models, reinforcement learning, multimodal models, and AI alignment. Since joining in 2018, he has played a key role in shaping the organization’s most ambitious projects. Mark is dedicated to ensuring AI developments benefit society while maintaining a focus on responsible research.

Transcript

Rene Haas[0:07]
Welcome to Tech Unheard, a podcast that takes you behind the scenes of the most exciting developments in technology. I’m Rene Haas, your host and CEO of Arm. Today, I’m joined by Mark Chen, Chief Research Officer at OpenAI. You might know them as the creators of ChatGPT and DALL-E. Before serving as Chief Research Officer, Mark was the Head of Frontiers Research at OpenAI, focusing on multimodal modeling and reasoning research. Mark has led the teams that created DALL-E, developed Codex, and incorporated visual perception into GPT-4. Now in his current role, Mark’s goal is to push the frontier of OpenAI’s science and research.

Rene Haas[0:43]
Mark Chen, welcome to Arm. You’ve come all this way to meet with me. Thank you so much.

Mark Chen[0:47]
Yeah, thank you for having me.

Rene Haas[0:49]
Yeah, no, good to see you in person. You know, they say when you read autobiographies or biographies, you know, start at the beginning, because the beginning will tell you a whole heck of a lot. So maybe just starting with where you grew up, how you got into technology the way that you did.

Mark Chen[1:03]
Absolutely. So it’s a really hard question to answer. You started with probably the hardest one, but I was born on the East Coast, but my parents were very nomadic, so we moved around a lot. My parents worked at Bell Labs and then they moved over to the West Coast. So I did part of my schooling on the West Coast-

Rene Haas[1:22]
In Holmdel, New Jersey?

Mark Chen[1:24]
They were in Edison. But in the Holmdel site.

Rene Haas[1:26]
In the Holmdel site. Yeah.

Mark Chen[1:27]
Exactly, yeah. So after that, you know, my dad got the startup itch, he moved over to California. We were there for a couple of years and then we went back to Taiwan after that. So that’s where I did my high school and part of middle school.

Rene Haas[1:39]
Oh, my gosh.

Mark Chen[1:39]
Yeah.

Rene Haas[1:40]
How old were you when you moved?

Mark Chen[1:41]
I think, uh, probably 12 or 13 years old.

Rene Haas[1:44]
So coming from the U.S. public school system, when you got into Taiwan, you’re like, Oh, my God.

Mark Chen[1:48]
Yeah, no. It was rebellion for half a year. But I think there is a kind of love where, you know, it starts as hate and you know, after six months, I really love Taiwan. You know, it’s just the center of the chip ecosystem. So yeah.

Rene Haas[2:03]
For sure. I am quite curious about coming from the- was it a public school or private school in the U.S.?

Mark Chen[2:08]
It was a public school.

Rene Haas[2:10]
So coming from the U.S. public school system at age 12 or 13 into Taiwan, did you find like, oh, my gosh, I got to catch up?

Mark Chen[2:16]
Well, it was a little bit of a culture shock, but I already liked math and science a lot. So a lot of it was just, you know, going from more of a kind of a free-spirited teenager to an environment where everyone’s wearing uniforms.

Rene Haas[2:29]
Right.

Mark Chen[2:30]
You have a fairly strict, uh, kind of teaching style. So that was the big adjustment. But I think it was good to get experience from both of these worlds, right? One where it was more about kind of learning for yourself and another environment where it was about discipline.

Rene Haas[2:45]
Gotcha, gotcha. And then did you do university in Taiwan?

Mark Chen[2:49]
No. Then I went back. So I did my college at MIT. There I studied math and computer science as well. But I was a late bloomer to computer science. Yeah. I only really started programming towards the end of my college career. One of my roommates kind of goaded me to do it. And I was like, you know, hey, you know, I think I’m going to be a mathematician, but I’ll try this thing out. And of course, it’s addictive, right? For sure.

Rene Haas[3:13]
Yeah, yeah, yeah. So, so math and science were your proclivities, but not so much into computers until you-

Mark Chen[3:20]
Yeah. Not, not into practical programming until later in life.

Rene Haas[3:23]
Yeah. Gotcha, gotcha. And then after that, you know, tell us kind of what you did, what was your first-

Mark Chen[3:28]
Yeah. So my first career was in finance. It really was a little bit of an accident as well. So I thought I was going to go into theoretical computer science and then on a whim, in my senior year of college, I took an internship at Jane Street and it showed me kind of the appeal of working in industry. It’s very pragmatic, but you still have a lot of the really exciting problems that you get studying in a more self-contained kind of academic environment. So that was exciting to me. I spent a couple of years at a hedge fund and then a couple of years at a high frequency trading firm as a partner.

Rene Haas[4:06]
We talked about this once before, but I think for the listeners it’d be quite interesting because that’s not exactly what OpenAI does.

Mark Chen[4:13]
No, not at all.

Rene Haas[4:15]
But there’s a lot of similarities in terms of just the way of thinking from a mathematics perspective, from what goes on with high frequency trading to AI models. What were the things that helped you bridge that from the stuff you had done in the finance world, which you did, what, for five, six years or so-

Mark Chen[4:29]
About five or six years.

Rene Haas[4:30]
To OpenAI?

Mark Chen[4:32]
Yeah. So I think the biggest thing is it teaches you rigor and experimentation. So, in the financial markets, there really is no kind of fudging the benchmarks or anything, right? You have a very hard evaluation, which is how much money your models generate. And you really have to be honest, principled, rigorous in your experimentation. I think a lot of that carries over to the science that we did at OpenAI in the early days.

Rene Haas[4:57]
Yeah. Were the traditional AI – traditional is a funny thing to say in such a new industry, but – transformer-based models doing the work when you were in the finance world or how would you kind of think about neural networks relative to the thinking world versus the quant slash financial space?

Mark Chen[5:15]
Yeah, so I would model finance as about two years behind the state-of-the-art in AI at any given point in time. So I remember when it was 2017, 2018, the time that I was exiting finance, people there were starting to catch on to neural networks, they were building out their first clusters. And, yeah, I was also looking into kind of neural-based modeling in the finance world, as well.

Rene Haas[5:41]
Right. And just looking back now, with so much happening obviously in the AI space, how has the finance world picked up on that, from what you’ve seen?

Mark Chen[5:49]
Yeah, so they’re also all in. But I will say one dividing line is you tend to get more AGI true believers on the tech side and people on the finance side today actually are still more AGI skeptical.

Rene Haas[6:04]
When you say AGI skeptical from the finance world, what do you mean?

Mark Chen[6:07]
Right, I think it’s just, straight up: if you ask, hey, do you believe that AI will fundamentally transform the shape of the economy in five years? They tend to have a more pessimistic view.

Rene Haas[6:16]
And why do you think that is? It’s funny you say that, because I do find that also. Not finance per se, but other industries that I interact with, who have different, either methodologies for scientific research or product development. However they do that, the skepticism bar is a lot higher than I would have intellectually anticipated. But in the finance world, why do you think that is?

Mark Chen[6:40]
I can only answer for the high frequency trading world where a lot of it is extrinsic to the pure modeling. So, I really do believe that AI can improve the modeling side of things. But there’s so much on latency, there’s so much about your private sources of data. So, I think there’s just so many extrinsic factors of alpha that maybe AI and modeling itself just really isn’t a huge part of it yet.

Rene Haas[7:05]
As I said, I run into this, you know, in other industries. And some of the things I hear back is, our problem cannot be modeled. The data sets either don’t exist and/or the problem is a far more complex model than AI or AGI could ever, ever assist with. You think that applies to the thinking in the high frequency trading world?

Mark Chen[7:23]
Well, I don’t know. We hear this refrain over and over again. I think you have to be in the world to really believe in it. Sometimes you just have to see the tech to start feeling the AGI, as we call it. And I think at OpenAI actually, going in, I would say I was somewhat of a skeptic when it came to AGI. I think much of the world was back then, and really just seeing the progress of the models, seeing the capabilities, that really opens your eyes to it.

Rene Haas[7:50]
Yeah. So, the story as to why you joined OpenAI is a cool one. Why don’t you maybe share that with folks?

Mark Chen[7:56]
Yeah. Well, I think over time it has a lot to do with finance and it also has a little bit to do with AI as well. But, you know, finance, it’s, I think, a hard industry for someone who wants to make an impact. And what I realized, having been in finance for many years, was that the set of competitors, it’s the same. Everyone gets a little bit faster. But at the end of the day, you’re still competing against the same people, you’re still competing on the same objectives, and the extrinsic world doesn’t change too much. And I felt like, you know, that wasn’t a very satisfying way to live out the rest of my life. And it felt like I needed to make some sort of change. At the same time, I saw what happened with AlphaGo, and that was equally inspiring, scary, you know, so many feelings wrapped up in one, and I just felt like I had to really get into that. So, some of the first projects I did were in reinforcement learning, just figuring out how do I train these Deep Q-Networks to play Atari games? And once you start taking on some of these projects, it’s very addictive.
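(For readers curious about the technique Mark mentions: a Deep Q-Network learns to act from a reward signal by regressing its value estimates toward a bootstrapped target. Below is a minimal PyTorch sketch of that update, an illustrative reconstruction rather than anything from OpenAI; the observation size, network shape, and hyperparameters are placeholder assumptions, and the replay buffer and exploration loop are omitted.)

```python
import torch
import torch.nn as nn

# Illustrative Q-network for a flattened, Atari-RAM-style observation (128 values)
# with 4 discrete actions; sizes and learning rate are placeholder choices.
q_net = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 4))
target_net = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 4))
target_net.load_state_dict(q_net.state_dict())  # start the two networks in sync
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-4)
gamma = 0.99  # discount factor

def dqn_update(obs, action, reward, next_obs, done):
    """One temporal-difference update on a batch of replayed transitions."""
    # Q(s, a) for the actions actually taken.
    q_sa = q_net(obs).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrap from a periodically synced target network for stability.
        next_q = target_net(next_obs).max(dim=1).values
        target = reward + gamma * next_q * (1.0 - done)
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```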

Rene Haas[9:02]
But OpenAI in 2018 was a very different company, obviously, than they are now. You know, it was much, much, much smaller. The charter was a little bit different, and I think that drew you a little bit, too, to kind of what-

Mark Chen[9:13]
It did. Yeah, it did. So I joined a nonprofit at the time, and in some sense you can view that as a reaction away from finance. So, I felt like I was in a very materialistic world and I wanted to do something with impact and I do feel like a lot of early OpenAI had that mindset of we believe in transformative AI, we want to bring it about to the world, we want it to benefit humanity. And that persists to the leadership today.

Rene Haas[9:41]
Absolutely. Yeah. It is very clear that Sam is a big believer in that. So, you’ve been there now seven, eight years. Are you surprised at how far it’s come in seven years, or do you look at it and say, you know what? There’s so much more to go? I mean, how do you think about the last seven years?

Mark Chen[9:56]
I think both are simultaneously true. So, I don’t think anyone could have predicted this trajectory of AI. All we could do was predict kind of what we call perplexity. So this is a measure of how accurately you can model, let’s say, human language. And you can predict, for a particular scale, what kind of accuracy you’re going to get. But what you can’t predict is the emergent capability that falls out of getting that level of accuracy. So, you know, when we got to GPT-2, it wasn’t obvious that, you know, this level of perplexity would mean that you get coherent paragraphs. And then with GPT-3, not clear that that level of perplexity would give you the ability to do in-context learning. And then GPT-4, you know, the ability to crush all of these college-level exams. So it’s really inspiring to see all of the emergent capabilities that come out of the model.
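(For reference, the perplexity Mark describes has a precise definition: it is the exponentiated average negative log-likelihood that a model with parameters θ assigns to a held-out sequence of N tokens.)

```latex
\mathrm{PPL}(x_{1:N}) = \exp\!\left( -\frac{1}{N} \sum_{i=1}^{N} \log p_\theta\!\left(x_i \mid x_{<i}\right) \right)
```

(Lower is better: it means the model assigns higher probability to real text. Scaling laws can forecast this number for a given model and data budget; as Mark notes, what they cannot forecast is which capabilities emerge once a given level is reached.)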

Rene Haas[10:55]
On the subject of scaling, and I wonder how you think about this: how much of the progress, or the scaling of capability, as you say, is a function of breakthroughs in the models, versus that we just have access to far more compute and can throw more dollars at it, whether it’s chips or power or whatever it is?

Mark Chen[11:15]
I think this was studied once and clearly both factors are very important. But I think the algorithmic insights and efficiency improvements have slightly outpaced the compute contributions.

Rene Haas[11:28]
And can you say a bit more about that? No secret sauce to be given away here, but anything that would give an indication of where the algorithms have just gotten smarter.

Mark Chen[11:37]
Right. Right. So I think today, you know, there’s a sweeping generalization of, we use transformers to train language models, but transformers themselves have evolved a lot over time, too. And I can talk about some earlier work. You know, we evolved the attention patterns of transformers. So I did some work early on with Rewon at OpenAI on factorized attention, and that’s an efficiency gain. There are other subtle things, too: where you do normalization in transformers, how you set up the aspect ratios. All of those things are things that you have to figure out and be very careful about.
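(The factorized attention Mark refers to is presumably the pattern described publicly in OpenAI’s Sparse Transformer work. The sketch below shows the core idea as a boolean mask in PyTorch: each position attends to a recent local window plus every stride-th earlier position, rather than to all predecessors. It is a simplified illustration of the pattern, not the optimized kernels used in practice.)

```python
import torch

def strided_sparse_mask(seq_len: int, stride: int) -> torch.Tensor:
    """Boolean attention mask: position i attends to a recent local window and
    to every stride-th earlier position, instead of all O(n^2) pairs."""
    i = torch.arange(seq_len).unsqueeze(1)  # query positions, shape (n, 1)
    j = torch.arange(seq_len).unsqueeze(0)  # key positions, shape (1, n)
    causal = j <= i                         # no attending to the future
    local = (i - j) < stride                # the last `stride` tokens
    strided = (i - j) % stride == 0         # periodic "summary" positions
    return causal & (local | strided)

mask = strided_sparse_mask(seq_len=16, stride=4)
# Applied by setting disallowed logits to -inf before the softmax, e.g.:
# scores = scores.masked_fill(~mask, float("-inf"))
```

(With a stride on the order of √n, the attention cost drops from O(n²) toward roughly O(n√n), which is the kind of efficiency gain described.)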

Rene Haas[12:10]
Is it transformers till the end of time or do you think we get where there’s an S-curve, where transformers run out of runway and there’s something next?

Mark Chen[12:18]
It may be possible that something takes over, but I think the longer that transformers are here and the longer that they remain dominant, the more we co-design around them. So we will build chips that are efficient there. They become the benchmarks, right. And we will build kernels for them. So it just – the bar keeps getting higher for something to overtake transformers and it’s hard to see that-

Rene Haas[12:39]
Are we there already? Because I do wonder, to your point, the model has been established, hardware architectures, you know, morph to it, scientists know it. And you may have this bias that starts to run in, to say, if everything is transformer-based in terms of how I develop the next data sets and models – that’s how it’s going to get done.

Mark Chen[13:03]
Yeah, I think transformers, they’re popular because they’re a really good balance of simple and expressive at the same time. So they give you all of the mixing primitives that you need, a highly expressive mixing primitive, but they’re also very simple, which allows you to scale them and engineer them in a fairly straightforward way without too much gymnastics. So I think it’s a very well-suited architecture.

Rene Haas[13:26]
Yeah. I don’t know how much you interface with potential clients, but do you hear the safety thing increasing now, given the capability of certainly [GPT-]4.5 and [OpenAI] o3?

Mark Chen[13:35]
Yeah, I think so. And you know, today is a world where we’re moving towards AI with connectors. So these are AI models that can plug into your email, your Google Docs, your Slack. And I think that poses real risks. People today are fairly good at jailbreaking models, too, if there’s a very motivated hacker. So what is the implication, right? You could imagine that someone motivated extracts all that information away from you or they’re able to launch some kind of coordinated attack.

Rene Haas[14:03]
Right, so then literally at the source code level, you could just simply put things inside the model so that when those queries are requested, they’d be rejected.

Mark Chen[14:12]
Right. Yeah. So I think there’s even the possibility you allude to where, if there’s a bad-faith model developer, they could create a model and bake certain behaviors into the weights, where there’s some kind of trigger for that happening.

Rene Haas[14:27]
Yeah. No, it’s interesting because I remember attending the very first AI Safety Summit which was, gosh, almost two years ago and there was a lot of talk on it then. And maybe it’s just the circles I’m moving in, I haven’t heard as much on it, even though the models have progressed incredibly since then. So that makes a ton of sense. You’ve done a lot of work yourself, I know, on multimodal, and that’s yet another extremely cool frontier. Are the architectures today well suited for multimodal in terms of, it’s a big data problem, right? The data sets are really gigantic, back to the transformer piece. Just talk a little bit about multimodal and where you think that goes.

Mark Chen[15:09]
Absolutely. Yeah. This is a topic near and dear to my heart because one of the first projects I did was trying to apply transformer architectures to multimodal data sets. So one of the big papers that I’m still very proud of today is called Image GPT. And that was a proof of concept that you could do image generation using the same text transformer stack, right? And all you have to do is view images as a language in this very special vocabulary of pixels. And I think at the time, you know, you had different architectures like GANs or VAEs that generated images, and used transformers for text. And I think the importance of proving out this approach is it paved the path for something like DALL-E, right? Where when you have image generation, you want it to be steerable image generation. So you want text to be the language by which you specify the image you want. And in that world, if you have a different architecture for text and images, it’s just not satisfying, right? You would want to be able to just throw all of your data into the same model and train it to be able to output the image. So since then, we’ve expanded on that idea and created [GPT-]4o, and this is a model we launched last year, and it really showed the strength of a fully multimodal integrated approach where you just throw audio, images, video, text all at the same model and you can get, you know, really emotive speech. You can get images like the image generation launch that we just did, and that was so fun.
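(To make “images as a language in this very special vocabulary of pixels” concrete, here is a minimal sketch of the tokenization step: Image GPT raster-scans an image into a 1-D sequence of discrete colour tokens and trains the same next-token objective used for text. The per-channel quantization below is a simplified stand-in; the actual paper used a 9-bit k-means colour palette.)

```python
import torch

def image_to_token_sequence(img: torch.Tensor, palette_size: int = 512) -> torch.Tensor:
    """Raster-scan an (H, W, 3) uint8 image into a 1-D sequence of discrete
    colour tokens, so a text-style transformer can model it autoregressively."""
    levels = round(palette_size ** (1 / 3))  # 8 levels per channel for a 512 palette
    q = (img.long() * levels) // 256         # quantize each channel to [0, levels)
    tokens = (q[..., 0] * levels + q[..., 1]) * levels + q[..., 2]
    return tokens.flatten()                  # raster order: a "sentence" of pixels

img = torch.randint(0, 256, (32, 32, 3), dtype=torch.uint8)
seq = image_to_token_sequence(img)  # shape (1024,), values in [0, 512)
# `seq` can now be trained with the same next-token objective used for text.
```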

Rene Haas[16:42]
Yeah, super, super cool. But it also seems like – and I guess if you’re in the world of selling power or selling data centers or selling chips – a good thing, because the data set just explodes in that world.

Mark Chen[16:54]
Right. Yeah. I think there’s so much intelligence locked up in multimodal data and just the ability to expand your data set by orders of magnitude through that and be able to try to figure out how to unlock that is important.

Rene Haas[17:06]
One of the things that comes up when I talk to researchers in different fields and different areas, back to this “the problem is too hard for AI/AGI to solve” idea: when you think about real-world problems, whether it’s around chip design, whether it’s around pharmaceutical research, whether it’s around drug discovery, linked to that. Where are areas that you look at and say these are really, really tough problems, that we’re not there yet, but we can get there?

Mark Chen[17:35]
Right. So I think everything that you mentioned above falls in that class. And to back up there, this is one of the big motivations of us working on reasoning. So our reasoning models today, they give us this hammer where we can take a smaller amount of data and learn efficiently on it to get the same level of capability. So, many of these verticals, like you mentioned, they’re not going to have as much data as the whole Internet. And you need a technique that allows you to productively spend compute learning from all that data. And that’s what reasoning gives us. It gives us a tool for doing that. So we’re actually very excited about the possibility of models getting very specialized in these areas. We’ve already seen that with things like math and computer science, right? These models are so good at solving math problems right now and so good at, you know, coding up very difficult algorithms. And I think a lot of these techniques can be adapted to things like drug discovery, like you say.

Rene Haas[18:32]
Right. Are they good at invention? On one level, they’re awfully good at solving problems that are bounded problems,

Mark Chen[18:38]
Yes.

Rene Haas[18:39]
Right, where the answer is known or there is an answer that you can get to. But what about areas where there isn’t an answer today? Invention being a broad term for that.

Mark Chen[18:51]
So I have the slightly controversial claim that they’re better at invention than we think. And the reason I think that is, you know, we’ve entered these models into some of the hardest algorithms competitions in the world. And, you know, I have a lot of experience in these competitions. They’re often designed to be anti-pattern problems. Basically, what makes a good problem is you can’t fit it to a set of known techniques, right? Otherwise, all the contestants are going to know how to solve it. So what we find is oftentimes these models surpass our expectations on these more ad hoc problems, where you have to come up with something that really doesn’t fit any pattern. And it does really surprise me on some of these problems; it can be very creative in spots where you don’t expect it to be. And I think you do speak to a good gap, though. There is some sense in which they are models which take tasks and then give you responses to tasks. And you would ideally love them to propose tasks and have taste, in some sense, of which kinds of things are hard, you know.

Rene Haas[20:02]
Can they be solved? And how much of that is we need more data, and/or models closer to how the human brain invents, based upon having either not seen something or been exposed to something?

Mark Chen[20:15]
Right. Right. And I think today, models probably already do have a sense for what intuitively is, you know, aesthetic to a human, right? And I think you can leverage that to figure out hypotheses that, you know, would be interesting to humans as well. So I think it probably has this latent notion of what innovation looks like, but we should just figure out how to tap into that.

Rene Haas[20:41]
Yeah, it’s always a thing I’ve thought about with AGI, you know, if I make it analogous to a human brain, where the AI today is incredible – given enough data and enough time, it’ll figure out any problem. Yet humans aren’t always exposed to everything.

Mark Chen[20:53]
Right.

Rene Haas[20:54]
Yet somehow we figure out paths and ways to learn things, having actually never been exposed to the entire data set. And, to some extent, everyone’s got their own kind of weird definition of AGI. To me, that is a bit of AGI: being able to learn and invent without actually having all the data fed into it.

Mark Chen[21:11]
Absolutely. And that’s the way we’re going, right? It’s like we’re developing more and more data-efficient algorithms where you’re learning how to reason and draw insights from less and less data. I think one challenge, though, that I’ve personally reflected on for a while is how much of invention is interpolation. And I think it’s actually a higher fraction than we think. You know, having seen the math world for a while, actually, a lot of very cool results are, you know, you take one person who’s a geometer and some person who’s an algebraist, and they link together some patterns that they’ve discovered in their own fields and bring it together. And I think maybe there’s a lot less true, pure innovation than we think.

Rene Haas[21:59]
Can AGI replace or augment entrepreneurship?

Mark Chen[22:02]
Great question. So I think it accelerates entrepreneurship, in the sense that I think if we really do coding models right, we create some sort of interface where a human doesn’t have to learn how to code to produce something that they can ship to the world, right. If you can say, hey, I want to build an app that connects riders to drivers, right, that can just get built. You know, the model thinks for a long time, makes it robust, and deploys it to the world.

Rene Haas[22:33]
So many industries have been washed out by technology advancements. And there’s always the doomsayers who say this is going to eliminate jobs, etc., etc. – stenographers and typewriter tools, for example, that were replaced. Coders are a different species, obviously. Um, and there’s one axiom that says the efficiency of these models will generate the ability for more coders, versus another axiom that says, five years from now, we won’t need as many as we’ve got today. Where do you lie on that?

Mark Chen[23:04]
Yeah, I mean, I just think there’s a lot of demand for services that isn’t met. And AI will be a point of leverage for people, for professionals especially. They’re going to be able to be 2X more productive or 3X more productive now and service that many more people. I think it will drive costs down, but maybe the demand is there at a lower cost, right. So I think it could be different for different industries, but I do think there’s a lot of demand out there given a cheap enough cost.

Rene Haas[23:29]
That’s kind of what I think, too. I mean, we’re in a bubble from the standpoint of where we live physically and the world that revolves around AI that is Silicon Valley. So much of the world yet still has a long way to go to embrace technology, let alone AI. But the speed at which this is moving is just amazing. I mean, just looking at your tools, your deep research tools and reasoning tools, the answers, they’re not textbooks anymore. They’re succinct; you’re actually talking to a human being.

Mark Chen[24:01]
Yeah, absolutely. I feel like really the bar for deep research is hours of economically useful work. And I think it lives up to that. We’ve gotten testimonials from people in biology, for instance, who say when it comes to drug discovery, I have a panel of scientists, and having them in a room for a couple of hours, they produce analysis similar to the model’s. So I think it really is exciting that we can do that kind of work.

Rene Haas[24:26]
If you think about the next number of years, what are the things that are in the industry’s way – not OpenAI’s way, the industry’s way – that say: gosh, if we had more blank – and GPU is not a valid answer here – we would be able to go a lot faster?

Mark Chen[24:43]
Honestly, I think for now we need a lot more researchers and ideas. Right now is such a fruitful time to be doing research. I think we’re limited just by the number of ideas we can come up with. Like you said before, if we could get the models to help automate that and help accelerate that, that would be phenomenal. But we need people to come up with ideas, implement ideas. But I do think another big thing is embodiment.

Rene Haas[25:12]
What do you mean by embodiment?

Mark Chen[25:13]
Right. Right. So we have a product called Operator. And what it does is, it’s a computer-using agent, meaning it takes as input computer screens and as output emits keyboard strokes or it emits click actions. And you can think of it as an interface to your digital work or your digital life, right? And when you think about the future, the extension of that is robotics, right? Like, essentially you’re building an AI brain for a robot which can act in the real world. And I think there, you know, you’ll have bottlenecks around how quickly we can make the hardware work.
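(In outline, a computer-using agent is an observe-act loop wrapped around a model. The sketch below is hypothetical, invented purely to illustrate the shape of that loop; none of these names reflect Operator’s actual API.)

```python
from dataclasses import dataclass

# Hypothetical sketch of a computer-using agent loop. Every interface name
# here is invented for illustration, not OpenAI's actual Operator API.

@dataclass
class Action:
    kind: str        # "click", "type", or "done"
    x: int = 0
    y: int = 0
    text: str = ""

def run_agent(policy, capture_screen, execute, max_steps: int = 50) -> None:
    """Observe the screen, ask the model for an action, execute it, repeat."""
    for _ in range(max_steps):
        screenshot = capture_screen()   # pixels in, like a person looking at the screen
        action = policy(screenshot)     # the model emits a click or keystrokes
        if action.kind == "done":       # the model decides the task is finished
            break
        execute(action)                 # actions out, applied to the real UI
```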

Rene Haas[25:51]
Do you think agents become everything over time? And we don’t really have apps, we don’t really have operating systems in the classic sense today in terms of you have to pull information out from a lot of different sources to get the answers. Do you think we get to a world where it’s just an agent that exists somewhere?

Mark Chen[26:10]
Absolutely. I would love to have an agent where you just query for all of the information that you need. It should just be behind this layer, right, and you don’t have to deal with it. I think Operator really does fulfill that promise if it works. And you ask, hey, you know, what is this information? It can check your Gmail and it can operate through any kind of interface you use on a computer.

Rene Haas[26:31]
And does it need to be a computer? In other words, can it be kind of anything? Some of the movies I love to watch are the Back to the Future movies, which, in one sense, get a lot of things wrong, but a lot of things are rather clever. In Back to the Future 2, there’s a scene where Marty McFly comes into his house and as he moves from room to room, the appliances in the room know the context. And they’re either making his meal or they’re turning on the news, or they’re getting his boss on a video Zoom call – which, by the way, is a video Zoom call that they were showing in the 1980s. Could you see a world where these agents are suddenly just now intelligent, running kind of everywhere, and back to your point, it’s all kind of hidden underneath the hood?

Mark Chen[27:10]
I think they should be. Yeah. And you can imagine we have a model in the cloud, right? It’s your OpenAI agent and all of these things are plugged into it. Right. And so they can understand the context as you go from one place to another.

Rene Haas[27:23]
You are, you know, on a personal professional level, working at the hottest company in this space in one of the hottest times in this space. What is it like? Do you pinch yourself saying like, Oh my God, like I can’t believe I’m actually in the middle of all this?

Mark Chen[27:35]
Yeah, every day. I mean, it’s a great privilege to work at OpenAI. You know, I started as a resident, actually, at OpenAI. And what that means is someone who didn’t have a Ph.D. in the field, right. And I’m grateful Ilya took a bet on me back then and trained me up in machine learning. And it’s completely unpredictable, you know? And I’m just so grateful.

Rene Haas[27:58]
Yeah. No, it’s exciting to watch. And, you know, having been around Silicon Valley my whole career and watching these shifts from PCs to Internet to mobile phones, seeing what’s going on here is, it’s a thrill to be part of. Mark Chen, thank you for schlepping all the way down to San Jose from San Francisco and joining us today. I really enjoyed it.

Mark Chen[28:18]
Absolutely. Me, too. Thanks so much for having me.

Rene Haas[28:27]
Thanks for listening to this month’s episode of Tech Unheard. We’ll be back next month for another look behind the boardroom door. To be sure you don’t miss new episodes, follow Tech Unheard wherever you get your podcasts. Until then, Tech Unheard is a custom podcast series from Arm and National Public Media. And I am Arm’s CEO, Rene Haas. Thanks for listening to Tech Unheard.

Credits[28:48]
Arm Tech Unheard is a custom podcast series from Arm and National Public Media. Executive producers Erica Osher and Shannon Boerner. Project Manager Colin Harden. Creative Lead Producer Isabelle Robertson. Editors Andrew Meriwether and Kelly Drake. Composer Aaron Levison. Arm production contributors include Ami Badani, Claudia Brandon, Simon Jared, Jonathan Armstrong, Ben Webdell, Sofia McKenzie, Kristen Ray and Saumil Shah. Tech Unheard is hosted by Arm Chief Executive Officer Rene Haas.
