What Leaders Need to Understand About AI

What’s the most overused word when it comes to AI? According to Geoffrey Hinton, one of the inventors of the modern LLM, it’s “hype”: AI is under-, not overhyped. He explains the power of AI coaches and assistants in healthcare, education, and the workplace, and what leaders most need to understand about the technology.

Video Transcript

00:00  Personal Assistants as Proxies

Parker Mitchell: When we last chatted, I think it was November or maybe October, you were two weeks into the university giving you a personal assistant. Can you share more about what it's been like having that? We may be able to extrapolate to everyone having the equivalent of that. 

Geoffrey Hinton: So fairly recently, I woke up earlier than usual. Up until that point, I'd been thinking maybe I don't need the personal assistant anymore, because when I look in my mailbox, there are only a few things to be dealt with. But the morning I woke up early, I discovered there were hundreds of things to be dealt with, because my personal assistant was just dealing with them. That was kind of essential. 

00:48  Learning Personal Preferences

Parker: And when you look at how she has learned about you, the way you might answer questions or how you would assess a situation, what has it been like as she's gotten to know you personally better? 

Geoffrey: It's been good. She's getting much better at knowing which questions I want to answer myself, which talks I might be interested in giving, and which talks I'm definitely not interested in giving. She can pretty much recognize my former students. 

To begin with, one of my former students would send me mail, and they'd get a very polite answer saying I was busy. And I remember students telling me: I got this answer from you, and it didn't sound like you. So now I tell my students, if you ever get a really polite answer, that's not me. 

01:38  AI as a Proxy & Specialized Assistants

Parker: You don't have time to write the full polite answer. So in some ways, she's acting as your proxy. She's learned how you see the world and is acting as a first filter. How might AI develop that proxy ability, to help people navigate how work and life might change as AI is able to automate more things? Do you see a world where people will have many different specialized assistants, or just one that knows them? Any thoughts on that? 

Geoffrey: It's a very good question. Why would you train one big neural net to do everything? Because that's more efficient in the long run: you can share what different tasks have in common. So there's always this tension between that and having a small neural net specialized to one thing, which doesn't have much training data. If you've got enough training data, specializing is a sensible thing to do, and we do have huge amounts of training data. 

It's quite sensible to have many small neural nets, each of which is only trained on a tiny fraction of the training data, and a manager who decides which neural net should answer each question. If you don't have that much training data, it's typically better to have one neural net that's trained on all the training data. And then, after you've trained on all the training data, you might fine-tune it to be a specialist in different domains. That seems to be a good compromise: train one neural net on everything, and then fine-tune it for particular domains. 
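Hinton's compromise, training one shared model on pooled data and then fine-tuning copies of it per domain, can be sketched with a toy one-parameter model. Everything here is invented for illustration: the two "domains", their slopes, and the training loop are assumptions, not anything from the conversation.

```python
import random

random.seed(0)

def fit(w, data, steps, lr=0.01):
    # Plain SGD on squared error for a one-parameter model y = w * x.
    for _ in range(steps):
        x, y = random.choice(data)
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

# Two made-up "domains" whose true slopes differ slightly (2.0 vs 2.6).
domain_a = [(x / 10, 2.0 * x / 10) for x in range(1, 50)]
domain_b = [(x / 10, 2.6 * x / 10) for x in range(1, 50)]

# 1) Train one shared model on all the data pooled together.
shared = fit(0.0, domain_a + domain_b, steps=2000)

# 2) Fine-tune a copy of the shared model on each domain separately.
tuned_a = fit(shared, domain_a, steps=500)
tuned_b = fit(shared, domain_b, steps=500)

# The shared model lands between the two slopes; each fine-tuned
# copy then specializes to its own domain.
```

The shared model captures what the domains have in common, and each cheap fine-tuning pass recovers the domain-specific behavior, which is the trade-off Hinton describes.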

Parker: I mean, it sounds like if I look through the history, people said, you know, it might do this, but it won't do a, b, c. And I think your answer sounds like it could be just a matter of time and scale, maybe data. 

Geoffrey: Go back ten years and take anything that people said it couldn't do. It's now doing it. 

03:18  How AI Helps Doctors & Patients

Parker: And so if we fast-forward ten years into the future, obviously the implications for society are huge. But on the positive use cases, health care is one. Tell us a little bit about why that is so personally important to you and how it could evolve over the next, let's say, five years. 

Geoffrey: Think about what a family doctor does, the sort of first line. The family doctor knows quite a bit about you, maybe knows something about your family, maybe even knows a few things about your genetics. But she's only seen a few thousand patients. Almost certainly, she's seen fewer than a hundred thousand patients in her life; there just isn't time. An AI doctor could have seen the data on millions of patients, hundreds of millions of patients, and could also know a lot about your genome and a lot about how to integrate information from the genome with information from tests. So you're gonna get much better family doctors with AI. And we're gonna get all sorts of things like that for CAT scans and MRI scans, where AI can see all sorts of things that current doctors don't know how to see. 

Parker: I brought up that example to a doctor who had looked at the interaction between radiologists and AI, and there were a few different scenarios. 

So one is, you know, AI is confident and the doctor is confident. Same diagnosis, obviously easy. 

Geoffrey: But if not, I would trust the AI. 

Parker: So they were doing a study on this. But what was interesting is if a doctor is, you know, confident it's x and AI is confident it's y, the doctor chooses to go with their own diagnosis. 

Geoffrey: Fair enough. 

Parker: Now if the doctor is not confident and the AI is also not confident, the doctor chooses the AI solution with the sort of human thinking of, like, well, if I'm not sure, I'll blame it on the AI for being wrong. I just thought the human nature of that feels so real and dangerous at the same time. 

Geoffrey: Yeah. I think that's telling us more about human nature than about what the optimal strategy is. 

Parker: Absolutely. And ways that we might misuse AI in the human-AI interaction. 

Geoffrey: The thing I know a bit more about, from a paper that's more than a year old now, is this: you take a bunch of cases that are difficult to diagnose. This isn't scans; you're given the description of the patient and the test results. And on these difficult cases, doctors get 40% of them right, an AI system gets 50% of them right, and the combination of the doctor and the AI system gets 60% right. 

And if I remember right, the main interaction is that the doctor would often make mistakes by not thinking about a particular possibility, and the AI system will raise that possibility. It'll have a list of possibilities, and when the doctor sees those possibilities, the doctor will say, oh yeah, the AI system is right there; I didn't think about that. That's one way in which the combination works much better: the AI system doesn't fail to notice things in the way a doctor often does. So it's already the case, and this was more than a year ago, that the combination of an AI system and a doctor is much better at diagnosis than the doctor alone. 

Parker: And what it sounds like the AI is doing is generating a scenario-specific checklist. Here's a range of different things, and it can do that very quickly, and a doctor can just look at it and go, no, no, no, oh, maybe this. It allows the doctor to apply quick system-one intuition to most of them and then pay more attention to the ones that seem important, rather than doing difficult system-two thinking across every possibility. 

Geoffrey: Yeah. So that's certainly one of the things that's going on. The other thing that's going on, of course, is you get the ensemble effect. If you have two experts who work very differently and you average what they say, you'll do better than either alone.
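Hinton's ensemble point can be checked numerically. The "experts" below are simulated, not from any study: each is modeled as an unbiased predictor with independent noise, which is the assumption under which averaging helps most.

```python
import random

random.seed(1)
truth = [random.uniform(0, 10) for _ in range(10_000)]

# Two simulated "experts": each unbiased, but with independent noise.
expert1 = [t + random.gauss(0, 1.0) for t in truth]
expert2 = [t + random.gauss(0, 1.0) for t in truth]

# Average their answers: independent errors partially cancel.
ensemble = [(a + b) / 2 for a, b in zip(expert1, expert2)]

def mse(pred):
    # Mean squared error against the true values.
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)

# With fully independent errors, averaging two predictors halves the
# error variance: each expert's MSE is near 1.0, the ensemble's near 0.5.
```

The gain shrinks as the experts' errors become correlated, which is why Hinton stresses that the two experts should "work very differently."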

07:13  Personalizing Education & Healthcare with AI

Parker: Anything that's processing vast amounts of data, finding patterns and similarities, and then identifying sort of promising candidates, for humans in that sort of collaborative model you mentioned, that's gonna power things. 

Part of that leads to my next topic, which is personalization. We'll be in a world where your biology is different from mine and from someone else's, so interventions on the medical side can be more tailored to each of us. Is there research currently going on around how that might change health outcomes? 

Geoffrey: I believe there is. I don't know as much as I should about this. But for example, in cancer, you'd like to use your own immune system to fight it, and you'd like to sort of help your immune system recognize the cancer cells. And there's many ways of doing that. I think AI is already being used to choose which things to mess with. 

Parker: Are most likely to work for your particular area. 

Geoffrey: So that would be individual therapy based on AI. And then, obviously, in education, AI is gonna be very useful. And, again, it's gonna be individual therapy for misunderstandings. An AI system that's seen thousands or millions of people learning about something and there are different ways in which different people misunderstand, that will be very good at recognizing for an individual person, oh, they're misunderstanding in this way. It's what a really good teacher can do. They're misunderstanding this way, and here's an example that will make it clear to them what they're misunderstanding. 

AI is gonna be very good at that, and we're gonna get much better tutors. We're not there yet, but we're beginning to get there. And I'm now happy to predict that in the next ten years, we'll have really good AI tutors. I may be wrong by a factor of two, but it's coming. 

Parker: On the AI tutor side of things for students, I think there was a study you referenced about how much better the outcomes are when people get individualized tutors. 

Geoffrey: Yeah. I don't have the citation for it, but the number I remember quite well, and I've seen it quoted elsewhere too, is that you learn about twice as fast with a tutor as in a classroom. And it's kind of obvious why. First of all, your attention doesn't lapse. You're interacting with somebody, so your attention stays on it. You don't just stare out the window and wait 'til the lesson ends. I spent a lot of my time at school doing that. 

Secondly, the person's attending to you and can see what you're getting wrong and correct it. In a classroom, you can't do that. So it's sort of obvious why a human tutor is gonna be much more efficient than a classroom. An AI tutor should be better than a human tutor eventually. Right now, it's probably worse, but getting there. My guess is it will be three or four times as efficient once we have really good AI tutors, because they will have seen so much more data. 

Parker: There's probably another element too, I would guess, which is motivation. I'm sure for you and many students, if it was an interesting topic, if it was framed in a way that captured our curiosity, we'd pay more attention. I guess AI tutoring will be able to do that at mass scale. 

Geoffrey: Yeah. For most of us, interacting with other people is the most important thing there is and the most motivating thing there is. And I think with AI tutors it'll be pretty motivating. Even though they're not people, you'll get the same kind of effect of someone paying attention to you and telling you interesting things. It will be very motivating. 

Parker: And 30 kids in a class might have 30 different things that are quote, unquote interesting to them, and AI tutoring will be able to tailor it to them. 

Geoffrey: Yeah. 

10:52  AI & People Symbiosis

Parker: So as you know, what we're doing at Valence is building an AI leadership coach. The goal is to personalize that learning and guidance at work. We were talking to an education company, and they said it's such a shame that everything we've learned in education about how to help people learn concepts seems to fly out the door the moment they step into the work world, and they're mostly left on their own to learn. 

So we're excited about that. Can you share how you can see that thread of learning continuing throughout someone's career and not just ending when school's over? 

Geoffrey: So I would relate this to the longer-term development of AI. AI is gonna be used everywhere, and it's gonna get to be very intelligent. If we can reach a situation where we get a symbiosis between people and AI, AI is gonna make the world much more interesting for people. Mundane things will just be done by AI, and this symbiotic relationship will allow people to learn much faster, have much more interesting lives. That's the good scenario, and I'm hoping we can get there. 

12:01  AI's Economic Impact

Parker: How should policy makers and CEOs be thinking about and paying attention to the wide range of outcomes that could emerge? 

Geoffrey: This very quickly gets you into politics, because what's gonna happen is that mundane intellectual work is gonna be done by AI, and that's gonna replace a lot of jobs. In some areas, that's fine. In health care, for example, if you could make doctors and nurses more efficient, we could all just get more health care. There's a more or less endless capacity for absorbing health care. We'd all like to have a doctor on the side who you can ask questions about all sorts of minor things you wouldn't bother your own doctor with, but you're quite interested to know why your finger hurts today, and stuff like that. 

Health care is great because it's elastic; you can absorb huge amounts of it, so it's not gonna lead to joblessness there. But there are other things where there's only so much of it you need, and it's gonna lead to joblessness there, I believe. Some people think it won't. Some people think it'll create new jobs. I'm not convinced. I think it's gonna be more like this: people used to dig ditches with spades, and now people who can dig big holes in the ground with spades aren't in much demand, because there are better ways of doing it. 

The worry is you'll get a big increase in productivity, which should be good, but the increase in goods and services that you can get from that big increase in productivity won't go to most people. Many people will get unemployed, and a few people will get very rich. That's not so much a problem with AI. It's a problem with AI being developed in the kind of society we have now. 

13:47  Techno-Optimism: Competing Views for the Future

Parker: So what would you say to the techno-optimists? Everyone can see a scenario in which AI takes the mundane off your plate, gives you personalized learning and tutoring, and supports you as you navigate this transition. But it seems like our social and political setup is not going to lead to that outcome. How would you square that circle? What advice would you give to people who just say it's gonna work out? 

Geoffrey: Yeah. My first piece of advice would be, do you believe that because it's convenient for you to believe that, or do you really believe it? Now, people are very good at believing whatever is convenient for them. I've seen a lot of that recently. I just think they're being very shortsighted. 

Parker: And if someone was self-aware enough to say, okay. I recognize that this might be convenient for me, and I'm willing to ask myself a question or two. What question would you want them to ponder? 

Geoffrey: One big question is, should AI be regulated? And I think regulation is gonna be essential, if we're gonna avoid some of the really bad outcomes. 

14:56  Media Coverage of AI

Parker: If you think of the media, what's one, if you had a magic wand, what's one change you would make to how they portray or cover AI? 

Geoffrey: It's interesting. I haven't thought about that because I don't have a magic wand. But I wish they'd go into more depth so that people would understand what AI is. People have used ChatGPT and Gemini and Claude, and so they sort of have some sense of what it can do, but they understand very little about how it actually works. And so they still think that it's very different from us. And I think it's very important for people to understand it's actually very like us. 

So our best model of how we understand language is these large language models. Linguists will tell you, no, that's not how we understand language at all. They have their own theory, which never worked: they could never produce things that understood language using their theory. They basically don't have a good theory of meaning. These neural nets use large feature vectors to represent things, and that's a much better theory of meaning. So I wish the media would go into more depth to give people that understanding. 

16:11  AI & Policy

Parker: If people did understand that, how do you think it would adjust the lens through which they view AI and the policy importance of regulating it? 

Geoffrey: I think they'd be much more concerned and much more active in telling their representatives we've got to regulate this stuff and soon. And in fact, people have talked a lot about will AI be able to regulate AI? I think that's wishful thinking. I think that's about as hopeful as having the police regulate the police. 

Parker: We've talked to some scientists who've been part of trials where AI generates concepts and scientists evaluate which ones seem the most promising. And it seems like a more effective way of making progress.

Geoffrey: Right now, yes. Right now, having AI suggest things and people make the final decision seems pretty sensible. I don't think it'll stay like that. 

17:07  Superintelligence and Creativity

Parker: Then it will continue to go up the ladder and get better capabilities. And what is superintelligence? Explain that to a layperson.

Geoffrey: Something that's just better than us at more or less everything intellectual. If you have a debate with it about something, you'll lose. 

Parker: And what about creativity? What about those things we consider essentially human? Will it be just as good as us? A thousand Picassos? 

Geoffrey: Maybe that'll come a bit later. Many people have suggested that because it's not mortal, it has a different view of things. But the idea that it's not creative, I think, is silly. I think it is creative. It's already very creative. It's seeing all these analogies, and a lot of creativity comes from seeing weird analogies. 

17:48  AI & Subjective Experience: A New Model

Parker: Is the LLM or the AI that we have today conscious? 

Geoffrey: I would rather answer a different question. I know this sounds like being a politician, but there are three things people typically talk about: Is it sentient? Is it conscious? Does it have subjective experience? They're all obviously related. There are a lot of people who say, very confidently, it's not sentient. And then you say, what do you mean by sentient? And they say, I don't know, but it's not sentient. That seems to be a silly position to hold. 

I would rather talk about subjective experience, because I think it's clear there that almost all of us have a wrong model of what subjective experience is. Suppose I have a lot to drink and I say, I have the subjective experience of little pink elephants floating in front of me. Most people think the words "subjective experience of" work like "photograph of". And if I have a photograph of a little pink elephant floating in front of me, you can ask where the photograph is and what it's made of. 

So if you think "subjective experience of" works like "photograph of", then you can ask, well, where is this subjective experience and what's it made of? And a philosopher will tell you it's in your mind, which is a kind of inner theater that only you can see. So let me give you an alternative model of what the words "subjective experience of" mean. I believe my perceptual system is lying to me. 

So I say to you: my perceptual system is lying to me, but what it's telling me would be true if there were little pink elephants floating in front of me. Okay? So I just said the same thing without using the words "subjective experience". What I'm doing is trying to tell you how my perceptual system is lying to me. We think there's this inner theater. There is no inner theater. The inner theater is as wrong a view of what the mind is as the view that the Earth was made six thousand years ago is of how the real world works. Almost everybody has this wrong view. They think there's an inner theater with funny stuff in it that only they can see. That's just rubbish. And once you see that, you see that a multimodal chatbot already has subjective experience. 

So I'll give you an example. Suppose I have a chatbot that can see, has a robot arm, and can talk, and I train it up. I put an object in front of it and say, point at the object, and it points at the object. Then I put a prism in front of its camera when it's not looking. Now I put an object in front of it and say, point at the object, and it points off to one side. And I say, no, the object's not there; it's straight in front of you, but I put a prism in front of your lens. And the chatbot says, oh, I see. The prism bent the light rays, so the object's actually straight in front of me, but I had the subjective experience that it was over there. 

Parker: Fascinating. 

Geoffrey: That is the chatbot using the words "subjective experience" in exactly the way we use them. It's saying, my perceptual system was lying to me because of the prism, but if it hadn't been lying, the object would be over there. 

21:04  The "Manhattan Project" for AI Alignment

Parker: If you had a Manhattan-style project to try to address some of the challenges of artificial intelligence, either socially or from a research or regulatory perspective, what would that Manhattan Project be? 

Geoffrey: Oh, I think there's one really essential question we need to figure out in the long run. There are lots of short-term things we need to do, but in the long run, we need to figure out: can we build things smarter than us that never have the desire to take over from us? We don't know how to do that, and we should be focusing a lot of resources on it. 

Parker: So alignment is at the core of that sort of Manhattan Project. Is there any KPI, I know that's gonna sound sort of mundane, but any KPI we could track to say, are we making progress on these alignment questions? 

Geoffrey: Well, my main worry about alignment is: how do you draw a line that's parallel to two lines at right angles? It's kinda tricky. And humans don't align with each other. 

22:10  AI Concepts: Probability Distributions

Parker: Is there a concept that is really important for people to grasp that is hard for you to explain in a way that a layperson can viscerally understand it? 

Geoffrey: I think often it's to do with probability distributions. People find the whole idea of a probability distribution hard to understand, hard to think of as a thing. In a large language model, you give it a context and it's trying to predict the next word, and it has a probability distribution over words. People find that hard to grasp. 

Parker: And it's crucial, because that's the science.

Geoffrey: And it's perfectly straightforward if you understand probability. But unless you understand the idea of a probability distribution, it's hard to grasp that changing the weights in the neural net, the connection strengths, changes the probabilities it assigns to all the various words or word fragments. That's a concept ordinary people find difficult to grasp. 
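The distribution Hinton describes can be made concrete with a tiny sketch. The four-word vocabulary and the raw scores (logits) below are invented for illustration; a real model produces scores over tens of thousands of word fragments, but the normalization step, a softmax, works the same way.

```python
import math

def softmax(logits):
    # Shift by the max for numerical stability, then normalize
    # the exponentials so they sum to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A made-up vocabulary and made-up next-word scores for some context.
vocab = ["mat", "dog", "roof", "piano"]
logits = [3.0, 1.0, 2.0, -1.0]

# The result is a probability distribution over words: non-negative
# numbers summing to 1. Changing any weight upstream in the network
# would shift every one of these probabilities.
probs = softmax(logits)
```

This is the "thing" people find hard to picture: the model's output is not one word but a weight over every possible word, and training nudges those weights.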

23:02  AI is Underhyped, Not Overhyped

Parker: What would you say is the most overused buzzword in AI right now? 

Geoffrey: Well, the most overused buzzword by critics of AI is definitely hype. For years and years, people have been saying AI is overhyped, and my view has always been that it's underhyped. 

Parker: I think that is a very important message to get out to people. I've seen the same thing: oh, there's hallucinations, AI is never gonna catch up. Exactly. And we've talked about the rough edges of the technology. There are always rough edges, but you have to look at the central engine of it, and the possibilities there are so powerful. 

Really appreciate the conversation. It's been enlightening. I enjoyed it so much, and I know our viewers and listeners will as well. So thank you. 

Geoffrey: It was a lot of fun.