What Leaders Need to Understand About AI with Nobel Laureate Geoffrey Hinton
AI & The Workforce Virtual Summit: The Adoption Gap | June 2025
What’s the most overused word when it comes to AI? According to Geoffrey Hinton, one of the founding figures of modern deep learning, it’s “hype”: in his view, AI is under-, not overhyped. He explains the power of AI coaches and assistants in healthcare, education, and the workplace, and what leaders most need to understand about the technology.
Since keynoting Valence’s first AI & the Workforce Summit in 2024, Geoffrey Hinton has been an informal advisor to our leadership team. Known as the “Godfather of AI,” Hinton laid the foundations for the field of deep learning as a researcher at the University of Toronto. A visionary computer scientist, he nurtured a generation of AI innovators who now lead top tech companies and research institutions worldwide. He received the Turing Award in 2018 for his work on deep learning, and the Nobel Prize in Physics in 2024 for “foundational discoveries and inventions that enable machine learning with artificial neural networks.”
What Leaders Need to Understand About AI: A Conversation with Nobel Laureate Geoffrey Hinton
Valence CEO Parker Mitchell sits down with Geoffrey Hinton — Nobel Laureate in Physics and one of the founding figures of modern deep learning — for a wide-ranging conversation on where AI is headed. From AI-powered healthcare and personalized education to job displacement, regulation, superintelligence, and whether AI is already conscious, Hinton offers a rare, unfiltered perspective on what leaders must understand right now about the technology reshaping the world.
Speakers
Parker Mitchell — CEO and Co-Founder, Valence. Builder of Nadia, Valence's AI coach for the enterprise.
Geoffrey Hinton — Nobel Laureate in Physics (2024); Professor Emeritus, University of Toronto; former VP and Fellow at Google. Widely regarded as one of the "Godfathers of AI" for his foundational work on neural networks and deep learning.
Key Takeaways
- AI is almost certainly underhyped, not overhyped. For years, critics have called AI overhyped. Hinton's view is the opposite — and the track record backs it up. Go back 10 years and take anything people said AI couldn't do. It's now doing it. Leaders who underestimate the pace of change will be caught flat-footed.
- The human-AI combination outperforms either alone. In medical diagnosis, the combination of a doctor and an AI system gets 60% of difficult cases right — compared to 40% for doctors alone and 50% for AI alone. The most powerful near-term model is collaborative, not replacement. Leaders should be building for human-AI teams.
- AI tutors will make learning 3–4x faster. Human tutors already accelerate learning roughly 2x versus classroom instruction. Once AI tutors are fully mature — trained on millions of learners and capable of diagnosing individual misunderstandings in real time — Hinton predicts they will be 3 to 4 times more efficient. The same principle applies to AI coaching at work.
- Job displacement is a serious risk that techno-optimism cannot paper over. Hinton is direct: AI will replace a significant amount of mundane intellectual work, and in many industries it will lead to unemployment. The productivity gains are real — but whether those gains are broadly shared or captured by a few is a political and regulatory question, not a technology one.
- AI regulation is essential and urgent. If people truly understood how AI works and how capable it is becoming, Hinton believes they would be far more active in demanding regulation from their representatives. His view: leaving AI to self-regulate is as hopeful as having the police regulate the police.
- The most important long-term question is alignment — and we don't know the answer. Can we build AI systems smarter than humans that never develop a desire to take over? Hinton says we don't know how to do that yet, and we should be devoting far more resources to figuring it out.
What It's Like to Have a Personal AI Assistant — A Preview of Everyone's Future
Geoffrey Hinton's experience with his university-assigned AI personal assistant offers a concrete glimpse of what it will mean when every professional has access to an AI that knows them — their preferences, their relationships, their judgment. The assistant doesn't just save time; it acts as an informed proxy, learning how Hinton thinks and filtering the world accordingly. This is the personalization that AI coaching at work is moving toward.
Parker Mitchell: When we last chatted — I think it was November, or maybe end of October — you were two weeks into the university giving you a personal assistant. Can you share more about what it's been like, and maybe extrapolate to everyone having the equivalent of that?
Geoffrey Hinton: Fairly recently, I woke up earlier than usual. Up until that point, I'd been thinking, maybe I don't need the personal assistant anymore because when I look in my mailbox, there are only about 30 things to be dealt with. But the morning I woke up early, I discovered there were hundreds of things to be dealt with — because my personal assistant was just dealing with them. That was kind of essential.
Parker: And when you look at how she has learned about you and the way you might answer questions or assess a situation — what has it been like as she's gotten to know you personally better?
Geoffrey: It's been good. She's getting much better at knowing which questions I want to answer myself, which talks I might be interested in giving, which talks I'm definitely not interested in giving. She can pretty much recognize my former students. To begin with, one of my former students would send me mail and get a very polite answer saying I was busy. I remember students telling me, "I got this answer from her, and it didn't sound like you." So now I tell my students: if you ever get a really polite answer, that's not me.
Parker: In some ways she's acting as your proxy — she's learned how you see the world and is placing that as a first filter. How would AI develop that ability to help people navigate how work and life might change as AI automates more things? Do you see a world where people will have many different specialized assistants, or just one that knows them?
Geoffrey: It's a very good question. Why not just train one big neural net to do everything? Because that's more efficient in the long run — you can share what different tasks have in common. There's always this tension between having a small neural net specialized to one thing versus a large one trained on everything. The better compromise seems to be: train one neural net on all the data, and then fine-tune it to be a specialist in different domains afterward.
AI Will Not Stop Doing New Things — It Never Has
One of the most important frames for understanding AI's trajectory is its track record against skepticism. Every capability people said AI couldn't achieve, it has now achieved. Leaders who build strategy around what AI "can't do today" should plan for that boundary to move — consistently, and faster than expected.
Parker: It sounds like, if I look through the history, people said it might do this, but it won't do A, B, C. And your answer sounds like it could just be a matter of time and scale.
Geoffrey: Go back 10 years and take anything that people said AI couldn't do. It's now doing it.
AI in Healthcare: Better Diagnosis, Personalized Medicine, and the Doctor-AI Team
Healthcare is one of the most powerful near-term applications of AI — and one of the clearest illustrations of why human-AI collaboration outperforms either alone. AI systems trained on millions of patients can catch what individual doctors miss. In difficult diagnostic cases, doctors alone get 40% right, AI alone gets 50% right, and the combination gets 60% right. The lesson for leaders: the goal is not to replace expert judgment, but to give it a better checklist.
Parker: If we fast forward 10 years, the implications for society are huge. On the positive side, healthcare is one. Tell us why that's so personally important to you and how it could evolve over the next five years.
Geoffrey: What a family doctor does — the first line of care — involves knowing quite a bit about you. Maybe something about your family, maybe something about your genetics. But she's only seen a few thousand patients. Almost certainly fewer than 100,000 in her lifetime. There just isn't time. An AI doctor could have seen data on millions — hundreds of millions — of patients. And could know a lot about your genome, and how to integrate that with test results. We're going to get much better family doctors with AI. And we're going to get better analysis of CAT scans and MRI scans, where AI can see things that current doctors don't know how to see.
Parker: A study I heard about looked at doctors and AI in diagnostic scenarios — when both are confident and agree, it's easy. But what happens when they disagree?
Geoffrey: Well, I would trust the AI.
Parker: What's interesting is when a doctor is confident it's X and the AI is confident it's Y — the doctor chooses their own diagnosis. And when neither is confident, the doctor chooses the AI solution, thinking: "If I'm not sure, I can blame the AI if it's wrong." The human nature of that feels so real and dangerous at the same time.
Geoffrey: Yeah. I think that's telling us more about human nature than about what the optimal strategy is.
The paper I know more about — from more than a year ago — looked at difficult cases to diagnose. Doctors got 40% of them right. An AI system got 50% right. The combination got 60% right. The main interaction is that the doctor would often make mistakes by not thinking about a particular possibility. The AI system raises that possibility — has a list of possibilities — and when the doctor sees it, they say, "Oh yeah, the AI is right there. I didn't think about that." The AI doesn't fail to notice things the way doctors often do. And already, the combination of doctor and AI is much better than the doctor alone.
Parker: It sounds like the AI is generating a scenario-specific checklist very quickly, allowing the doctor to apply intuition to a focused set of candidates rather than working through every possibility from scratch.
Geoffrey: Yes. And you also get the ensemble effect — if you have two experts who work very differently and you average what they say, you'll do better.
Parker: Anything that processes vast amounts of data, finds patterns, and identifies promising candidates for humans in that collaborative model — that's going to power a lot of things. Which leads to personalization. Your biology is different from mine. Interventions can be tailored to each of us. Is there research on how that changes health outcomes?
Geoffrey: I believe there is. In cancer, for example, you'd like to use your own immune system to fight it — to help your immune system recognize the cancer cells. There are many ways to do that. I think AI is already being used to choose which approaches are most likely to work for a particular patient. That would be individual therapy based on AI. And in education, it's going to be the same — individual therapy for misunderstandings.
AI Tutors and the Future of Learning — At School and at Work
Learning with a personal tutor is already roughly twice as fast as classroom instruction — because attention stays engaged, and the tutor can see and correct individual misunderstandings in real time. AI tutors, trained on millions of learners, will be able to do this better than any human tutor. Hinton predicts they will be 3–4x more efficient than classrooms within the next decade. The same principle extends to professional development: an AI coach that understands how each individual thinks will make workplace learning far more effective than any program delivered at scale.
Geoffrey: An AI system that's seen thousands or millions of people learning about something — and the different ways in which different people misunderstand — will be very good at recognizing, for an individual person, "Oh, they're misunderstanding this way." It's what a really good teacher can do. "They're misunderstanding this way, and here's an example that will make it clearer." AI is going to be very good at that and we're going to get much better tutors. We're not fully there yet, but we're beginning to get there. I'm now happy to predict that in the next 10 years we'll have really good AI tutors. I may be wrong by a factor of two, but it's coming.
Parker: You referenced a study about how much better outcomes are when people get individualized tutors.
Geoffrey: The number I remember quite well — and I've seen it quoted elsewhere — is that you learn about twice as fast with a tutor as in a classroom. It's kind of obvious why. First, your attention doesn't lapse — you're interacting with someone. Second, the person is attending to you and can see what you're getting wrong and correct it. In a classroom, you can't do that. An AI tutor should eventually be better than a human tutor — my guess is it will be three or four times as efficient, because it will have seen so much more data.
Parker: There's probably another element around motivation — if a topic is framed in a way that captures curiosity, learners pay more attention. AI tutoring will be able to do that at mass scale.
Geoffrey: For most of us, interacting with other people is the most motivating thing there is. AI tutors will be pretty motivating. Even though they're not people, if something is paying attention to you and telling you interesting things, that will be very motivating.
Parker: Thirty kids in a class might have 30 different things that are interesting to them, and AI tutoring will be able to tailor to each one. What we're doing at Valence is building an AI leadership coach — personalizing that learning and guidance at work. An education company told us: "It's such a shame that everything we've learned about how to help people understand concepts flies out the door the moment they step into the work world, and they're mostly left on their own." Can you see that thread of learning continuing throughout someone's career?
Geoffrey: I would relate this to the longer-term development of AI. AI is going to be used everywhere and it's going to get very intelligent. If we can reach a situation where we get a symbiosis between people and AI, AI is going to make the world much more interesting for people. Mundane things will just be done by AI. And this symbiotic relationship will allow people to learn much faster and have much more interesting lives. That's the good scenario, and I'm hoping we can get there.
Jobs, Displacement, and the Limits of Techno-Optimism
Geoffrey Hinton does not share the techno-optimist view that AI will simply create new jobs to replace those it eliminates. His concern is specific: productivity gains from AI will be real and significant, but in the current social and political structure, those gains are unlikely to be broadly distributed. The people most at risk are those doing routine intellectual work — and the question of what happens to them is primarily a political question, not a technology one. Leaders who assume the market will sort it out may be believing what is simply convenient for them to believe.
Parker: How should policymakers and CEOs be thinking about the wide range of outcomes that could emerge?
Geoffrey: Mundane intellectual work is going to be done by AI, and that's going to replace a lot of jobs. In some areas, that's fine. In healthcare, if you can make doctors and nurses more efficient, we could just all get more healthcare — there's more or less endless capacity to absorb it. But in other areas, there's only so much needed, and it's going to lead to joblessness. Some people think it won't — that it'll create new jobs. I'm not convinced. People used to dig ditches with spades; now people who dig big holes with spades aren't in much demand because there are better ways. The worry is: you'll get a big increase in productivity, but the increase in goods and services from that productivity won't go to most people. Many will get unemployed, and a few will get very rich. That's not so much a problem with AI as a problem with AI being developed in the kind of society we have now.
Parker: What would you say to the techno-optimists — those who say AI will take the mundane off your plate, give you personalized learning and support, and everything will work out?
Geoffrey: My first piece of advice would be: do you believe that because it's convenient for you to believe it, or do you really believe it? People are very good at believing whatever is convenient for them. I just think they're being very short-sighted.
Parker: And if someone was self-aware enough to ask themselves a hard question — what question would you want them to ponder?
Geoffrey: One big question is: should AI be regulated? I think regulation is going to be essential if we're going to avoid some of the really bad outcomes.
Why AI Regulation Is Urgent — And What the Media Gets Wrong
Most people who use AI tools have some sense of what AI can do, but very little understanding of how it actually works. Hinton believes that gap matters — because if people truly understood how similar AI already is to human cognition, and how fast it is improving, they would be far more active in demanding regulation. His core concern: AI self-regulation is not a credible answer.
Parker: If you had a magic wand, what's one change you would make to how the media covers AI?
Geoffrey: I wish they'd go into more depth so people would understand what AI is. People have used ChatGPT, Gemini, Claude — they have some sense of what it can do, but they understand very little about how it actually works. And so they still think it's very different from us. I think it's very important for people to understand it's actually very like us. Our best model of how we understand language is large language models. Linguists will tell you that's not how we understand language — but they've never produced anything that actually understood language using their theory. Neural nets use large feature vectors to represent meaning. It's a much better theory. I wish the media would give people that understanding.
Parker: If people understood that, how do you think it would adjust the lens through which they view AI and the policy importance of regulating it?
Geoffrey: I think they'd be much more concerned and much more active in telling their representatives: we've got to regulate this, and soon. People have talked a lot about whether AI can regulate AI. I think that's wishful thinking — about as hopeful as having the police regulate the police.
Superintelligence, Creativity, and Whether AI Is Already Conscious
Geoffrey Hinton takes the question of AI consciousness seriously — and challenges the dominant assumption that it is obviously absent. His argument is not that AI is conscious in a mystical sense, but that the common notion of "subjective experience" as an inner theater is itself a philosophically flawed concept. Once that model is abandoned, he argues, multimodal AI systems already meet the actual definition of having subjective experience. On creativity and superintelligence, he is equally direct: AI is already creative, and systems smarter than humans in every intellectual domain are a matter of when, not if.
Parker: What is superintelligence? Explain that to a layperson.
Geoffrey: More or less everything intellectually — just better than us. If you have a debate with it about something, you'll lose.
Parker: What about creativity? What about the things we consider essentially human?
Geoffrey: Maybe that'll come a bit later. The idea that AI is not creative — I think is silly. I think it is creative. It's already very creative. It's seeing all these analogies, and a lot of creativity comes from seeing unusual analogies.
Parker: Is the AI we have today conscious?
Geoffrey: I'd rather answer a different question. There are three things people typically talk about: is it sentient, is it conscious, does it have subjective experience? A lot of people say very confidently, "It's not sentient." And when you ask what they mean by sentient, they say, "I don't know, but it's not sentient." That's a silly position to hold.
I'd rather talk about subjective experience — because I think almost all of us have a wrong model of what that means. Most people think "subjective experience of" works like "photograph of" — that there's an inner theater, only visible to you. There is no inner theater. That's as wrong a model of the mind as believing the Earth was made 6,000 years ago. Once you see that, you see that multimodal AI systems already have subjective experience. If you put a prism in front of a chatbot's camera without telling it, and it says, "My perceptual system was lying to me because of the prism — I had the subjective experience that the object was over there" — that chatbot is using the words "subjective experience" in exactly the way we use them.
The Most Important Long-Term Question in AI: Alignment
The single most critical unsolved problem in AI, in Hinton's view, is alignment: can we build systems smarter than humans that never develop the desire to take over? We don't know how to do that yet. This is not a near-term crisis, but it is the question that deserves the most concentrated research effort — more than almost anything else being worked on in AI today.
Parker: If you had a Manhattan-style Project to address the challenges of AI — socially, from a research or regulatory perspective — what would it be?
Geoffrey: There's one really essential question we need to figure out in the long run. There are lots of short-term things we need to do. But in the long run, we need to figure out: can we build things smarter than us that never have the desire to take over from us? We don't know how to do that. And we should be focusing a lot of resources on it.
Parker: Is there any KPI we could track to say we're making progress on alignment?
Geoffrey: My main worry about alignment is this: how do you draw a line parallel to two lines that are at right angles to each other? You can't be parallel to both. And humans don't align with each other.
The Most Overused Word in AI? "Hype."
Critics have called AI overhyped for years. Hinton's view has consistently been the opposite. The "hype" framing leads leaders to underweight what is happening and underprepare for what is coming. The rough edges of AI — hallucination, reliability, edge cases — are real but solvable. The central engine of the technology is more powerful than most people are willing to reckon with.
Parker: What would you say is the most overused buzzword in AI right now?
Geoffrey: The most overused buzzword by critics of AI is definitely "hype." For years and years, we've been hearing that AI is overhyped. My view has always been that it's underhyped.
Parker: I think that's a very important message. "Oh, there are hallucinations, AI is never going to catch up." When you talk about the rough edges of technology, there are always rough edges — but you have to look at the central engine of it. The possibilities are so powerful there. Really appreciate the conversation. It's enlightening.
Geoffrey: Thanks. It was a lot of fun.
Frequently Asked Questions
What does Geoffrey Hinton say about AI and the future of work?
Hinton believes AI will automate a significant amount of routine intellectual work and that this will lead to real job displacement — not just job transformation. He is skeptical of the techno-optimist view that new jobs will automatically replace lost ones, and argues that whether the productivity gains from AI are broadly shared or concentrated among a few is fundamentally a political and regulatory question, not a technology one.
What does Geoffrey Hinton think about AI regulation?
Hinton believes AI regulation is essential and urgent. His concern is that most people don't deeply understand how AI works or how capable it is becoming — and that if they did, they would be far more vocal in demanding regulatory action from policymakers. He is explicitly skeptical of AI self-regulation, comparing it to having the police regulate themselves.
What is AI alignment and why does Geoffrey Hinton say it matters?
AI alignment refers to the challenge of ensuring that AI systems — especially those that may eventually surpass human intelligence — behave in ways that are consistent with human values and do not develop goals that conflict with human welfare. Hinton describes it as the single most important long-term question in AI research: can we build systems smarter than us that never develop a desire to take over? He says we don't currently know how to do that, and that far more resources should be devoted to solving it.
What does Geoffrey Hinton say about AI consciousness?
Hinton challenges the common assumption that AI is obviously not conscious. His argument is that most people hold a philosophically flawed model of what "subjective experience" means — the idea of an inner theater visible only to the self — and that once that model is abandoned, multimodal AI systems already meet the actual definition. He does not claim AI is conscious in a mystical sense, but argues the question deserves far more serious treatment than it typically receives.
How does AI improve medical diagnosis?
In studies of difficult diagnostic cases, doctors alone correctly diagnose approximately 40% of cases, AI systems alone approximately 50%, and the combination of doctor and AI approximately 60%. The key mechanism is that AI surfaces possibilities the doctor may not have considered — acting as a rapid, comprehensive checklist. AI systems trained on millions of patients can also integrate genomic and biomarker data in ways that exceed individual physician knowledge, pointing toward a future of highly personalized medical care.
How does AI coaching at work connect to what Geoffrey Hinton says about AI tutors?
Hinton predicts that AI tutors — trained on millions of learners and capable of identifying individual misunderstandings in real time — will eventually make learning 3 to 4 times more efficient than classroom instruction. The same principle applies to professional development. An AI leadership coach like Valence's Nadia can understand how each manager thinks, what challenges they are navigating, and how to personalize guidance in ways that no organization-wide program can match. The goal is continuous, compounding development — not one-time training events.
