In this keynote session from Valence's 2026 AI & The Workforce Summit, Wharton professor and Co-Intelligence author Ethan Mollick explores the accelerating shift from AI chatbots to full agentic AI systems capable of completing hours of knowledge work autonomously. Ethan gives live demonstrations of agentic AI in action, shares landmark research on AI's impact on task completion and workforce performance, and makes the provocative case that HR — not IT — is the function best positioned to lead organizations through this transformation. Talent leaders and CHROs will leave with a sharper framework for measuring meaningful AI adoption and a challenge to build impossible things.
Full Session Transcript
The Gap Between AI Pioneers and the Frontline Workforce
[00:00:00]
Parker Mitchell: Ethan is the author of Co-Intelligence, this exploration of what the world will be like as we have another intelligence available to us. I want to begin by asking about this concept of frontier and frontline. You have a foot in both worlds. Have you seen this type of divergence between the sense of possibility and the frontline reality?
[00:00:31]
Ethan Mollick: What's really interesting about any diverse organization is there's almost certainly some people in the organization using these tools as the most advanced users on the planet in whatever industry you're in. Because there are always curious people, people who get how AI works and start working with it. And often they're just not telling you they're doing this. The most advanced users are actually inside organizations by and large — they're just doing it secretly.
When I talk to these people, they're often very excited to tell me what they're building. And then you ask, 'Who do you go to?' And they have no idea who to talk to inside the company. Or they're afraid to say anything anyway, because there's a policy from 2023 banning AI use that requires you to go to a council that then decides on the use case. And within five to seven months, you'll get a hearing in front of the council. And then they'll end up buying a vendor product instead.
Why Advanced AI Users Hide Inside Organizations
The most advanced AI users in any industry are often already inside large organizations — but they're using AI secretly. According to Wharton professor Ethan Mollick, these employees hide their AI use because unclear 2023-era AI policies create fear of punishment, and because employees know that revealing AI-driven productivity gains may threaten their jobs or reputations. This hidden adoption creates a major blind spot for organizational leaders trying to understand their AI readiness.
The Leadership, Lab, and Crowd Framework for AI Success
[00:01:20]
Parker Mitchell: What are the implications for a leader thinking about their whole workforce when a small number of pioneers are driving AI forward, potentially in unofficial ways?
[00:01:34]
Ethan Mollick: My informal framework is: leadership, lab, and crowd. You need three things to make AI succeed. The first is leadership — and that's often what's lacking. People desperately want clear answers, like, 'How do we navigate AI?' And those answers aren't forthcoming. The AI labs are making stuff up and throwing things against the wall. Most consulting companies are just on their first projects. The technology is changing really quickly — I'd argue we went through another step-function change in the last six or eight weeks. Leadership needs to realize we're on uncertain territory, but we do need to guide this in some direction and set up incentives so people can help guide it.
[00:02:26]
Parker Mitchell: Do you have an example of leaders who have guided that in ways you think are most positive?
[00:02:33]
Ethan Mollick: Sure. Nicolai Tangen, who runs the Norwegian Sovereign Wealth Fund — the biggest pool of money on the planet — overcame his risk managers and said, 'We need to start using AI, and everyone gets access to ChatGPT Enterprise.' And every meeting he asks how people are using it. He's told me that 50% of their office is now writing code, and only 20% of them are coders. By asking and modeling use, you get big advantages.
I've also been impressed by what's going on inside Walmart's corporate offices. They're in a similar situation where they realize it's kind of a big deal, and there are a lot of interesting experiments happening throughout the organization. It's been interesting to watch the contrast with Amazon, where they block any external agents, while Walmart is thinking about how to embrace them internally. But it has to come from the leadership level or it gets stuck.
How Leadership Behavior Drives Enterprise AI Adoption
Leadership modeling is the single most important factor in enterprise AI adoption, according to Ethan Mollick. At the Norwegian Sovereign Wealth Fund, CEO Nicolai Tangen mandated universal ChatGPT Enterprise access and asks in every meeting how staff are using AI — resulting in 50% of office employees now writing code, despite only 20% being trained coders. Organizations where leadership abdicates this responsibility, delegating to consultants or councils, consistently see AI adoption stall.
Agentic AI in Action: A Live Demonstration
[00:03:31]
Parker Mitchell: You mentioned another step change in the past six or eight weeks. Those of us at Valence who are close to it feel the same thing. Can you showcase what's possible?
[00:03:51]
Ethan Mollick: I've got Claude Code running here locally, and I gave it access to a folder full of fake information about an entire company's AI transition plan. Claude Code is basically an agentic AI system that can run on your computer. I pointed the AI at this folder and said, 'Figure out any issues with the documents, any risks, and come up with a high-level strategy presentation I can give to the CEO right now about risk-proofing this.' And I just gave it that instruction. It will go off and figure out how to do this — writing files, reading files, going online, invoking research agents. Just go do the work.
[00:05:31]
The most important academic paper of last year — that I didn't write — is called GDPVal. They brought in people with an average of 14 years of experience in various industries, representing 5% of the U.S. economy, and had them create complicated tasks from their regular jobs. It took humans about seven hours on average to do this work. It took the AI 5 or 10 minutes. Then a third set of experts blindly judged the outputs — they didn't know whether they were AI or human created — and picked which they liked best.
When this came out last summer, the best model won about 48% of the time. When GPT 5.2 came out this past December, it won or tied 72% of the time. And what that means is: the way you should do work has changed pretty dramatically.
For any intellectual task that you think AI may be able to do that takes more than a few hours, assign the task to the AI, then check the work later.
Even if it doesn't work out and you end up doing it yourself — the 28% of the time AI fails — you'll still save three times as much effort and time as if you had done it yourself. That's a pretty radical change.
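The arithmetic behind that claim can be sketched as a simple expected-value calculation. This is an editorial illustration, not from the talk itself; the inputs — a roughly seven-hour human task, a ten-minute AI attempt, and a 28% failure rate — are the figures Ethan cites.

```python
# Expected-value sketch of the "assign it to the AI first" heuristic.
# Figures are illustrative, taken from the talk: a ~7-hour human task,
# a ~10-minute AI attempt, and a 28% chance the AI output is unusable.

HUMAN_MINUTES = 7 * 60   # time to do the task yourself
AI_MINUTES = 10          # time for the AI attempt (including your review)
FAIL_RATE = 0.28         # share of tasks where you redo the work yourself

def expected_minutes(human, ai, fail_rate):
    """Expected time when you try the AI first and fall back to doing
    the task yourself only when the AI attempt fails."""
    success = (1 - fail_rate) * ai
    failure = fail_rate * (ai + human)  # wasted AI attempt plus full redo
    return success + failure

exp = expected_minutes(HUMAN_MINUTES, AI_MINUTES, FAIL_RATE)
speedup = HUMAN_MINUTES / exp
print(f"expected: {exp:.1f} min vs {HUMAN_MINUTES} min ({speedup:.1f}x faster)")
# → expected: 127.6 min vs 420 min (3.3x faster)
```

Even with a 28% redo rate, the expected time drops from seven hours to just over two — roughly the threefold saving Ethan describes.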
[00:09:27]
The real change is agents suddenly became real. That's because the models got better, and the harnesses and systems agents operate in got better. And now you're actually getting real work done with AI. It used to be a chatbot model — working back and forth with AI. That increasingly is not the model. It's almost a management or organization model. And that's a big change.
What the GDPVal Study Reveals About AI's Impact on Professional Work
The GDPVal study is one of the most significant benchmarks of AI capability in professional work. Researchers gave experienced professionals — averaging 14 years of experience across industries representing 5% of the U.S. economy — complex real-world tasks that took humans roughly seven hours. AI completed the same tasks in 5 to 10 minutes. Blind expert judges preferred or rated as equal the AI output 72% of the time as of late 2025, up from 48% just months earlier, signaling a fundamental shift in how knowledge work should be approached.
From Prompt Engineering to AI Management Skills
[00:10:01]
Ethan Mollick: Prompt engineering as a task has gotten easier. All the tricks you used to teach people — telling the AI to take things step by step, bribing it, whatever else — that no longer matters. It has no effect anymore. So you don't need to do any of that, which is great. But instead, if I'm assigning the AI a seven-hour task, suddenly this looks a lot like management. The right way to assign a task to AI looks like writing a PRD, or a standard operating procedure, or a product design document. The better you are at explaining what you need, designing the kind of test you want, and assessing the work — the better the results are going to be.
What's happened now is AI systems are smart enough to prompt themselves. You can have skill files — written in plain English — and the AI can pick up a skill when it needs it. Start to imagine libraries of these things inside your organization that the AI picks up or not. In a lot of ways, your competitive advantage is going to be how good your skills are.
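To make the idea concrete: here is a hypothetical sketch of what one of these plain-English skill files might look like, loosely modeled on the SKILL.md format used by Anthropic's Agent Skills (a short metadata header followed by Markdown instructions). The skill name, folder, and checklist are invented for illustration.

```markdown
---
name: compliance-memo-review
description: How to review a draft memo against our internal compliance
  checklist. Use when asked to review or risk-check a memo.
---

# Compliance memo review

1. Read the draft and list every factual claim it makes.
2. Check each claim against the documents in ./policies/ (hypothetical path).
3. Flag anything that cites a policy last updated before 2024.
4. Summarize open risks at the top of your response, ordered by severity.
```

The point is that the file is ordinary prose: any domain expert can write one, and the agent loads it only when the task calls for it.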
Why Assigning Work to AI Looks Like Good Management
As AI agents take on longer, more complex tasks, the skill of working with AI has shifted from prompt engineering to management. Ethan Mollick explains that directing an AI agent on a multi-hour task now resembles writing a product requirements document or a standard operating procedure: the clearer the instructions, the better the output. Good managers, he argues, will likely be good at managing AI agents for the same reasons — clarity, delegation, and output assessment.
Why HR Is the New R&D in the Age of AI
[00:14:56]
Ethan Mollick: I don't think this is an IT solution. I think it's an HR solution. People don't yet trust the AIs to be smart about giving advice, about advising people individually, about helping you make decisions. So they tend to view this as an IT technology — something we have to implement and get people used to. But this is not a static endpoint, and we're going to have to change how work operates.
A lot of executives are just abdicating the responsibility to do this. They're hoping someone will tell them what to do. The ones I see that are successful mostly don't want to tell you about it. And to the extent they're willing to, it's not useful to you, because they're changing how they operate in ways that aren't a universal tool. For the people in this room, this is your moment — not just as HR, but as R&D — because we have this blockage around how we incentivize people, how we reward people, how organizations work, that we have to guide people out of. And the only way to do that is with HR leadership at the center of things.
Why HR — Not IT — Should Lead Enterprise AI Transformation
Ethan Mollick argues that AI adoption in enterprises is fundamentally an HR challenge, not an IT one. The core obstacles — unclear incentives, fear of job loss, distrust of AI, and employees hiding productivity gains — are human and organizational, not technical. He frames HR as the new R&D function: the team uniquely positioned to redesign incentive structures, model new behaviors, and guide organizations through the kind of change that has no established playbook.
Managing AI Agents and Humans: EQ, Theory of Mind, and the Skills That Transfer
[00:16:42]
Parker Mitchell: If I come in on a Monday morning, I'm going to ask the humans on my team how their weekend was. And I'm going to ask my AI agents how much work they've done in the 72 hours since we left on Friday. It feels like those are almost two different skills. Are we going to see someone able to manage both humans and agents as agents become more powerful, or is there going to be a bifurcation?
[00:17:29]
Ethan Mollick: I do not know. I suspect that the EQ skills translate over. There's a paper suggesting that being good at AI is equivalent to having a theory of mind for the AI — which is what a good manager has for people. Understanding what's frustrating it. The AI has things it's stubborn about — it expresses that in words. If you can get a sense of what it's stubborn about, where it gets hung up, where you need to give it answers — that parallels understanding where people might be stuck, what they need to know, why they're messing up. It's obviously not the same as with humans, but there is a real parallel.
There's also a new paper on AI negotiation agents that found the agents amplify the variance between people. If your agent isn't good at negotiating, you lose out a little every time — and that's a multiplier effect. The demographics of the people involved, their experience with AI, all predicted how good their agents would be. Women turned out to be better at building agents than men in this study — even though in most studies on negotiation, women do worse than men for various reasons. Women's negotiation agents outperformed men's. We don't understand all of this yet.
How AI Agents Amplify the Performance Gap Between Employees
Research on AI negotiation agents found that the agents amplify existing performance variance between employees — meaning those who are better at directing AI agents compound their advantages over time. Surprisingly, women built better-performing negotiation agents than men in the study, outperforming in a domain where women typically underperform in traditional settings. Ethan Mollick cites this as evidence that the demographic patterns of AI advantage are still poorly understood and may not follow conventional assumptions.
The ROI Trap: Why Productivity Metrics Lead AI Deployments Astray
[00:20:02]
Parker Mitchell: Many folks in this room are facing the ROI world. They're being asked for the dollars and cents on the investment. How would you help make the argument for R&D?
[00:20:35]
Ethan Mollick: My colleagues have a tracking study at Wharton, and they're finding 75% of companies report positive ROI. I don't think it's the problem it was before. My fear is that it's a trap, though — because ROI forces you into a dangerous pattern. Here's my nightmare scenario. I told the AI, 'Write a memo based on this PowerPoint, then turn that memo into a PowerPoint, then make more PowerPoints.' And it did that. I kept saying more PowerPoints, and I got 21 of them. And they're good PowerPoints — that's the problem with them.
My fear is that if you aim for ROI, if you aim for productivity gains without thinking about organizations, you are going to drown in a sea of PowerPoints. You don't want PowerPoints; you want outcomes. You want change. When ROI becomes the goal and you don't ask 'productivity for what?', I can give you infinite work slop. If that's how you judge performance, you're in trouble.
[00:22:42]
Parker Mitchell: So it automates the tasks that have been designed in an old world of work, versus the investment you need to rethink.
[00:22:50]
Ethan Mollick: Right. My most depressing story: I spoke to someone at a very large company whose job was to lead a team of 14 people producing a compliance report every week. During COVID she couldn't produce the report — but the team was kept together. After returning to the office, they started producing it again. For a year and a half, she hadn't sent the report to anyone — just curious if anyone used it. No one ever asked. Fourteen people producing a report every week that nobody read, nobody wanted, nobody knew what it was there for.
I really worry that if the goal was the PowerPoint, what part of the system are you part of? It's not just change management around how AI does stuff. It's going back and asking why we're doing the things we're doing. Is there value in that? What was the human need?
Why ROI-Focused AI Deployment Produces Output Without Impact
Deploying AI with a focus on productivity ROI risks optimizing for work outputs that were never valuable in the first place. Ethan Mollick illustrates this with a live demo showing AI generating 21 high-quality PowerPoints on demand — and argues that if 'more slide decks per minute' is your AI KPI, you are in trouble. He calls this the 'sea of PowerPoints' problem: AI amplifies existing organizational dysfunction rather than eliminating it. Real transformation requires asking not just 'how do we increase productivity?' but 'what is this productivity for?'
The Risk of Underestimating AI: Weak Models and Anchoring Too Low
[00:27:29]
Parker Mitchell: ChatGPT is at 800 million users or something, and the paid versions are a tiny fraction of that. People are being exposed to very different versions of AI. What happens if people try the less powerful versions, make a judgment about AI's capabilities, and underestimate its power?
[00:28:01]
Ethan Mollick: It makes me deeply nervous. If you want AI to fail, it will fail — because it doesn't work the first time through. You have to iterate with it. My wife does a lot of training and teaching on AI, and she was at a meeting where she was teaching the CLO of a very large organization about AI use. After two seconds, he pushed away his computer and said, 'It doesn't work,' and walked out. There's an existential crisis you have to be kind to people about.
AI was bad at math six months ago. It is not bad at math anymore. AI was bad at research. These are solved problems. Give someone a frontier model and things go very differently. You have to be paying 20 bucks or 200 bucks and using one of these systems. You can't be using Copilot and say, 'I understand how AI works.' You have to use a frontier model. That's where the advantage is.
[00:30:55]
Parker Mitchell: Anchoring just too low.
[00:30:57]
Ethan Mollick: Anchoring too low.
Why Using Outdated AI Models Leads Organizations to Underinvest
Organizations that evaluate AI using free or outdated models consistently underestimate its capabilities and anchor their ambitions too low. Ethan Mollick warns that AI models from even six months ago had significant limitations — poor math, unreliable research — that have since been solved in frontier models. He recommends leaders personally use frontier models (paid versions of leading AI tools) to calibrate their own understanding, rather than drawing conclusions from weaker systems they or their teams briefly tried and abandoned.
Looking Ahead: What AI Adoption Should Look Like by 2027
[00:31:09]
Parker Mitchell: If we're all together here in February 2027, what do you think the big surprise of 2026 would have been?
Ethan Mollick: If you want to show people the future of AI for 99.9% of people, just show them what AI does right now. Agentic work, long-duration agentic work with specialized tools for knowledge workers. We will see more specialized tools. More people will be shifting to directly using these models. And we're going to start seeing real disruption as the difference between vendors who do very specialized things where it makes sense that an outside vendor does the work — versus vendors who are just reselling you OpenAI products at a premium — becomes obvious.
[00:32:33]
Parker Mitchell: What would your hope be 12 months from now for the folks in this room for the progress they were able to make in having AI adopted across their workforce?
[00:32:37]
Ethan Mollick: Stop counting adoption as the percentage of people who've touched your AI systems. You're making a very bad mistake that way, especially if you have bad AI systems. I've talked to one very large company where senior management told me great things — and then a junior manager told me, 'Yeah, we have to do 90% of our work with AI, so all we're doing is summarizing every meeting in Copilot, because there are no other instructions on how to do it and we have to hit our metric.'
If you're not rethinking the reward system, opening up the metrics, you're in trouble. You have to turn your team into R&D people. And that means you can't just check off AI use — productivity up 17%, more slide decks than ever. These are real problems you have to deal with.
I'd like to see you have much more interesting metrics and KPIs than 'X% of people use this' or 'we've produced X more lines of code.' If you don't have innovations, if you haven't built an impossible thing — you should be building impossible things. And if you haven't jettisoned at least one thing that was critical to your organization — everybody could do this now — stop doing it.
One place where I could see the biggest implications is L&D and mentoring. Mentoring at scale now — that changes stuff. If you think products like Nadia and others could do the mentoring you need to do, but you haven't changed how your organization works when you have mentoring at scale, something is wrong. That changes how your talent pipeline should work. You should be thinking much more ambitiously.
[00:34:29]
Parker Mitchell: Build impossible things. Jettison critical things. And if you haven't gotten to either of those two — probably on a monthly basis — you're not close enough to the frontier. Thank you, Ethan.
[00:34:42]
Ethan Mollick: Thank you so much. Awesome, thank you.