In this conversation at Valence's 2026 AI & The Workforce Summit, former IBM CHRO Diane Gherson and WSJ Leadership Institute President Alan Murray diagnose why AI adoption consistently succeeds at the product level but stalls inside organizations. Alan brings a unique vantage point: he co-hosted a year-long series of off-the-record CEO roundtables with IBM Chairman Arvind Krishna, and heard the same frustration repeated across every dinner — companies winning with AI externally, failing with it internally. Diane unpacks why, drawing on her experience leading one of the most ambitious AI-driven workforce transformations in corporate history at IBM. Together they make the case that the real barriers are people, culture, and organizational design — not technology — and map what it takes to redesign work for a world where humans and AI agents operate side by side.
Full Session Transcript
Why AI Succeeds in Products But Stalls Inside Organizations
[00:00:00]
Alan Murray: I spent a good part of last year hosting a series of six or seven dinners with very large company CEOs — Arvind Krishna was my co-host — about 12 at a time. We did one in Chicago, one in Houston, one in New York, one in London, one in D.C. Off-the-record conversations about what's working with AI and what isn't. And every dinner had kind of the same pattern. Many of the companies were having amazing success integrating AI into the products and services they took to market. But with maybe one or two exceptions, they were all very frustrated about what was going on within their own companies. They saw the huge potential to reinvent internal processes, but they weren't getting the payoff. And within 15 or 20 minutes of each of those conversations, I realized: we're not talking about technology. We're talking about people. We're talking about culture. We're talking about how you organize. What are those CEOs missing?
[00:01:15]
Diane Gherson: Let's start with psychological safety. We heard a lot about the fact that people are maybe pretending they're not using AI, but they really are — underreporting because they don't want to look like they're not smart, or because they went to school somewhere where using it was viewed as plagiarism. But the real issue is that leaders need to help people understand why they're valued. And somehow that gets lost in the discussion.
Google has done a nice job of this — saying: we value people because you can think critically, because you can collaborate. They have a long list of things that only humans can do, at least at this point. That gives people a sense of courage. And when you pair that with 'we're going to upskill you,' they feel taken care of.
[00:02:07]
Alan Murray: That is not a skill most CEOs have developed.
[00:02:17]
Diane Gherson: Right. It needs to come from the CEO — and that's critical. That's where you get the psychological safety. What we did at IBM was say: we hired you because of your curiosity. We screen for curiosity. So people think, 'Okay, I can do this learning. I can do continuous learning.' That builds psychological safety.
The second thing is trust — and that requires having principles around displacement. I'm sorry, but that's on the table. Don't pretend it's not.
[00:02:59]
Alan Murray: You can't say it's not going to happen.
Diane Gherson: There's going to be displacement. What are your principles around it? At IBM, we said: your job may go away. We have about 9% attrition — those people won't be replaced. But other jobs will go away too over time. That's why it's important to upskill, because there are higher value jobs you could be doing. It wasn't happily received, but people got it. We were honest and straight up about what we saw happening. And I think that's often missing. Companies are silent on the topic — and then employees imagine the worst.
The last thing — which always seems to be missing — is clarity around why we're doing this. Is it to get to market faster because competitors are pulling ahead? Higher customer satisfaction? Allianz, for example, went from claims reimbursement in 21 days down to 4 hours. That's the number one thing an insurance company has to do. When people understand the why, it changes everything.
Why Internal AI Adoption Fails While Product AI Succeeds
Internal AI adoption consistently underperforms relative to AI in customer-facing products and services, according to former IBM CHRO Diane Gherson. The root cause is not technology — it is people, culture, and organizational design. Employees hide their AI use for fear of judgment, leaders fail to articulate what human capabilities they value, and organizations remain silent about displacement. The result: employees imagine the worst and resist. The fix starts with psychological safety, honest communication about displacement, and clarity on the business rationale for AI adoption.
Continuous Learning and the Skills Challenge in an AI-Driven World
[00:04:20]
Alan Murray: One of the questions is: we tell people, we'll help you into this future that may require you to train for new jobs. But we don't see that future right now. People are afraid of it, and they don't know what to train for. What are the skills people are going to need to survive in this new world?
[00:04:45]
Diane Gherson: That is the sad truth. At IBM, we said 'we're going to upskill you,' and some people did get into higher-banded jobs. But two years later, those jobs were automated too. So you have to keep learning. Ongoing learning, ongoing upskilling — that is the absolute truth. It's not a one-time investment. It's a permanent operating model.
Why Continuous Upskilling — Not One-Time Training — Is the Answer to AI Displacement
Upskilling employees for an AI-driven future is not a one-time initiative — it is a permanent organizational capability. Diane Gherson, former CHRO of IBM, shares that even employees who successfully reskilled into higher-value roles at IBM found those roles automated just two years later. The implication for HR leaders is clear: organizations must build cultures and systems that support continuous, ongoing learning as a baseline operating model, not a periodic response to disruption.
Business-First AI Design: The Alternative to AI-First and People-First Thinking
[00:05:10]
Alan Murray: People think of building an AI-first company. But that's not really the big challenge, is it?
[00:05:16]
Diane Gherson: I'm really scared of that. AI-first means: if AI can do it, give the job to AI. That's optimizing for AI. There's also optimizing for people — which is also a mistake in a lot of cases because you'll fall behind your competitors. But there's something in the middle, which I'd call business-first: optimizing simultaneously for both.
Let me give you an example of where people-first makes sense. Customer service is a burnout job, especially when customers are yelling at you. One big airline I worked with was losing their top customer service people. They put in AI — and AI had the same de-escalation skills as their top people, but it didn't take the abuse personally. Taking it personally caused people to take it home and eventually quit. So that's a case where protecting people was the right call.
But when you take all your entry-level jobs and say, 'AI can do it all, why hire entry-level?' — HR people, what's the problem with that? We're responsible for the talent pipeline. And it's not just whether someone had a job. It's the intrinsic knowledge you learn as an apprentice, the social intelligence you develop by osmosis. You know who to go to for certain questions, who to appeal to if you want to influence something. Political capital. All of these intangibles — you learn them. And you need them to be a great leader.
What Is Business-First AI Design — and Why It Beats AI-First or People-First
Business-first AI design means optimizing simultaneously for human and AI contribution, rather than defaulting to one or the other. Former IBM CHRO Diane Gherson defines AI-first design — assigning every automatable task to AI — as a strategic mistake that destroys the talent pipeline and eliminates the apprenticeship-based learning that develops future leaders. People-first design can leave organizations behind competitors. The business-first approach asks: what outcome do we need, and who — human or AI — is best suited to deliver it?
Redesigning Work for a World of Humans and AI Agents
[00:07:33]
Alan Murray: When I listen to you talk about this, it becomes very clear that we have to redesign the way work is done: what are humans going to do, what do we do to maintain the pipeline, what do we do to develop high-level talent to oversee AI in the future? What does that redesign work look like, and how do you even go about it?
[00:07:57]
Diane Gherson: Work design is something HR has not done for decades. The last time I touched it, it was called sociotechnical systems design, and it was for factories.
[00:08:12]
Alan Murray: This is Frederick Winslow Taylor you're talking about.
[00:08:14]
Diane Gherson: Post that — I'm not that old. But Frederick Winslow Taylor was the last one who did it en masse. He was a systems engineer who standardized everything, every motion. He micro-designed jobs so workers just had to do one little thing over and over again instead of making an entire shoe, starting from cutting the leather.
[00:08:47]
Alan Murray: That era of work design was all about making human beings better machines because they had to be part of the production line. But we are in an age where the machines are going to take care of themselves. We don't need people to be better machines. We need people to be better people.
[00:09:08]
Diane Gherson: But look at all the jobs being created by AI right now: prompt engineers, audit trail people, data labelers — they're all serving the AI. Just like the people who sat at the assembly line twisting the knob. To some extent, we're doing the same thing. Instead of designing work around what AI does, we've got to design work around what we want people doing. That is the biggest challenge facing HR right now.
Why Work Redesign Is HR's Most Urgent — and Most Neglected — Priority
Work design — the discipline of determining who does what in an organization — has not been a serious HR function since the industrial era, according to Diane Gherson. The risk today is that organizations are defaulting to the same mistake: designing jobs around what AI does, rather than what they want people to do. Many AI-era roles (prompt engineers, data labelers, audit trail analysts) simply serve the AI, echoing the assembly-line model. The most important question for CHROs is not 'what can AI automate?' but 'what do we want humans to do, and how do we design work around that?'
Humans and AI Agents Working Side by Side: Real-World Examples
[00:09:48]
Alan Murray: One of the exceptions in my CEO roundtables — Robin Vince at Bank of New York — told me about it on the record afterwards. They now have hundreds of agents with email addresses and names, who log onto the system and work side by side with people. Sometimes people manage the agents. I assume agents sometimes manage people. What does that world look like, and how do you prepare for it?
[00:10:43]
Diane Gherson: It's happening at Kraft Heinz, which is a company here today. They've been using Nadia, but they've also been using an agent working on how to replace dyes with natural ingredients — the whole MAHA movement. At first it was going to be done by an outside firm, but the agent has done a perfectly good job of it. What they do is take the agent's outputs and then test them in the lab. All the scenarios, branching, composition testing, shelf life, appearance — all done by the agent. Then humans validate it in the lab. That's how they work together.
Another example: Athena, working in product development at a consumer company. A team member went on maternity leave, and the group decided the agent should take over that work. They were — friendly with it, I suppose. They liked it. And it wasn't impinging on their sense of agency. I think it's when it starts impinging on their sense of agency that it becomes a problem.
[00:12:09]
Alan Murray: And how do chief human resources officers oversee that?
[00:12:13]
Diane Gherson: Good question. Actually, I think that's a job Nadia should be doing.
[00:12:20]
Alan Murray: A task for Nadia.
[00:12:21]
Diane Gherson: Nadia is everywhere.
[00:12:22]
Alan Murray: Diane, fascinating conversation. Thank you for taking the time.
[00:12:26]
Diane Gherson: Thank you.