On the sidelines of the World Economic Forum annual meeting in Davos, Switzerland, a panel of business leaders and experts recently explored what we've learned about AI's implications for leadership over the past year, and what comes next. The discussion, which included former IBM executive Diane Gherson, Valence founder and CEO Parker Mitchell, and Microsoft's Katy George, was hosted by Charter in partnership with Valence and The Washington Post's WP Intelligence unit.
Parker Mitchell, Founder and CEO, Valence. Creator of Nadia, an AI coach deployed across dozens of Fortune 500 companies. Previously founded Engineers Without Borders and served as deputy to the co-CEO at Bridgewater Associates.
Katy George, Corporate Vice President of Workforce Transformation, Microsoft. Previously Chief People Officer at McKinsey. PhD in business economics.
Diane Gherson, former Chief Human Resources Officer, IBM (2013 to 2020), responsible for 360,000 employees. Pioneered the use of AI across HR. Currently serves on the board of Kraft Heinz, is a senior adviser at BCG, and is a senior lecturer at Harvard Business School.
At Davos, Valence CEO Parker Mitchell joined Microsoft's Katy George and former IBM CHRO Diane Gherson for a wide-ranging conversation on the front lines of AI in the workplace. Moderated by Kevin Delaney, the panel moved beyond theoretical discussions of AGI to tackle concrete questions: How are managers using AI effectively today? What happens to expertise when AI levels the playing field? And how do you manage both humans and AI agents at the same time? The discussion draws on Microsoft's 75 internal case studies, Valence's experience deploying AI coaching across Fortune 500 companies, and Diane Gherson's pioneering HR work at IBM.
How has AI's role in the workplace evolved over the past year, and what does the trajectory look like for 2026?
Kevin Delaney: We were here a year ago and had a discussion titled "Leading When Everyone Has AI." Looking back at some of the themes we discussed, like the introduction of AI agents and AI coaches, potential bottom-up experimentation with AI tools, and the shifts in leadership approaches from more top-down executive control to more empowerment of workers, a lot of those themes have played out pretty well over the year, and the discussion has been a guide to what was coming.
So we're aiming to do the same thing and discuss today what we've learned over the last year and talk pretty specifically about some of the use cases and the management and leadership techniques that are working in places. We have Katy, for example, at Microsoft, who has just finished 75 case studies of uses at Microsoft. Diane is working with clients at BCG and others. And Parker has an AI coaching tool that's deployed across dozens of Fortune 500 companies. So there are lots of discussions about AI and AGI. That's not this discussion. This discussion is: how are leaders and managers actually using AI effectively? What do we know from the frontier of this, and how can we actually put it to work in our own enterprises?
Among other things, we're going to talk about the value of expertise in white-collar work, and there are some new developments that reverse some of the original thinking about whether having a statistics PhD is that valuable for someone to pursue anymore. And then there's a question which I haven't heard discussed in a format like this, which is: how do you best manage both AI and humans at the same time? There are different management approaches required for those things, and people are increasingly toggling between the two.
Parker, I want to start with you. Last year, one of your more provocative statements during this discussion was that 2025 would be the year that AI joined the org chart at companies, and the idea was that AI agents would start becoming parts of teams. How has that played out?
Parker Mitchell: Well, I think the arc is critical to pay attention to because you kicked it off by saying we're going to look at what is today, but I think the story of getting here is also valuable because it gives you an idea of the trajectory of where we might go. So I'll go back even a year before that. When we first launched our AI coach Nadia, the conversation was almost always: will someone actually want to talk to AI?
The skepticism was real, and we use the term from William Gibson, which is "the future is here, it's just not evenly distributed." People using our AI coach talk to it way more than they would sometimes talk to a human, certainly more than they would talk to a manager. One of the surprises that you've talked about is people show up in the morning like "Hello, Nadia," and Nadia would try to problem-solve and they're like, "No, I just wanted to say hi."
So two years ago was "will someone talk to an AI coach." Last year, our provocation was this will be the year AI will join the org chart. And this year, I think there's generally consensus that there will be a large number of people who will be managing both humans and AI agents. There's a world in which 2026 is the year that your AI coach will know you better than your manager knows you, and you might actually want to choose an AI manager over a human manager. I think the speed with which this has become normalized for a subset of the population, and the power-law distribution of that, I think that trajectory and where that might take us is a really important facet to keep in mind.
Kevin Delaney: On that issue, I just saw some Manpower research showing that 50% of people would rather talk to AI than to their manager about a complicated issue. So we're on the cusp of that at the very least.
What are the key takeaways from Microsoft's 75 case studies on AI in the workplace?
Kevin Delaney: Katy, I want to turn to you. You've been at Microsoft for a little bit. You have this really interesting position where you're trying to figure out the future of work from the vantage of Microsoft and you've just done all this research. Can you share some of the takeaways?
Katy George: Well, building on the conversation around managers, there are lots of different takeaways, but one of the things we've talked about is that I think this will be the year of moving from a focus on individual adoption, individual upskilling, and individual productivity to really shifting teams and systems of work. A lot of the best examples we're seeing, in the way we're trying to codify best practices, are really around: how do you actually help managers?
Yes, leaders have a role in being clear about what matters, what performance they're trying to drive, how we'll measure success. But managers actually make it happen. And how do you help managers really see the knowledge work that they're managing and reinvent it with AI and with their teams?
One of my favorite examples of what some of our product development teams are doing is we have something called Camp AI, which is a boot camp that a whole team goes to. It's very different from a traditional learning program where everybody does their learning at different times and it's disconnected from the actual flow of work. This is a team that goes as an intact team, redesigns how they're going to work with AI, and then shifts the way of working forever. I think this whole notion of teams and the role of the manager managing that shift for the whole team, many of whom are working with agents, is going to be one of the big changes this year.
I was talking to some of our folks in the sales force, and they talked about how previously they'd really been monitoring usage: how many people are using Copilot and the sales agent and all the different sales tools that we have that are now AI-enabled each day. And they said we no longer do that because we all know revenue per sales rep goes way up with AI usage, so now we just look at people's performance. Which I think is really an example of the maturation of what we're seeing.
Kevin Delaney: That's really interesting. I saw Brad Smith from Microsoft earlier and he was talking about the two categories of AI uses. One is business processes, so you bring it in and you redo the claims backend using AI. And the second one is individual usage and application experimentation. What you described is maybe another category, a third category, which is not individual, not the backend stuff, it's how the team is working together.
Katy George: I think it's bringing those two together. So definitely when we talk to customers about the way that AI will create value, we talk about increasing everybody's individual productivity and performance and enabling and empowering them to do more than they could have, but also changing business processes, also redesigning the customer interface, and also really bending the curve on innovation.
I think what I was describing is, even the examples where we're talking about individuals leveraging tools like the sales force, for example, what we're seeing increasingly is: how do you ensure that you don't just have a bell-shaped curve of some people using the tools and becoming 300% more effective and some people 1%? How do you actually create more standards around what are the best practices and move everybody to those? Whether that's changing a core business process or just enabling each individual, it's moving people as a team, as a whole organization, to the best practice.
In a lot of our organizations, we have citizen developer teams that are structured so that anybody can come up with new ideas and innovations. But then those are celebrated and scaled to everyone. So again, it's moving from an individual focus to a team or an organizational focus and capability.
How does collective AI adoption reduce wasted knowledge work?
Parker Mitchell: This team focus I think is critical. Olivier Blum, the CEO of Schneider Electric, was also on this panel. They've deployed Nadia first as a pilot, now to all their 11,000 to 12,000 managers. He talked about how, when they did research, 30% of knowledge work was basically wasted work. So the commitment he and I were chatting about was: my goal at the end of the year is that, however you measure it, that 30% goes down to 20%, and then we'll bring it down to 10%.
That requires collective use. Our DNA was actually around teams. Prasad Setty, the author of Project Oxygen and Project Aristotle, has been an adviser for a while. So how do you help people collaborate? You talk about an intact team, which is wonderful, but I don't think many people would truly say they have a fully intact team. It's a matrix. You can collaborate with so many people. If you can have AI not just coaching you but working across the interstitial spaces between people and between teams, reducing friction, identifying inefficiencies, the opportunity there is really powerful.
And then one quick thing: we talk about the bell distribution, the normal curve. Power law is I think the thing that we have to be paying attention to. A small number of people are getting an enormous amount of benefit and there's a long tail. Averages conceal that. We shouldn't be paying any attention to averages in this early adoption phase. We should find the outliers, understand the outliers, and then expand the outliers and scale their practices.
Kevin Delaney: I think it's really critical that we don't pay attention to averages just yet. And we're going to come back to that. Parker, what you mentioned about taking the 30% to 20% is, I think, made possible by something exciting, which is the new science of white-collar work. Katy, I know, is among the people who have done work on lean manufacturing and things like that, and that kind of rigor is now more possible with white-collar work.
How does this picture match up with what Diane Gherson is seeing in her board and advisory work?
Kevin Delaney: Diane, I want to pull you in. How does what Katy and Parker have been talking about match up with your board and advising work?
Diane Gherson: I guess when I think about this conversation, we're all in different phases. There's the tool, which is the adoption phase of "please use the tool." And I know companies that are measuring how often and how many people are using it. And then there's the whole process, which is what Katy was referring to, the whole workflow, and that raises it to a whole different level. Certainly at IBM, what we did was we said we've got to decide what we're not going to do first and just eliminate it before we start talking about using AI, and we're going to simplify it before we use AI. So you're looking end to end, which usually involves more than one function, all the way out to the end product.
Kevin Delaney: When you said eliminate, I've heard some great examples from IBM where there were corporate barnacles and processes, and they had special categories in HR for people who were taking leaves to compete in the Olympics, and they realized that they could just get rid of some of that stuff.
Diane Gherson: There was plenty of that for sure. But I think also there was just overlap between different functions, because different functions used to have their own databases. Now we're all in a data lake, we're all drawing on it, we're all using AI. You can take away those overlaps. So that, I think, is a valid use for rethinking design.
And then there's the third phase which we're almost in, and I've actually seen some companies that are already in it, and that is where the whole process is run by AI and the humans are the overseers.
I stand back from it all and I go, I studied this once, and this feels a lot to me like the moment when electricity was invented. We took people out of the guilds and into the factories and productivity didn't go up. What happened was the individual workers worked the same way they used to work. And the factory owners who had put in all this investment were getting a little worried because the dividend was going to the worker. They weren't working as long hours as they had when they were in the masters' homes.
And they didn't have a methodology. HR didn't exist then, and sadly, there was this industrial engineer called Frederick Winslow Taylor who came along and said, "I have a way to do this. It's a time-and-motion study." And what he said was, "Look, we just have to standardize everything and we're going to take the team that made the whole shoe, all the way from cutting the leather to sewing the sole on, and we're going to break it up into microtasks. And instead of organizing around themselves and having them take breaks when it makes sense, they're going to be organized around the machines."
And we took away their sense of pride, we took away their sense of agency, we took away their sense of autonomy, but boy, we were productive.
And I'm very worried that we don't have a methodology. We have the Lego blocks. We know what tasks are going on in your company, which are the ones that can be augmented, which are the ones that can be replaced, and which are the ones that can't. And we know what skills people have. That's great. So we have the Lego pieces. How do we put them back together again? And the thing that bothers me is that we're sleepwalking our way into a Frederick Winslow Taylor moment. I'm seeing it.
I'm seeing "oh yeah, we're creating new jobs." What are the new jobs again? Prompt engineer. Okay, so I went to journalism school to write beautiful columns. No, I'm a prompt engineer for a machine to write the article and I'll edit it. That's probably not why you went to journalism school.
Katy George: I love the analogy, but of course Frederick Taylor wasn't the end of manufacturing. When Toyota invented the production system, one of the things they did was to demonstrate how all of the workers had to understand their full connection to the customer and really understand the end to end. And although they were doing standard work, even more standard than in the best Ford and GM assembly lines, they were also contributing to continually improving the way things worked.
I think there's an analogy there for what we're seeing in AI, which is this notion of citizen developers not just improving against an error rate, like going from 30% of knowledge work wasted to 20% to 19%, which is the Toyota way of getting to zero defects and zero non-value-add, but actually experimenting to create more value.
We've actually been experimenting with this. If you think about the types of waste that lean identifies in manufacturing work, what AI does is it gets rid of all those same wastes, but it also flips the equation on its head by augmenting. What's wrong with the "break down the tasks and automate 30%" approach is that AI is adding all these new tasks in.
For example, our auditors, they certainly are more productive with AI, but what's far more interesting is that we're doing proactive risk identification in a way that we never could have before. The folks who manage Azure cloud vulnerabilities are now using AI to identify and manage vulnerabilities in a way that humans couldn't do. The sales reps who are preparing for a customer meeting, it's not just that we've eliminated tasks they used to do because we've automated. We're now adding all sorts of new ways of getting insight, ways of thinking about what the next ask is going to be, ways of role-modeling and practicing the call. We have an agent that is actually Judson Althoff's voice so that they can ask Judson to get advice about how to think about what they're doing.
So you think about what the job of preparing a sales call used to be, and it's now got all sorts of new things. I did a lot of work on Industry 4.0, which takes lean to the whole next level. There's a whole question about whether digital manufacturing will replace lean. It actually builds on it. And when you go to the companies that were most advanced in Industry 4.0, I remember Foxconn saying, "We are making every single operator into a technician. We're making every technician into an engineer. We're making every engineer into a scientist."
Diane Gherson: That's exactly where I'm seeing AI show up. The best examples of AI implementation are expert humans who are guiding and driving and quality-controlling, who know how to ask the questions and who know what is value. And I think that will continue. What we now need to do, to your point, is not dumb down human jobs into the Ford assembly line, but upgrade human jobs so that much earlier in your career, and much more broadly, you get to be the designer who designs the really interesting value-add.
You need to have a vision that you share with your employees so it's not so scary for them. If it is "operate at the top of your license, not do all the menial things that you used to have to do," then that's okay. That's why I went to nursing school or whatever. I get to be an auditor that really does exciting things as opposed to the boring stuff. That's great if that's part of our message. But often it isn't. It's like we're just looking for productivity.
What I've seen is these end-to-end pieces of work that are all done by AI, even with an AI orchestrator, and they're just looking on. Unfortunately, we've seen too many earnings calls where CEOs announce that they're going to cut their workforce and stop hiring because they have AI. That is not a vision. That's a horrible message to employees.
Katy George: I won't claim that Microsoft has figured all of that out, just to be clear. But I do really love this notion that the future of work is not going to be decided by an AI model. It's going to be decided by leaders. And we have choices in that. It's very clear that we can create tremendous value from AI in a way that also makes the workforce better for humans. That is already clear. So leaning into that and being explicit about it, I love your charge really to all of us, to CHROs and CEOs, about what that looks like.
Diane Gherson: I work with a lot of small AI startups, and I'll introduce them to companies and HR's never there. They go and talk with the CTO. They bring in their pilots. And they're doing really good work, but HR is not on the scene.
How are managers handling the challenge of managing both AI agents and human employees at the same time?
Kevin Delaney: Parker, one of the things we talked about at the beginning, and it relates to what we've just been discussing, is that you have managers who are increasingly interfacing with AI. They might actually in some cases be messaging with an AI agent in Slack or Teams. I've talked to companies where a lot of the coding tasks are now handled by messaging an agent that will go off and do it. And then they're also managing humans. You also have humans who in some cases prefer to interact with an AI rather than their manager. What's the best practice here? How are managers containing this all in their heads?
Parker Mitchell: There's a lot to unpack there. I also want to comment on where this might go, and the framework that has grounded me as I think about systems is the simple, complicated, complex, chaotic framework. I think it's called the Cynefin framework. We have lived in a world where it is generally complicated, and we've built muscle memory around: if we understand enough, we can control enough, and we can get to an outcome that we want.
Many people would say that we are now on the edge of complexity and chaos, and the small perturbations are going to lead to unexpected outcomes. So we might have, for example, a great principle, and then three other companies in Seattle might do something different and it might have huge ramifications on the workforce. It is so hard to predict what exactly will happen. I think unlearning this belief, the muscle memory of how to operate in a complicated system where we have more control, and how to reorient to a complex system, I think is a crucial transition for leaders.
I'm humble about not knowing the future. We've had a chance to interview Geoffrey Hinton, who's an adviser, and he said, "I am the closest of anyone to the speed at which AI has improved," and he said, "It shocks me. I never predicted that it would go this fast." And so I think it's just the humility to say we have no idea what the new models are going to bring. We have no idea how the workforce is going to adapt. And we just have to be constantly sensing and trying to make sure that we're paying attention to it.
Kevin Delaney: The other framework which I love is adaptive leadership. Heifetz is not in the models that much because Heifetz wrote in a pre-web era. You'll never see Peter Drucker quoted even though he's still one of the best thinkers on this. It's interesting that some of the most useful thinking comes from there. I interviewed Ron Heifetz recently, and he hasn't really thought about AI, but his work is very applicable.
To make the question more specific: with AI, as we know from working in these tools, you have to be pretty specific and be a micromanager and say "I want this and you're going to do it this way and you're going to take this persona." And then that's the opposite of being a good manager of an actual human, where you want to have emotional intelligence and give people autonomy.
Parker Mitchell: I joke that AI should actually be called "another intelligence" because it is another type of intelligence that we don't quite know how to operate. My best friend described it this way. He said, "I'm working with AI, and I don't know if I'm going to get the 246 toothpicks or 'I can't cross the road.'" AI will do magical things sometimes and utterly fail at something basic the other time. Learning how to operate with this intelligence that's not quite like a human but certainly not a tool, that is a skill set.
Having an AI system, a bot, an agent, whatever you want to call it, that tries to help you operate in that way means recognizing imperfection, recognizing the stochastic nature, that you might say something and get a different answer, and learning how to work with that. People are going to be in relationship with this intelligence, because that's the only way we know how to be with intelligence. People talk about anthropomorphizing, but that is happening.
Diane Gherson: I think that's dangerous.
Parker Mitchell: It might be dangerous, but it is happening anyway. And if it's dangerous, how do you handle it?
Diane Gherson: Humans have the ability to sense in the physical world. There's so much that we can be effective with if we can see someone's presence, see their posture, see how they're breathing. There's a connection that you build that's even, am I right, David, chemical between people. So that's what makes us human. And to run a business it's really important to be able to understand people, because collaboration, negotiation, all of those things are the hardest things to get done inside of a company. AI is not going to make that happen.
So to be able to say to your employees, "You've got something AI does not have and we want to make it even better." I know Google's done a very good job at this recently, saying these are the skills that we want you to grow. They're human skills and they're really important. Critical thinking, collaboration.
Katy George: Actually, we're finding in our analysis that those skills are showing up more and more in job applications, and even more in job descriptions.
What happens to the value of expertise when AI can level the playing field for less experienced workers?
Kevin Delaney: I want to quickly touch on this point of expertise. Katy, maybe you could kick it off, because some of the early research about AI application in workplaces was the call center research that Erik Brynjolfsson and other people were involved in. And what it found was that the less experienced workers benefited more from AI tools in a call center. They could handle more calls more effectively. And actually, the most experienced workers saw a drop in productivity from using AI. Part of it seemed to be that it was just a distraction. They knew what the answer was and they had this thing talking at them.
So one of the takeaways was: what happens to expertise? If you can have the people who don't have the expertise who can be more productive, maybe people who have special PhDs in statistics or incredibly specific tax code knowledge, maybe that's not valuable anymore because AI can supply that. Among the things you've looked at in your research, that challenges at least part of that, right?
Katy George: I think we're going to see different paradigms or different models emerging in different types of work for different goals. So certainly we see other examples like the call center one where people with less experience, less skill can operate at higher skill. I was actually talking to someone here from one of the big fragrance companies, and they can actually train their young noses to create perfumes faster, but they can't figure out how to take the top noses to the next level. So there's certainly examples like that.
But we also see that some of the less expert folks are the ones who are creating a lot of slop, lots of output and not necessarily higher quality or meeting the customer need. And so what we're seeing emerge more often is what I describe as a T-shaped human role, where people still need disciplinary expertise, but they're now able to connect horizontally much more. And teams are becoming more fluid and we all have to be much more experimental.
I'll give you one example that I think is a great example of this. We have a team that does very complex prototyping for our customers. So they come in, they're designers, software engineers, data scientists, and they help customers develop a really complicated engineering prototype. This is not off-the-shelf Copilot Studio.
They have leaned into their disciplinary lanes. Each of the groups has figured out how to use AI to create modular ways of speeding up and improving their part of the project. And then what's really interesting is they now do prototypes in a week that used to take them three to four months, and they're better prototypes. So you can imagine the demand for this group is higher. It's not like "oh, let's fire these people." Now people actually want more of it.
But what's really interesting is what they're most excited about, if you talk to them, they're almost zealots about how exciting it is to work in this new environment. They're creating much more horizontal connection. One of the reasons they're going faster is they've made each of their disciplinary areas faster. They are absolutely staying in their lanes. But because they can see across better and they all can participate in a more fluid way, and because they freed up more time, they're creating more human collaboration to go much faster and to work differently together. They now call themselves "Firefly teams." This notion of agile teams coming together in a really collaborative way leveraging AI, leveraging agents, but still maintaining real human expertise. Right now, I see the value of human judgment going up, not down.
Diane Gherson: There's another example. I'm thinking about the call center. I'm looking at Zapier, where they thought that they had low customer service NPS scores and they needed to improve them, but they also needed to scale. So hiring more customer service reps wasn't going to work.
So they did the study, and what they realized was that the kinds of questions they were stumbling over were actually best answered by an engineer. So they were better off cleaning up the engineers' jobs so that an engineer on call shows up when a certain kind of question gets asked. Well, you think the customer is pretty happy about that. They get to talk to the person who designed it.
So instead of this concept of very siloed groups, AI actually enables experts to show up on the moment and use their expertise. You don't have to keep replicating it.
The other thing I was going to say is BCG and Harvard Business School did a lot of studies exactly on this issue, finding that the lower performers did better, the average performers did a bit better, but the higher performers didn't improve at all. This was across multiple groups.
You have to ask: what are you going to do about performance management then? You're just raising the average. AI is helping them. They've got a coach. So what are you measuring anyway?
Katy George: I'm from McKinsey originally. One of the things they told me this week, which I had not seen when I was there: they now ask applicants to use AI during their case study interviews. Because what they care about is who's really best able to use AI to get to great answers, not who could do it from first principles without a textbook.
I do think that BCG study is super interesting and it's similar to the call center one. I think what we're going to see is that as people are getting better at AI, the top performers are reinventing. And that also moves from an individual productivity thing to how you make system change. What we care about is that enterprise-wide core processes somehow deliver much better. I'm seeing that in some of the case examples, where the team members who have high judgment, high perspective, and experience play a very important role. By the way, so do the early careers who are bringing the hunger and the drive for using AI. So it's pairing up. We're talking to lots of companies about what reverse apprenticeship looks like and how you bring early-career people along in a different way.
Parker Mitchell: Can I touch on that apprenticeship question? Kevin, you remember the very first piece of research that we co-created or co-examined together was around: could an AI coach help with the apprenticeship model? Especially this idea of cognitive apprenticeship and the idea of setting a goal that is slightly outside of your capability but is achievable. There's an emotional management side of things.
When we were building the coach, this idea of the leadership pipeline was a core idea, which is: you actually have to shift your values. You've been an individual performer, you've been rewarded for these sets of things, and then suddenly you're managing a team. You have to do other things, and you want to, and you try, and then pressure hits, and you miss your targets, and you do what most people do, which is go back to what they're naturally good at: they go in and micromanage.
How do you shift? You're shifting values. You're doing cognitive apprenticeship, and an AI coach could help you with that. That's what we discovered, and it is crucial to do especially as the whole processes get reinvented.
And I'm so wary of those studies. Because if you took the conclusion of that study, you'd go "oh, AI doesn't help top performers." I would say that's because they were measured in large companies where the top performers had no flexibility. And maybe the best ones left to go to AI-native startups, because our marketing department is probably delivering at a 15-person level with three hyper-AI users. It is impossible to tell you how well they integrate AI into everything. So I think we have to look at a range of things and say, "Hey, how might this play?"
How do we design AI-augmented work that keeps humans cognitively engaged?
Katy George: The BCG study was a controlled experiment, which was: give a specific consulting task to different consultants and then grade them on the quality of their work and look at how that changed from before and after AI. Which is really interesting. So the top performers already did good quality work, etc. And that's what they found. But to your point, I think if we look at a consulting team and many other work teams, what we're now seeing is a complete reinvention of what that output is to a different level. And that wouldn't have been something that you would capture in that experiment.
Diane Gherson: But something that came out after, because they did follow-up work, was that they found that people were mesmerized by the output because it looked so good. They didn't actually apply critical thinking to it. The high performers did about as well, but they were actually pulling back and not being as critical as they should have been of what came out, which could have enabled them to be faster. There's what people call "brain rot," where people used only 35% of the brainpower they would normally have used in the work they were doing.
I think, by the way, you're talking about how do we purposely design work of the future. Purposely designing cognitive engagement, I think, is going to be so important. Designing not just for productivity but also engagement and learning is going to be critical. And that has to be a design principle. If it fails, we're not going to do that. It needs to be a litmus test.
Parker Mitchell: There was a study quoted yesterday that I thought was fascinating, where they gave people AI that was supposed to behave in different ways and asked them to perform tasks. The best results came from a mostly Socratic AI coach, basically asking questions. Not giving the answer, asking questions. That is the cognitive engagement that is required for this. And with every question, there's more drop-off, because we are humans and we are lazy and you want the answer.
So there's a real threading of the needle: not just give the answer, which will just be workslop because you're just looking at it like "I guess that looks good," but not just Socratic either, because people are going to get tired and they're going to take the shortcut. And you could argue that the best managers are actually the ones who ask questions of their workers as opposed to telling them.
Audience Q&A
Dr. Joel Myers: I started the company 64 years ago and we keep reinventing ourselves. There are different types of work. The more rails there are and the less creativity involved, like in legal work, the more AI can and will be able to do. Customer service, I think, follows pretty closely behind.
Then I think about AccuWeather and weather forecasting. We really used AI starting in the '80s by bringing in all the models and all the data that's available in the world, more than anybody else. But the human role is so important, because AI can get to a certain point of accuracy, but take tornadoes: the government provides 8 minutes of advance notice on average. We provide 16 on average because of the expertise of our meteorologists in being able to interpret their experience, years and decades of that work.
Contrast that with medicine, which is way behind where we are in weather forecasting. You now can put all your symptoms in and get an answer based on millions of cases. You go to your doctor, who's had a hundred or a thousand cases. They may not do as well as the answer you're going to get. But if you go in with all the results from AI and the doctor has the experience, they'll be able to do much better. So it depends on the field and the experts, and how much creativity and innovation and experience plays into it.
Kevin Delaney: Do you think we'll end up with everyone with an agent of their own? Like back to your concept of that doctor or your meteorologists. Can you train your own agent?
Diane Gherson: I have students who've done that. And they're like, "I'm taking my agent, I've spent three years training it, and I'm taking it to my new job and every job after that because it's embedded all my thinking and all my learning." Is that, do you think, something that would work in weather forecasting?
Dr. Joel Myers: Not for a while, because it's not only that we have teams that work collaboratively, it's the interaction. Nothing's certain. When you're making a forecast, there's uncertainty. It's not based on what's happened. You've got to make a forecast, and how you communicate that is the creativity. For example, a snowstorm in Eastern Virginia that had people in their cars for 24 to 36 hours a few years ago. Everybody predicted snow and tried to be a little more accurate. We headlined "chaos on the highways." We told our business clients, "Don't go to Eastern Virginia for the next 48 hours because you'll get stuck." It's going to be a while before AI has that kind of creativity. It's a combination of science, judgment, and communications.
Kiana: A couple of different things, but maybe it all centers around this sense of humility. I'm wondering how we build that up in organizations to then have the resilience to understand what we do when we say, "All of our people start three levels higher than they did before and that is the new model."
But also, when you talk about that horizontal work, my concern around examples of call center work and what that means is: who talked to the call center people about what they would build if they had the support to build what they are identifying as the challenges, as opposed to what the research and the experts are saying are the challenges?
In our first hackathon, to cut to the chase, the people that won were our call center workers. And that is something that we as a business would not have picked as the top priority. But when we did that, our CSAT scores went up, and they've now gone out and won awards for AI advancement. That's a call center team. I just get nervous about how we have these conversations about imposing on people how we believe it should work best for them without bringing them into the conversation.
Katy George: Number one principle: the people who do the work have to redesign the work. So much of knowledge work is tacit anyway. Nobody can actually redesign it for you. You have to be part of it. There's no way you can do it. That's absolutely essential to all of our case studies as well.
Diane Gherson: In the whole history of bringing technology into workplaces, there are so many examples of failure if you don't involve the workers. Including in the Taylorized factories when they tried to automate. Total failure. Toyota involving employees every single day in improving. Total success.
David Rock: My institute has been making organizations better for humans through science for about 25 years. Technology is very unpredictable, you're absolutely right, but humans are actually quite predictable, particularly as they relate to new tech. So no surprise about the gap between AI fluency and middle users. But one of the interesting things is those middle users who are a little ambivalent are actually becoming worse performers, not better. It's not just the critical thinking. It's not just the atrophy. There's all sorts of other effects that we're seeing.
The interesting question that we're studying next is: everyone says, don't offload, partner. And that's what I hear at this whole event this whole week, is let's not just offload, let's partner well. So we're actually thinking about the principles of partnering from a cognitive perspective.
Just to give you one quick example: the experience of having an aha moment, of having an insight moment, is critical for human development, for human motivation, for human learning, everything. And so we want AI to accelerate, deepen, and expand those insights, not rob people of them. That's a partnering principle.
Parker Mitchell: Can I comment really quickly on this idea of adoption? We have one customer that's hyper data-focused, and they have Copilot and they have Nadia. So they have satisfaction scores, NPS scores. When they looked at the distribution, let's say both tools were at 80, or 8 out of 10, the men scored Copilot 10% higher on average compared to the women, and the women scored Nadia 10% higher on satisfaction. So two tools with similar generalized capabilities, but the satisfaction scores showed a visible gender difference. It was just fascinating to this person who's data-oriented, like "why is this?" So let's dig into it more. But just the unpredictability. I would never have predicted that. And the importance of unpacking that in the spirit of humility.
Anna Kich: We just published, as Kevin knows, a study of 300,000 people that we followed over the last five years. Two things building on what you said: I think AI is developing faster than we as humans, both as individuals and as organizations, can keep up with. And I think that's causing a lot of challenges. Everyone's going to have to show ROI this year, and we'll see how it goes.
But fundamentally, where we're seeing a lot of gaps develop is between what leaders think and what employees feel, whether that's on trust or to what extent they're communicating in an emotionally intelligent way and to what extent they're connecting and whether their employees are following along the vision. What are you doing individually as leaders, or what have you seen other leaders do, that's going to help us bridge that gap? Because those gaps, in a world where we're all trying to show ROI, become fundamental execution gaps.
Katy George: It's interesting. You just prompted another gap that we're seeing that I think is related. Just like we saw with the consumer versus industrial internet, consumer usage and tooling got so much better so much faster. All of us are using Copilot or ChatGPT or whatever in our personal lives and doing all sorts of cool stuff. Then you get to work and you can't make it work to make your work better. I think that's also contributing to this notion of gap.
I think leaders play an incredibly important role in role-modeling. One of the things we talk about is: we can't future-proof your job because everybody's job is going to change, all of our jobs are changing. But we'll help you future-proof your career. And really bringing people into the redesign of their own work changes mindsets and gets people excited far more than doing something to them, which doesn't work anyway.
Diane Gherson: I think there are some really important principles there. Involve people: if you get to participate in the redesign of the work, that gives you some sense of agency. We're committed to your continuing to grow, so we're going to give you upskilling or whatever it is.
The other one that I think is really important that I've seen some companies do well, certainly we did this at IBM, was we said: we can't guarantee your job's going to be here. We have a turnover rate of whatever it is, 8%, in this particular job family. There's low demand for this particular set of skills that you have, both inside and outside the company. You will need to upskill. And if you're prepared to upskill, we're not going to replace the people who leave. Over time, we'll probably need fewer. But you will need to upskill. And if you don't upskill, that's on you. But we are going to give you every opportunity to upskill.
Having some kind of displacement philosophy that you're able to explain up front creates psychological safety. And look, some people left. They're like, "I don't want to upskill. I'll just retire." But at least they knew what the stakes were at the beginning.
The last thing I would say is it does have so much to do with the leadership role-modeling, for the leadership to say "I'm learning too. My job's changing." One of the things we did was everyone had to learn AI, and top management were measured on our scores on AI Academy. And I would stand up in front of my team and say, "I failed chapter three. I've tried it three times. I'm at it again, but I'm really frustrated." But to be able to share that we're all doing this because it's all new for all of us. We're not just doing it to you.
What single thought should leaders focus on for the coming year?
Kevin Delaney: We just have a minute left and I'd love each of you just briefly to share a single thought on what folks could focus on for the coming year.
Parker Mitchell: I'm going to build off this question of trust because I think that's going to be the defining challenge of 2026. If there are tipping points and workers, employees, mid-level leaders lose trust in the commitment of their business, their executive, to this transition, they will begin to retreat. They will begin to use shadow AI instead of the official AI.
I think leaders have to pay very close attention to this and keep their pulse on it. And I would say the right CEOs will say, "My earnings call, the audience in my earnings call is not the analysts. Actually, it's my employee base. And the number one thing I need to do is bring them along on this journey." It will probably be the hardest change management and the hardest leadership challenge they've faced. And Heifetz's principles are probably core to that.
Kevin Delaney: I've been hearing more honesty and transparency than ever from CEOs about how they're talking to their workforce. And that might not be in the CEO's DNA.
Katy George: I love what Diane brought up as well. This notion of we need a new science of knowledge work. Knowledge work, we can't watch it. We don't know how productive we each were in the last hour or how we think about quality. It's one of the reasons it's hard to measure the ROI of AI, because we don't know how to measure the quality of the knowledge work or the productivity of it. And so this notion of investing in that science, but with principles. How do we even codify principles about how we choose to redesign work?
Diane Gherson: I think at the end of the day, we all have to make a decision about whether AI is serving humans or whether humans are serving AI. And we've got to be really careful about that. That should be one of the defining principles when we implement AI.
Kevin Delaney: Great. Thank you. This was an incredible discussion. We had Diane Gherson, Parker Mitchell, and Katy George. Thank you all for joining us. We hope you come back next year for the continuation.