HR Is Now R&D
In this powerful conversation with Ethan Mollick — Professor of Entrepreneurship at Wharton and best-selling author of Co-Intelligence — one thing became clear: AI is already reshaping work, organizations, and leadership—and HR is now the R&D lab for that change.
As companies race to adapt and adopt, Ethan argues that HR leaders have an unprecedented opportunity (and responsibility) to guide the workforce through the AI transition. That requires urgency, but it also requires a clear vision for how to bring AI into the workplace. And the only way for leaders to set that vision is real, hands-on experience with frontier models and AI work coaches like Nadia.
Video Transcript
Key Points
00:00 AI's Immediate Impact on Work & Organizations
Parker Mitchell: Ethan, it's such a pleasure to welcome you today. I've been an avid reader of your book, Co-Intelligence. You've got it next to you. I've got it in front of me. And I think the topics you're raising are such important ones for our CHRO audience. So first, welcome and thank you.
Ethan Mollick: Thanks for having me. I'm thrilled to be here.
Parker: I think the first idea that I want to explore is how much change we think AI is going to bring to the work that we do and to the organizational structure that has been our construct for, you know, the past generation or two. What's your take on how much change this is going to bring, both to work and to organizations?
Ethan: So I think people tend to think of AI as like a future thing. My argument would be, and I work with lots of companies and do lots of research and talk to all the labs, that everything we need for massive changes to work is already there with the current AI models. Like, we don't need more advancement, and I think we're gonna see changes in all aspects of work. It's not gonna happen instantaneously. It'll happen over time, but it's gonna happen much quicker than people are expecting. A lot of what we might call analytical work is already going to change deeply. That means we're going to change organizational structures, how we approach management, how we approach coaching and helping and tutoring. There's just a wide variety of changes. We've never had an intelligence prosthesis before, and now we do. We can summon a form of intelligence on demand. That changes how organizations work.
01:31 Overcoming Hurdles to AI Organizational Change
Parker: And so if we're at a stage where the base technology, the foundational models, and the LLMs have the capabilities now, what are the hurdles to the bigger changes? What are the steps that are gonna need to happen for this to begin to impact the organization?
Ethan: So the issue is that, I have a sort of three-part model that no one should take too seriously about how you think about AI and organizations, which is you need leadership, lab, and crowd. And so leadership is that you actually need senior leadership to articulate a vision of what the world looks like with AI. How do they wanna transform their work? They need to be users of AI to understand enough of how to operate this. They need to get a sense of how these systems work, but they also need to think about what the reward systems for using this are. Why would people wanna show that they're using AI, you know, and how are they doing it?
Then we talk about the crowd, which is how you're getting everybody using AI systems. That could be the kinds of things that you guys are building at Valence, right, actual deployed, packaged systems, or it could be people using chatbots on their own. But we have to incentivize them to use this, because people are hiding AI use everywhere. People are turning to AI all the time as a coach, as a help with work. They're just not telling management about it, because they don't want people to know they're using AI. They're worried about the job outcomes if they use it. They're saving time, and they don't wanna give that time back to their companies. So you wanna incentivize the crowd to do stuff.
And then you finally need a lab. You need to be experimenting about what the future looks like in building new tools and bringing in nontechnical people from the crowd who are really good at AI to do transformation inside your organization.
03:05 Leadership's Role in AI Transformation
Parker: And I wanna double click on each of those areas. So on the leadership side of things: when I talk to CHROs, the memo from the CEO of Shopify, which I know you cited, is one of the things that has gotten a lot of attention. It says, you know, we just need to move from "let's explore it" to "I'm gonna mandate it." How influential do you think that memo's been? And do you think it's missing the mark in any way?
Ethan: So, if you look at that, and Duolingo was the other one, I think, you know, I think it helped show people a model. Everyone always wants a model for what to follow, which is tough in a revolution because you kinda wanna carve your own path out while following in some areas, revolutionizing others. I kinda don't like the memo. Right? I think what it does is establish urgency that we have to do something about AI, but it doesn't articulate a vision. There's no what to do about it. It's just use AI and everyone becomes more efficient, and then that, you know, changes how we do things.
I expect leadership to articulate a vision of the future. I want them to be able to show you why you, as an individual worker, would want to transform your work with AI. What does my job look like? How do I get rewarded for using AI? What happens if I use it badly? Am I punished? What does my job look like a year from now when I come into work? I think establishing urgency is great, but I'd hope for a vision as well.
04:28 AI & Employee Motivation: Beyond Efficiency
Parker: So on that individual motivation note, I think there are potentially two different paths. There's a positive one, and there's a side that might not be as great. We talked to a CHRO deploying our coach at an 80,000-person company, and she said one of the challenges they've found as they've experimented is that people spend more time verifying output than they do creating something. It's much more efficient, but it's not as motivating. And I heard a similar story about researchers in drug labs at pharma companies. They don't wanna just review the output of a hundred different ideas that AI will generate. Their value, their identity, is in coming up with their own ideas.
Ethan: So I actually wanna push back a little, because we actually don't know that much about the motivation piece. There's a bunch of early studies on radiologists and other people using AI, but they were using old-school machine learning forms of AI. Right? It's really important: even though the name is the same, they're very different. That created algorithmic aversion, because what would happen is the AI would just give you an answer to a question, and you'd feel, as a doctor, that it was overriding you.
Interestingly, the other study, which was on scientists using AI, turned out to be fake. There's a big scandal about this. It's the MIT study, and that's the only one we have of high-end AI use showing it's demotivating in the kind of way we're talking about here. That doesn't mean it isn't demotivating. Right? And I think there's reason to worry about constructing work and meaning, but we don't know a lot. What we actually find in some of our studies is the opposite of algorithmic aversion, which is that people who use AI are actually much happier, because they delegate out the stuff they don't wanna do rather than the high-end stuff.
So I think that this is where it really falls on things like the CHROs to think about how do we think about motivation in the world of AI rather than view it as like an on or off switch. There's a lot of different ways to interact with these systems that might be motivating or demotivating.
06:35 CHROs as AI R&D Leaders
Parker: I think that's such an important point. So it's really unpacking what it is that people find motivating about their work. How do we amplify that? How do we use AI to take away some of the things that are drags on their motivation? Using that framing feels like a much richer approach for a leader to articulate.
Ethan: And it fits into a much larger issue that I think really lands in the laps of CHROs overall. I think that you are the place for somebody to stand and the lever to use to change the world. Right? Where is innovation coming from? HR is R&D now, because the people in your organization are gonna figure out how to use AI. The skilling problem we're all about to hit, which is that everyone's using AI to do their work, so they're not learning anymore at a junior level, that's going to land first in the laps of CHROs. The idea that we now have this new set of capabilities for education, for coaching, that's landing first with CHROs.
So, like, I think that the leverage point for organizations is the HR function at this point. And the imagination about what to do with that is really a challenge for leaders in that space.
07:36 Leaders Have to Use AI Regularly, Not Delegate It
Parker: I love that lens and that take. So sort of, you know, the company is the R&D lab and the CHRO can be helping coordinate that. When you talk to CHROs, how many of them grasp that, and how many of them have embraced it?
Ethan: So I think what I'm finding at the C-level is that there's a very easy answer to this question about who uses it, who panics, and who feels urgency, and it's: have they used AI a lot? The number one piece of advice I give in the book that turned out to be most accurate, right, because you never know when you write these things, but it's the most important piece of advice, is you just have to use these systems. If you have not tried to use it for every business decision where you legally and ethically can, you don't know what they can do. And you need to use a frontier model. You need to be using o3, Claude 4 Opus, or Gemini 2.5. Use one of those three and use them for everything for a day and get ten hours in. Half of what I find I have to do is just show people: hey, yeah, AI can do that. And then they start to be like, oh, wow, there's a real thing here.
So what separates them is this: CHROs who delegated using AI to other people and are getting reports back, or who are just reading about it, aren't getting it. And by the way, there's nothing to be ashamed of. One of the big, baffling things that I've been thinking about a lot, and my wife, who runs the AI lab with me and does a lot on AI training, we've both been trying to figure out: why are smart people avoiding using these systems? It's a bigger kind of issue, and I think one that Valence has been thinking about in really interesting ways, which is, like, some people just gravitate toward using these things, and some people want more instructions, and they get scared, and they get nervous, and they just walk away from them.
So I think, you know, the big issue is how do we get people to actually use these things? Because once the CHRO uses it for a bunch of things, they start to get the idea pretty quickly.
Parker: One of the things that we talk about is sort of that personal epiphany moment. And I have found the people who are most likely to be vocal advocates for this have a personal epiphany moment basically once a month, if not more. That comes from personal usage with new models, and the epiphany moment from twelve months ago might not be, you know, the same one as now, but it's so important to have that.
Ethan: Yeah. I mean, I talked in the book about three sleepless nights. Like, unless you've had an existential crisis, and I really mean this, you haven't gotten AI. Because you have to have this moment of, like, oh no, why is it doing this? This is so good. Right? If you haven't had that, if you haven't started worrying, like, what does this mean for work? What does this mean for my job? What about my kids' jobs? The people working under me? The fate of the company? If you haven't had that concern, then you haven't really used these systems at all.
And one of the things I like about the conversations we've had before this is that I think you've got a deep sense of this. People need to have this kind of crisis. And I think a lot of software vendors and consultants out there try to insulate you from this moment of epiphany. Right? They're like: just use the system, it'll do this stuff for you. But what makes the AI so interesting is that to work with it, you have to have this sense of, oh, it is actually pretty smart, I do trust it to do stuff.
Parker: It is. I have had probably three sleepless nights every two weeks for the past eighteen months. That feels about right. And I couldn't agree more that it is not about saying, hey, don't worry, we've got this. It's: we're in this together, and there are not clear answers, and we have to have clearer conversations. That's one of the reasons why we're bringing the CHROs together.
Ethan: Like, use voice mode in the ChatGPT app, and just have a conversation with the AI about a problem that you're having. I talked to a Harvard quantum physicist who told me all his best ideas come from talking to AI, and I was like, is it really good at quantum physics? No, no, but it's really good at asking questions. So have a conversation with it. Give it a bunch of documents you're having issues with. Have it, you know, create the documentation for this. Here's some documentation: read this as a naive reader, give me feedback on it. Create a PowerPoint for me on this. Sometimes it can just do things you wouldn't expect. Do this analysis. Generate 400 ideas for how to solve this problem. Have that conversation. Push it to do work with you. Push back, like: I think you can do more than this. Make it better. Like, I don't think you're taking into account the fact that the industry is changing so rapidly, or you're not taking into account tariff risks. Just push it and see what happens.
I think too many people use it like Google: ask a query, get a result back. The more context the AI has, the deeper the conversation, the better the answers are gonna be.
Parker: One of the things we've always said, which you mentioned there is using voice mode, and we say go from question to situation. Describe your situation in just stream of consciousness, voice mode, and then give it feedback, say, hey, exactly as you said, this is not how I would think about it. I would think about it that way, and the results are often just extraordinary.
Ethan: You'll be very surprised. I mean, on every measure we have of the quality of these models, they're much better than the average human. Whether you look at emotional intelligence, people vastly prefer talking to the AI versus talking to doctors. Right? Reading the Mind in the Eyes, you know, theory-of-mind stuff: very good. Creativity: very good. I mean, these are really good systems.
12:29 The Urgency of AI Adoption & Literacy
Parker: So we're gonna dig into adoption at scale in a moment, but let's talk for a moment about urgency. You've been advocating a position that we very much agree with, which is the time is now, even though everything will be changing and six or twelve months from now, it'll be different. Why is it urgent to get AI tools and AI awareness and AI literacy widespread in workforces today?
Ethan: I mean, there's a lot of reasons for this. First of all, again, the change is baked in. Your job is not going to be the same in five years. Right? I mean, AI people sometimes think the world changes overnight. Even if we have superintelligent machines, and people have argued that o3, a model that's already out in ChatGPT, can do almost all human tasks if we just used it properly, it doesn't matter, because of exactly the reasons that everybody I'm talking to knows: organizations are complicated and contingent, and all kinds of things happen slowly over time.
But I would argue the change is already baked in. Right? You talk to anyone in a profession who's doing detailed work all day, and they're like, this already does a lot of the work. We just have to figure out how to make it work inside our companies. So I would argue that you have to be aware of this change, the shockwave that's already happening inside the system.
The second is there's huge advantages to getting this right. Like, there's unsolved problems, especially in HR, that we can solve. I mean, you're addressing coaching. I'm thinking a lot about tutoring and mentoring and teaching. These are things that were impossible to do at scale before. Now we can do them at scale. What does that mean? How does that change what we can do? There's crises that are already gonna be happening you have to deal with. The training crisis is already going on. Right?
This summer, you'll notice that all of your interns are using AI to do all their work, because they're not dumb. They wanna show people that they're smart, so they're gonna use AI, because AI is better than an intern. And all your middle managers are gonna stop turning to interns for help, stop having the interns learn their jobs, because AI does a better job and never complains.
So, like, you have crises that are already built into the system that have to be addressed. There's opportunities. There's crises. There's change already in the system. I cannot emphasize enough how much things are already changing. And I think the worst thing you can do is assume somebody knows the answers to all these problems. We're trying to solve some of these issues in education. You're trying to solve some of them in coaching. But I don't think we can argue that there's any one person you go to to solve AI for you. Because I talk to the AI labs all the time, and they tell me they use my Twitter feed to figure out what AI can do. Like, nobody has answers. There's no instruction manual.
So if you're waiting for someone to hand you fully built-out instructions on how to use AI, you're gonna be waiting until everybody else has already figured this out, and that's too late.
14:51 Habits of Early AI Adopters
Parker: And the people that are doing this, you know, the outliers that have adopted this mentality, what are some of their habits or rituals, or maybe how they allocate their time? Are there any patterns where you'd say, oh, this is what the people who are paying close attention are doing?
Ethan: So that's that lab portion that we talked about. You don't need to be the person doing this all the time, but you absolutely need people in your organization doing this all the time, and people from an HR perspective doing it, by the way. One of the other mistakes companies make is handing this to IT, or even worse, general counsel. No offense, general counsels in the room. It becomes a thing that gets wrapped around a technology or legal background, but this is inherently like working with people.
The best users of AI I know are good managers, good teachers. The skills that make you good at AI are not prompting skills, they're people skills. So you need to take that approach of having people experiment with it on a people-skill basis: giving it feedback, like, oh, you could do better than this. It honestly responds really well to the kind of feedback you'd give a human.
So part of this is about experimenting personally, and I think you need to be experimenting more personally than you think you are. And by the way, it'll quickly become from time sink to, like, oh, I'm gaining time back. But then even aside from that personal skill, you need people who you think are very smart, who are very good, who are inside your domain, who are also experimenting with this and showing you what they learned excitedly on a regular basis.
To me, the easiest way to find those people is in the crowd: the people who are already using AI. And by the way, your organization is riddled with AI use. All that's happening is unauthorized AI use, because we know from a representative survey that the number of people reporting they're using AI at work in America went from thirty percent to forty percent over the course of two months. Like, everyone's already using it in your organization. So your job is to destigmatize use. And there are probably people already trying to evangelize it. You've probably already had meetings with these people who are like, oh my god, AI does all this stuff. I don't know whether you've pushed them off somewhere else or whether they're working with you, but those are the people who become part of your lab.
Parker: And I'm trying to recall a stat that I believe I read in one of your posts, which is that regular use of internal GPTs sort of plateaus at twenty percent. Is that the number?
Ethan: Yeah. That's, anecdotally, twenty to thirty percent is the maximum I find from internal system use.
Parker: And so, anecdotally, if internal use is at, call it, twenty percent, but forty percent report use in a survey, that means at least twice as many people are privately using it as publicly. That's an interesting concept for a CHRO to think about: for every person I hear about using it, there's actually a second one who's not sharing it.
Ethan: Yeah. That's a good way to phrase it. Right? It's probably at least double. And then, of the rest of the people, some of them are just waiting on training, and some of them just need clear instructions. Not everybody is gonna be an innovator. They need to get ideas, either from the kinds of well-built products that we're talking about here, or else from things that come out of the lab.
17:50 Overcoming Resistance to AI Adoption
Parker: And I wanna spend a moment on people's motivations. I've sort of joked with someone that, you know, once you become, you know, senior enough in a company, your private job is almost to become a monopoly. You kind of wanna have a unique perspective, a unique network, a unique something that makes you valuable. I mean, people don't do that consciously, but I think unconsciously, they don't wanna give up everything that they are good at. If AI is a superpower for them, is there a real conflict of interest between people finding ways to make themselves way better and not necessarily wanting to share that with five colleagues down the hall?
Ethan: Absolutely. I mean, it's worse than that. Right? That's one of, like, six reasons you don't wanna share AI use. Not only do you not wanna give away your competitive advantage, but also people think you're brilliant right now, and you don't wanna show them it's because you're using AI. And then also, look, people in companies aren't dumb. They know efficiency gains translate into headcount reduction. So they don't wanna show you the higher efficiency, because there might be a reduction. Even in companies where they don't worry about that, the efficiency gains mean I'm expected to do more work than I did before. Why would I ever wanna do that? So I don't wanna show you that I'm getting efficiency gains. Right?
So there's layer upon layer of reason why people don't wanna show you why they're using AI.
Parker: And what are some of the techniques that either CHROs or leaders use to overcome those natural hesitancies?
Ethan: So there's a few things. On the most basic level, it's that clear vision. Right? Having an executive-level vision of: what does work look like with AI? Why should I feel safe using it? Having executives role model their use. Another really important reason for CHROs to use the system is that if you use it all the time for everything, other people will start to see it being used all the time, and that will let you do things differently. But then there's actual change in reward systems. I've seen some pretty crazy things. At one company I spoke to, and I don't recommend this, the CEO really realized how big this was, six or eight months after GPT-4 came out. He gave ChatGPT to everybody in the company, and then, after arguing many times that they should use it, fired everybody who didn't use it by the end of the month. And he gave a $10,000 prize at the end of every week to whoever came up with the best AI idea.
I've seen CHROs who say, before you hire anyone in the company, the team has to spend two hours trying to automate that job with AI and then rewrite the job proposal, or do the same thing when you're asking for money. I've seen people whose model is just full disclosure of AI use: if you turn in a report and you haven't told me how you used AI to do it, I'm gonna reject the report. Or everyone has to go through not just training, but training that's about hands-on use, and you have to certify at the end of every class that you built something with AI. If you think it's as big a deal as you and I think it's going to be, it's hard to overdo your push for adoption.
20:43 Co-Intelligence: AI Augmenting Human Work
Parker: I think that could be the headline: it's hard to overdo the push for adoption. You and I have talked about the importance of co-intelligence versus AI as pure automation. And obviously, AI will automate a number of tasks. How did you come up with the concept of co-intelligence, and why is it such a central piece of the work that you promote?
Ethan: So right now, and I really do have to put a caveat here, right, I wrote Co-Intelligence a year and a half ago or so. I still think it describes today's world really well, but models are getting better. There are tasks the AI does better than humans. But for the vast majority of work, the AI is a supplement to human work. We've never had a way of making people smarter on a general-purpose basis, and now we have the ability to do this.
So, for example, take a deep research report. Deep research reports are very powerful. They're really good. When I talk to lawyers about them, for example, they say one saves forty hours of work, and then it maybe takes an hour to have someone junior check the results. So now everyone has analysis on tap. How does that change your job? We know AI fills in gaps that you have.
So we just completed a study at Procter & Gamble with 776 workers, with my colleagues at MIT, Harvard, and the University of Warwick. We had them work individually or in cross-functional teams of two on real work tasks. Individuals who worked with AI performed as well as teams. Right? Absolute co-intelligence stuff. And they were, by the way, as happy as teams, to refer back to the happiness point earlier. These performance improvements are very real, but they came from working with AI: humans used AI to do work for them, but they also had enough expertise to know when the AI was weak at something or making something up.
22:29 AI's Impact on Entry-Level Roles & Apprenticeship
Parker: That level of expertise brings to mind the question I think you were touching on earlier, the idea of some of the threats. The bottom rungs of the ladder, interns and entry-level roles, are obviously, from a career perspective, the most easily replaceable with AI. And many of those entry-level roles are where people learn that taste, that judgment, that ability to assess quality. How do you see that playing out if AI can do the things that early-career people used to be focused on? And what recommendations would you give to CHROs on that issue?
Ethan: Think about how the education system works. I teach at a great business school. I teach people to be generalists. Right? I don't teach them to work for Procter & Gamble or J.P. Morgan. I teach them to do product development or to think about business analyst cases. And then they learn to be specialists by working, the same way we've taught specialists for four thousand years: apprenticeship. They work under people and do junior work; in return, mid-level people get help with the junior work, and the younger people get correction over and over again until they learn how to do something. They learn expertise, which is very hard to teach. That is all breaking right now as we talk. Because every junior person is dumb if they're not using AI to do the work. Right? Why would they not use this to do all the work that they're doing? And meanwhile, more senior people are realizing that working with people is often really annoying, especially junior folks. Most people are not trained to be good mentors. They're not trained to be good coaches, so they have to figure this out on their own. Some people are good at it, some people are bad at it, and they're stuck doing it either way.
So that training pipeline that was always implicit has broken. Right? The coaching pipeline broke this summer, or last summer. And it has to be reconstructed at the CHRO level. We have to do this deliberately. That means taking L&D seriously as a thing we have to do. That means thinking about what skills people need to learn in a deeper and more serious way.
Parker: We sponsored a report with Charter on this apprenticeship model, asking whether an AI coach like Nadia could perform some of the functions of apprenticeship: providing you a framework, giving you a challenge that meets your level, having you get feedback on how well you did, escalating that challenge, and in some cases role modeling what good looks like. It was a fascinating idea. Nadia does a bit of that, and there are opportunities for AI to solve some of the issues that AI might cause as well.
Ethan: I mean, I think so much about AI is teaching us what we don't know about humans. The old ways were good enough. Right? If you were actually designing a way to build experts, you would never design the method we have right now: we hope you get paired with a good manager who has time to do it, builds a relationship with you, and is a good coach. You wouldn't have teaching done in lecture halls the way we do. There are a lot of broken systems already that were good enough because there was no alternative.
I think AI is both poison and cure. Right? In a lot of ways, what it is, is a very bright light on organizations. And we're gonna see this everywhere, by the way. Let's take one very narrow thing, the first thing everyone does with AI in organizations: they do their performance reviews with AI. Why? Because doing performance reviews is really annoying. So the AI will write a performance review, and then they brag about this. Right? But then what was the whole point of the performance review? If the AI is doing it, how can you not go and change the performance review? So it shines a light on the issue, and in doing so, it forces you to think about the process, about what you should be doing better, about how good you are at the job. So I think this is both poison and cure.
Parker: We talk a lot about what we call the second bounce of the ball. I don't know if that's quite the right term, but the first bounce of the ball is: let's automate the systems we've put in place. But those systems are inherent compromises to our ultimate goal, shaped by the limits of the technology of their time, and performance reviews are an example we talk about.
So now if, for example, you're being coached by Nadia as a manager, you can have the performance review framework built in, and you can be nudged: hey, you haven't talked about how Ethan's done on this particular topic this past month. Let me just ask you a couple of questions and keep that in mind. It becomes an ongoing process that is woven in, versus something you do on, you know, November 20 and try to get done as quickly as possible because you need to check it off.
And so this idea of how we reinvent the talent function from first principles, given the power of AI, is an important topic. Do you have recommendations for chief talent officers on how they might begin to rethink the talent function?
Ethan: I think you're gonna see the same crisis across function after function, which is that we need to sit back and ask, why are we doing this? The point of the performance review is not to do a performance review. It is to give management an opportunity to reflect on how someone's doing, to sort people, to give people feedback that improves their work, all of those things. Maybe it does five other things inside your organization too. Right? It signals who's a good manager or not. Maybe it serves some secret point-scoring purpose that lets you win political games. We need to start exposing what these things actually do in order to rebuild them.
So as a chief talent officer, part of what you wanna do is think about: what are my goals here? Process becomes goal in a lot of cases inside organizations, and that can't be the case anymore. This is, by the way, why you can't turn entirely to external consultants. You can bring in the smartest consultants in the world, and they cannot solve this problem for you, because it's an internal problem about how your organization operates at a deep level. It requires deep expertise in the subject. And I think this can be uncomfortable. In some ways, more uncomfortable than realizing AI is really good is realizing, oh, why am I doing this? How do I think about this again? Am I doing talent development this way only because of how it's always been allocated, treating learning as a reward rather than a tool? Do we have to change that, or do we want to keep it the same? There's a really profound set of questions to ask.
28:43 Customizing AI Models to Organizational Nuances
Parker: One of the things you mentioned is that experts can't come in from the outside because you know your business best; there are unique elements of your business. Could you make that same argument about AI models? If the expression is that an AI model is a zip file of the world, they have incredible IQ, but they don't know your organization. How would you help AI models understand the nuances of an organization so they'd be even more powerful as a partner to a chief talent officer in answering those kinds of questions?
Ethan: Okay. So let's do this in layers, setting aside the deeper applications like the kind of thing you're building. The easy way to do this is just context. The AI thrives on context. The other really interesting thing we haven't even talked about, which chief talent officers and CHROs should be thinking about, is that the best instructions for AI are the manuals you use for onboarding employees. The best thing you can give an AI is an example of the best piece of work you do. You don't want the AI to read every piece of work you've done; you wanna give it, here's an example of the perfect report you should produce and why it's great. If you can train people really well, the AI thrives on exactly that training material.
So your starting point is really just thinking about what you have in hand already that you've spent all this time building for humans, and you literally paste that in. Context, instruction manuals, rubrics, examples: those are the things the AI thrives on that change it from generic to specific. And it's not just that it's a zip file, though in some ways it is, a zip file of the entire web, of everything. That latent space the AI contains, some of it is trained on your content, and certainly on your industry. It knows a lot about this. You can often tell it, write this in the style of, and pick your favorite HR executive, and it knows how to do that.
So you have to realize it has not just genericness in it, but specificity you have to invoke.
Parker: One feature we began rolling out for a couple of pilot companies, which they're very excited about, is capturing the best practices of your top performers. Nadia will actually go interview top performers: how would you handle this situation? How would you give this kind of advice? And build a bank of, exactly as you described, best practices that everyone's coach can access. That is very, very powerful, because it's not just the knowledge in the rubrics and the reports; it's the knowledge in top performers' heads about how they handle situations specific to a company. And exactly as you said, the more context an LLM can access, the richer the answer is gonna be.
So I think there are elements of that: how do you pick your best practices, curate them, and then make them available to the AI instances that people have access to?
Ethan: Yeah. I think curation ends up being a really important point in all of this. The idea is that you don't push the responsibility off to others; you wanna think through these issues yourself. Right? Curation is one of the most important things. I think people don't understand enough about how AI provides abundance. You don't need 10 ideas. Ask for a thousand ideas. Ask for 500 ideas. Seventy-five slogans for this. I like slogan 12; give me 17 variations on it. I don't like the first paragraph. You know? So curation, taste, all of that matters a lot, and at the executive level, there's a lot of that available.
So my wife is probably one of the best prompt engineers on the planet. Google uses her prompts as the gold standard to compare their new models against. But she's never coded a day in her life. She has a doctorate in education. She worked in HR and training. That's her background. And it turns out that if you're really good at theory of mind, if you're good at understanding what someone might be confused about and what they're good or bad at, if you're good at breaking down tasks into steps, if you're good at troubleshooting when something goes wrong, if you're good at thinking through the kinds of problems someone might mentally have and why they might be confused about a topic, you're going to be good at AI.
I think one thing I like about the approach that you and a few other organizations are taking is thinking deeply about one area and realizing that while the models are very smart, they need scaffolding and help to accomplish something. We don't have broad-based expertise in this yet.
33:04 A 3-Part Model for the Changing Nature of Work
Parker: I know you've talked to a lot of C-suites. What are some of the things that resonate most when you have conversations with these leaders?
Ethan: So I think people are excited by the idea that this is a way to change the nature of work and change the game. The thing that worries me most is when people start with a pure ROI perspective, which inevitably leads to the same problem people have always had: technology used to cut costs. Right? The leaders who are seeing this start to realize, wow, this changes what we can do. It changes how we serve customers. It changes what we can provide to people. It's not a one-to-one replacement for a person. It is an expansion. It lets people do the impossible.
I have sort of a three-part test that I ask people about their internal AI use. First: what thing that was important to you have you realized AI now does for everybody on the planet, so you no longer need to do it? If you don't have anything in that category, you've made a mistake; there is something you thought was valuable that's no longer valuable. What have you jettisoned? Second: what impossible thing are you doing now that you couldn't do before? And third: what are you building that doesn't work yet, but might work as AI gets better? If you don't have answers to those three questions, I don't think you're far enough along yet.
Parker: It's a fascinating future; I imagine the prognosis will change every six months. Ethan, I just wanna say thank you. This was such an energizing and invigorating conversation. I know many of the CHROs in the audience are gonna find it really thought-provoking, and thank you for taking the time.
Ethan: Great. This was a lot of fun. Thank you.