AI Coaching in 2025

Nadia, Valence’s AI Coach, is live across the Fortune 500. In this demo, get a glimpse of what the future might bring as we explore purpose-built AI that’s designed to democratize coaching for every employee.

Parker Mitchell

Founder and CEO, Valence

Key Points

Parker Mitchell: We want to briefly give folks a peek at some of the things we're really excited about with Nadia. As many of you know, we started off thinking of Nadia as an executive coach. We thought, "Hey, can we program what an executive coach does into Nadia?" And we wanted to put her out into the wild and get a sense of what people's reaction would be.

And so we talked to a bunch of coaches out there. One of the things we found interesting came from the ICF: one of the principles the ICF gives its coaches is that a coach should evoke awareness. So we built Nadia to do that, and we put her out there. And one of the things we discovered, and this might not come as a surprise to folks who have worked with a lot of managers, is that they don't, on a day-to-day basis, have a deep need to have their awareness evoked. They're pretty busy. They have a million things on their plate, and they basically wanted something that would save them time.

So we evolved from this idea of a purely reflective coach to a coach that is going to try to help them. As we've shared with you before, we're building this personalization layer: Nadia knows coaching, knows your company, knows you. And I want to spend one moment on coaching before we talk about some of the personalization that's going to be really exciting.

So the way Nadia evolved was that she quite quickly became a thought partner, a thought partner that could try to handle anything on a manager's mind. And that blank-canvas starting point was actually important, because it turned out each manager had a different challenge, honestly a different challenge every week, sometimes a different challenge at the beginning versus the end of a day. And they wanted Nadia to be able to help them with that.

So we tried to train her as an expert, a coach, but with a bunch of different superpowers. And these superpowers are growing, probably literally every two weeks or every month: we're adding a new specialization, a coaching module, a hat that Nadia can wear. Some of them are out there right now. If you tell Nadia, "Hey, I've got a presentation in front of 200 people in a couple of weeks, and I want to practice executive presence," Nadia will build a plan, a skill plan, to help you do that. We've talked about performance conversations: if you have a performance conversation coming up, Nadia can help you practice it, can help you role-play it.

One of the things we've heard back from users, one of the things they appreciate the most, is that Nadia isn't just a website they go visit, but that she will, in some cases, be quite proactive about holding them accountable to a conversation they had and a commitment they made. And some folks really appreciate that.

I think there's someone in the room who will appreciate the challenge version of Nadia. So, if you are feeling a little too smug about how you are doing, you know, there's a version of Nadia that says, “I'm gonna put on my challenge hat, and I'm gonna ask you if perhaps this is a pattern that you might have seen in the past few conversations you've had with this person or with other people. And maybe it's something you should reflect on yourself, rather than point the finger outwards.” So we have a range of different Nadia coaches. And the thing that's exciting about that is that we can then also start to tune those coaches for what companies are looking for. 

So just a couple of weeks ago, I was down at the Gartner Reimagine conference, where I had the opportunity to have a long conversation on stage with Robert Gulliver, the Chief Talent Officer of Prudential, sharing Prudential's stories. He drew a lot on sports analogies, similar to Anna; he was, I think, the first CHRO of the NFL, and he believes wholly in coaching. Prudential has been a big adopter of Nadia and has been rolling it out in a number of different ways. And he talked about the growing set of material, ideas, and approaches they were bringing into Nadia to help with everything from their end-of-year conversations, which they call 2+2 conversations, through to other parts of their talent development life cycle.

But the thing that folks were most excited about is, when we talked to users, they said three things over and over again. They wanted Nadia to learn more about them: how can they provide Nadia, as quickly as possible, with information about themselves? The second thing, which I think is really fascinating and which we've now released and rolled out, is: Can Nadia know about others on my team, not just through me? And then, finally, people have said: I know that I should gather more feedback, and I need a lightweight way to do that. Could Nadia help me with that?

So we're going to get a quick glimpse of some of these Nadia 2.0 features, which we're releasing in January and February. And, again, I just want to emphasize: every company has control over which features Nadia does or doesn't have. You can control what's turned on or off, and users get control over their own profiles. So it's very much an enterprise-first approach. But I'm going to tell you a little bit about what it will be like to have a Nadia with a profile.

So, right now, what Nadia does, in layperson's terms, at the end of a conversation, is she takes coach's notes. So she is summarizing what she's learned about me, she's making some hypotheses, she might generate a few questions, almost exactly the same way a coach would. So she's inferring pieces of information, but it still takes time to educate her. 

And so the new version of the profile is going to allow people to very, very quickly upload information to Nadia. So we have been told that people really want Nadia to know a lot about their job. So, if you're a frontline worker at Delta, you're manning a customer service desk, or if you're a knowledge worker, a content marketer at a brand agency, you're going to have a very different reality. And we can quite quickly give you the chance to bring Nadia up to speed on what that might look like.

The second thing that everyone wants their Nadia to know is: Know about my team. Know about who the people are that I work with. Know about the relationships. Know about the power imbalances. Know about the roles that people are playing. And so we've made it easy for Nadia to be able to understand that.

Now, the one thing I would say is, when we talk to people, absolutely zero of them have said that that team structure is contained within their HRIS. So they've said the org chart that we have does not at all convey that complexity. It's like Prasad said, we are forced to flatten the richness of the world that we live in on a day-to-day basis to try to put it into the old software systems. But genAI is able to maintain that. 

Nadia can also have information about your feedback, the feedback that you've gotten, we'll see that in a moment, as well as your skills and your career development aspirations. So, once you've got a fleshed-out profile, the thing people are asking about now is: I want my Nadia to know a little bit about other Nadias. And so what people will be able to do is choose: are they going to keep their profile private, or are they going to allow Nadia to take their profile and translate it into a public version?

And again, you get to quality control it, you get to see what this looks like. But imagine that your Nadia coach says, “Okay, here's what I know about Parker. I know that he's fast paced. Sometimes he can be forgetful. Sometimes he can look like he's distracted in meetings, but he's really trying to parallel process.

He likes to get information with the big picture first and the details afterwards.” And I'd be very happy if my Nadia sort of created a public profile of mine so that I could be able to allow others to figure out how to work well with me. 

And so we've made that live for us at Valence and a couple of other early pilot organizations. So if you go to the team section of your app, if team members have made their profile public, you will get a chance to be coached specifically on it. 

And it is the most interesting, the most engaging feature that we've released. People just love to know: How can I interact with this particular person a little bit differently than I might with someone else? I think 40 to 45 percent of our conversations in Nadia are either about team and team relationships or about communications. And many of those are becoming much, much richer, because if you know the two parties involved in that communication or that relationship, the coaching can be far richer.

So that's Nadia knowing about you, and Nadia knowing about others. And then the third piece that we thought was really interesting was: Could Nadia be a way to gather feedback for you? And I just really love that one-dimensionality analogy, because I think it also applies to feedback. How many people have filled out a feedback survey for a colleague in 2024? Hands up. Can't see. There's a lot of hands up. How many people enjoyed that process? Oh, I don't see that many hands up.

It's very hard to translate the richness of a person into answers on a seven-point Likert scale across 26 different questions. And so what we want to do here with Nadia is introduce very, very lightweight conversational feedback, focused on growth and development. And so what we will see here is, if I go through my profile and it's turned on, I have an option to collect feedback via a 360 review. And this is what we think the future of 360s is going to be: conversational, and no longer Likert-based.

So Nadia is asking me what I want to check in on. So I'm going to tell her that I want to get started with feedback. 

[To Nadia] “Hi, Nadia. It's the end of the year. I've been reflecting on my leadership. I want to get feedback from my team on how I'm doing.”

So, for those of you who don't know, Nadia can speak as well as text back; about half the people use text, and half use her voice. And what she's able to do is look through the past conversations I've had, look through my profile, and highlight that there are two things that have been important to me. One is about celebrating wins, and one is about bringing new people up to speed on the journey, since Valence is rapidly growing. So I think that sounds good to me.

[To Nadia] “That sounds good. Can you email my team to ask them for the feedback?”

So we've introduced the ability now, if Nadia's hooked up to your systems, to be able to directly take these tiny actions on your behalf. Either she'll write the email for you, you can cut and paste it, or she can do it directly. So we're trying to make your life a little bit easier in some of these tasks to make you more willing and more likely to do them. Trying to take the friction out of it. 

So, what you're going to get a chance to experience: I, as a crazy CEO, told my team that what I wanted was for all 200 people to take a picture of the QR code and give me feedback on their phones, and then for Nadia, in real time, to consolidate feedback from 200 people who don't know me. And then we'd get to see what my profile looks like.

And my team convinced me that was a little chaotic, and we shouldn't do it. But if you would like to experience conversational feedback, it's live. And you can have a conversation with Nadia, and I promise I won't be offended at the feedback that you give me.

But what we're going to do is we're going to bring one of my team members up on stage to see what it's like. 

Alex McMurray: Hi, Parker. I'm excited to give you some public feedback at your AI Summit. Alright, so let me make sure that I'm on the right screen. I am. Okay. 

So Nadia has sent me a note, and she's asking me to reflect on whether or not Parker is celebrating wins and how well he's conveying this to me. All right.

[To Nadia] “Hey, Nadia. I think he's doing great. I feel super appreciated in my role, and I think he knows I'm working hard. He sends Slacks and texts to say thank you. And I know he's really proud of the product team.” 

[Reading Nadia’s response] Hmm, I'm wondering if those wins are being shared with everyone, or just with me.

[To Nadia] “Hmm, that's a good point. Some of these are very much direct messages to me, and we do hear quite a bit about product. So I wonder, I think about, I wish he could celebrate the relationships that the client team is building. I think they're doing hard work and doing an amazing job.” 

[Reading Nadia’s response] So Parker is thinking about this. That's interesting. And it seems like he wants to formalize some of this. Have I seen a difference? 

[To Nadia] “Well, he's definitely positive, but now that you mention it, I think it could be more formal. There were a few all-hands where he did do a bit of celebration, but it’s started to fade. I know he's been thinking about hiring a chief of staff, and I kind of wonder if that will help this a lot.”

[Reading Nadia’s response] Do I have five minutes? Hmm.   

[To Nadia] “I would love to keep chatting, Nadia, but I need to go present at an AI summit.” 

All right. So I'm going to give this back to Parker, and we'll see what all of the feedback he collected might say to him.

Parker Mitchell: So we just wanted to try to illustrate what it's like. This is what an executive coach does. If any of you have had an executive coach that collects feedback, they know a lot about you and your intentions, and they're able to draw out the type of nuances, that multi-dimensional nuance that is going to be truly helpful for you. And we heard from Anna and others that it's the frequency of feedback that is just so important.

And so if there's a world in which we can have our AI coaches go out, gather in really lightweight ways the feedback, probably more specific feedback than what's my end-of-year 360 review, but how did I do on preparing for this particular event, or how did we do on this project that we just launched? That kind of high-feedback world we think is going to feed a lot of the learning and growth that a lot of us aspire to. 

So this is conversational AI, and obviously Nadia is able to synthesize it. I'm actually going to skip this bit in the interest of time, because what I want to do is call up a few folks who are our partners. I want to share some appreciation: we wouldn't be here if we didn't have the opportunity to have partners. Many of them have navigated their own internal processes. They've gotten, you know, a new AI-powered coach through IT reviews and AI councils. And they're getting terrific responses, and they're also very excited to share some of the experiences they've had. So we wanted to bring them up, introduce them one at a time, thank them, and have them share what they'd most like to be asked about at our cocktail hour. So Loren, can I invite you up?

Loren Blandon: Hi all, thank you! Hi, I am Loren, I'm from VML. I lead organizational development. I have some of my colleagues in the audience, and essentially I just wanna open myself up during the happy hour and even after on socials, I'm very active on LinkedIn and all over the place, to have conversations about how we're using Valence and how we see Nadia playing a part in our strategy. And really curious to hear what you all are thinking about this.

We are in the beginning phases of implementation. We did a really successful pilot and got some amazing feedback. I can testify that folks did express that Nadia was giving them a lot more psychological safety to open up about things. So that part about it being empathetic, or at least feeling empathetic, is real, and it's there. Now, we actually just kicked off this week the effort to implement on a wider scale; we're looking at it as more of a super pilot on this next round, over the next year. So we're thinking about things like: How do we deploy through the influencers, or what we sometimes like to call learn-fluencers, across the company to bring on board people who are more resistant and really get them using this in their everyday? How do we align the use of Nadia to strategic outcomes, to where we want to grow the business or shape the business? And we do have Lindsay Pattison coming up; she's our Chief People Officer at WPP. AI is very much at the forefront of where we want to head as a company, and so we really see the use of Nadia as maybe a safe, fun, enjoyable introduction to how folks can better themselves and their work through AI.

So really excited about the possibilities, really excited about our partnership with Valence, and looking forward to chatting with all of you. Thank you. 

Parker Mitchell: Thanks so much, Loren.

Next I'll invite Colleen up. Colleen leads leadership development at AGCO and has been a dear partner for six-plus months now; well, twelve months since we started kicking off the process, six months live.

Colleen Sugrue: Yeah, that's about right. So, we actually kicked off Nadia in April, and we took a broad approach to implementation: we started by offering it to 28,000 employees globally. And that was intentional, right? We wanted to go with a big bang, get some early adopters, and find out what was going to work. So it still feels like we're at the beginning of that journey. About 26% of our target population have logged in and are using it. And now we're really heading into the stage where we're thinking about how we customize it. If we have this foundation, how do we really get Nadia embedded into our leadership development? How do we get Nadia leveraging the results from our action surveys, our voices surveys, and doing action planning to help managers figure out where they can target just for their groups? And we're also launching something, actually I feel like it's going out today, around how Nadia helps all of our employees build those individual development plans, so they can decide what they're going to target and develop in 2025. So we're in that performance management period right now. 

So we have lots to learn. We're still learning, but it's been a great journey. So if you want to talk about any of that, of course, I'm here and I can give you some tips. And we also have a lot of tips around works councils and GDPR and all of that. So if you're on that journey, buckle up. But I have some advice for you there, too. Okay?

Parker Mitchell: Thanks so much, Colleen. And next I'd like to welcome Matt up. Matt has been a core partner since we began conversations, I think at SIOP last year, so six months ago. You and the team have just done an amazing job shepherding it through, with an exciting launch eight weeks ago. So, welcome. 

Matt Dreyer: Thanks, Parker. It's good to be here and to see so many faces we've been talking with about AI lately. So I'm Matt Dreyer, Head of Talent Management at Prudential. At Prudential we've been thinking about a couple of questions as we went into this year. Chief among them for me was: How do I scale coaching and provide more democratized access to it, when coaching was typically reserved for folks at the top? How could I provide that coaching at exactly the time people needed it? And how could I help power our talent marketplace with more powerful [insight on] what's your next best action in terms of your development? So getting more to that 70/20/10 model, as opposed to always pointing people toward a leadership program.

And, as Parker mentioned, I was at SIOP and we saw this product, and we had heard about it from some of you in other places as well. That was just a short six months ago, and we launched eight weeks ago. We've had over 1,300 people use Nadia, and the majority of them come back a second time. We've got an NPS score, fresh off the presses, of 91. So people are really engaging with it and enjoying it. But the power in this is that people are coming back and telling us that it's answering the question they need answered, when they need it. It's acting like their personal coach or their personal assistant, day to day. 

The use cases we've been going with have primarily been around, first off, the population. We launched globally: in all of the countries in which we operate, we've launched Nadia. We have launched from the bottom up, so people who wouldn't typically receive coaching have gotten first access to this tool. We've launched across all of our businesses and all of our functions, to both individual contributors and managers of people. And we've launched through our BRGs to make sure we have really diverse representation among the people getting access. 

There are a lot of use cases I'd love to talk about, but I'll wrap up with a few things you can ask me about. One: how this has helped us provide tailored development actions at scale, in a much more democratized manner. We're also using this to support the launch of a new leadership program called Leadership DNA and to provide personalized coaching around it. As I mentioned, we launched globally, and there have been some really great opportunities and challenges with that. And we're integrating this into our learning programs. 

I can't see them anymore because of the lights, but I'm also going to call out that we have a couple of our HR technology partners here with us today. And if you want to know how we got from deciding to do this to doing this, this quickly, our HR technology partners, Kate and Allison, would be great people to touch base with during the cocktail reception. So, thank you.

Parker Mitchell: Thanks, Matt. And I'll echo that thanks to Kate and Allison; I know it's never easy to bring in a new technology, and we really appreciate it. I'd like to now bring Brad up to the stage. Brad has been a stalwart partner. I think we met you at the NYU conference that Anna organized this past year. We explored some team tools together, and when we rolled out AI, you were one of the first to say, "Hey, I think this is a really neat initiative." So thank you for that belief, and welcome.

Brad Haime: Parker, it was a free meal. You gave me a free meal that day, and that's what got me hooked. My name is Brad Haime. I'm part of the team at Experian. And, like the other folks who chatted before me, we, about a year ago, started the journey with Valence, thinking about how we might use Nadia as an executive coach. 

But what I'm excited about, and what I'd be more than happy to chat about, is this: like many of you, we have a global leadership development model, right? With core development and with hi-po programs. And we had a gap, and the gap was at our mid-level leader, or leader-of-leaders, level. We have about 1,000 of these folks in our organization, and we really didn't offer much for them consistently across the organization. And we know leaders like to learn best by doing. We know that leaders interact, and they learn through conversation. And I thought, well, what if we can use Nadia to support this need? 

And what we came up with, in partnership with Valence, is an idea we're experimenting with right now with about 300 of our leaders, in all of our regions around the world. We have a bespoke 360 model, built on our own leadership characteristics; we call them our characteristics of great leaders. Our leaders will have an opportunity to complete their 360, and based on the results, we'll recommend: these are your top four development areas. And each development area has a module that Nadia sits in the middle of.

The leader will have a conversation with Nadia. What are your development opportunities? What are you already working on? How has your team responded to you? Let's talk about what you learned in your 360. And then Nadia will co-create an experiment with the leader that they'll do in the real world. And about three weeks to a month after that, Nadia will follow up with the leader and say, let's talk about this. How'd it go? Did you have time for it? Did it go well? Do you need a little more time? What did you learn? What might you try differently in the real world? And are you ready to try something else with me? 

So far, it's been going pretty well. We're just rolling out this new type of solution. I'm excited to learn more. And we're going to have a series of focus groups and surveys in order to see what can we tweak as we go before we roll out to the rest of our global population. That's about it. I'm happy to talk more about it. 

Parker Mitchell: Thanks, Brad.

The Future of Talent with Prudential and WPP

How is work best done in our organizations, and how is AI changing what roles are needed and the organizational structures of companies? Lucien Alziari (CHRO, Prudential) and Lindsay Pattison (Chief People Officer, WPP) explore how AI will change the way we think about work excellence and what is possible for the talent functions of tomorrow.

Lucien Alziari

Former CHRO, Prudential

Lindsay Pattison

Former Chief People Officer, WPP

Larry Emond

Senior Partner, Modern Executive Solutions

Key Points

Larry Emond: Okay. Hi, everybody. I'll introduce myself third, before we get into some topics. But, knowing that so many of you are wondering how you get into these seats at some point in your career, I thought we'd actually switch off of AI for a second. These two represent two very different archetypes of a large-company CHRO in terms of their backgrounds. So I wanted you both to introduce yourselves: how you got to where you are, a little bit of the journey. And Lindsay, I mean, I'm sorry, not Lindsay, Lucien, we'll start with you. 

Lucien Alziari: Good afternoon. So, I've been a chief HR officer for 20 years, which is kind of scary when you say it out loud. I grew up at PepsiCo. I came to the States 30-plus years ago, lived in Vienna and the Middle East, came back, and ended up as head of talent and a big HR business partner. I've been a chief HR officer now in three different companies: eight years with Avon in the New York area, five years with A.P. Moller-Maersk, the big industrial company in Copenhagen, Denmark, and now seven and a half years at Prudential in New York.

Lindsay Pattison: So I'm the opposite. I've been a CHRO, or chief people officer as we call it, at WPP for 10 months, nine and a half maybe. So, when I hear some of the words described today, I still kind of go, I don't entirely know exactly what you're talking about, but I'm doing a good job of pretending that I do. Lots of chat about skills, which is something everyone in HR seems to do. But I've been with WPP for 14 years in various roles: CEO of a media agency in the UK, then global media agency CEO, chief transformation officer of GroupM, our media business, and then chief client officer at WPP for six years, before taking on this role. So really what I bring is deep knowledge of WPP and shallow knowledge of HR, but the combination matters because WPP is a people-led business. You know, 60-70% of our outgoings, our costs, our assets, are people. So understanding people, and understanding how people can work across the businesses, is hopefully the lens that I bring as I learn more about the skill set of HR. 

Lucien Alziari: And for those that want to be CHRO, the first 17 years are the hardest.

Larry Emond: So, to give you the data on the archetypes: if you look at big-company CHROs all over the world, Lucien is in the roughly 30% who are career HR people. What's interesting about him, too, is that if you look at his original jobs within HR, among that 30% who become CHRO after a largely lifetime HR career, the most common pattern today is the talent generalist pattern that you both had; he did it a long time ago.

And that's kind of, today, very much the route to being a CHRO if you've been largely a lifer. Lindsay's in the 10%, actually a little more than 10%, of people who were never in HR until the day they became CHRO. That's actually more common the larger and more global you are, and there are some companies in the world that have systematically done it that way. WPP was not that; this was a unique situation where they decided to do that. But two very, very different archetypes. 

Real quick, myself: I was at Gallup for a long time, and three years ago I joined Modern to start building a different kind of talent advisory. But I accidentally fell into something about a decade ago: since that time, I've managed what I believe is the world's largest CHRO community.

And somehow I've done about 400 in-person meetings of CHROs around the world. I met Lucien here in New York, I think in November of 2018; you came to a meeting hosted by Diane of IBM, who you all met earlier. And I met Lindsay about a year ago, or less than that; she's hosted a meeting and is hosting another one in January in London.

And so I do these meetings, but here's what's relevant to this topic. The main way we've done these meetings over the years is, you get 10 or 12 CHROs who can make a day or a day and a half. Then, after you get them, you ask them what they want to talk about: What do you want to put on the agenda? So it's been, you know, 400 meetings of one big focus group on what's on their minds.

And if you look at what they want to talk about over the years, you think about all the potential topics, and you can imagine what those would be: DEI, the future of the HR function, the future of the CHRO, HR analytics, how do we develop leaders, succession, blah, blah, blah. The single most requested meeting topic by a long shot has been something in the area of HR technology and automation.

And that's because there's so much of it, right? What do we choose? A few years ago it was: Do I choose Workday or SAP? Are we going to have less technology? No, actually, it's going to proliferate. That's not very good, but it's happening. What are you doing? All those kinds of conversations. 

But I remember the meeting. If you go back more than a couple of years, in all those conversations, let's say 200 different meetings where that was on the agenda, AI didn't come up. It just didn't come up. It wasn't there yet. And so you didn't hear about it. And then I was in a meeting in Zurich in April of last year, big global manufacturing CHROs. Tina, your CHRO, Charise of Schneider, was there. 

[AI] wasn't on the agenda. We had a different agenda, and somebody said, "Have you guys started messing with this OpenAI chat?" And it took over the meeting. And ever since then, across about 40 meetings, it's always the number-one requested thing on the agenda. And I've learned to put it at the start of a half day, because it'll usually crowd out the second agenda item; we'll just keep talking about it.

So it's been kind of a fascinating thing, and I think it will continue to dominate. All right. We've heard a lot of detailed things today, so I thought we'd step back. You two have seen a lot of things in your careers, and this is a big one. Maybe we'll start with you, Lucien. If you think out a decade, and you think about how AI is going to impact the future of work and of the workforce, broadly speaking, what are you thinking?

Lucien Alziari: I'm mostly thinking that I'll be playing golf somewhere, looking at what those CHROs are doing. But I've thought a lot about this. And, if I can, in order to go forward 10 years, I'm going to go back five years. Five years ago, this is sort of pre-pandemic, a number of us in the CHRO community were sort of intrigued with this idea of the future of work.

We were looking at the deconstruction of work, the reconstruction of work. It was all a bit clunky because you were basically sort of doing it by hand or by spreadsheet. COVID, the pandemic came along. Terrible experience for humankind. There were, though, a couple of silver linings on a very large cloud.

One of them was that we finally separated work from the workplace, right? Up until then, those two notions were sort of inextricably linked. People couldn't think of them as two different things. Once you can separate work from the workplace, that opens up a lot more creativity in terms of potential thinking about how that gets done in the future.

We then had a couple of years where it was, frankly, a bit disappointing because the whole debate was about when does the work get done? So the terrible, you know, how many days a week do you come into the office discussions. And then where: so it's sort of virtual or in-office, how many days a week. But nobody was talking about the work.

Alright. I think the future for HR, the next great competence for HR, is in the work. And in my career, I'm lucky. I'm a talent guy in an era where talent has been kind of the core skill set of CHROs. That won't go away. But I'm really intrigued now about the ability to optimize the integration of talent, the work, purpose, technology, right?

Can I have one minute just to talk about, I was really struck by a case study on nursing. There's a world shortage of nurses. Somebody, it's actually RAND that I think published the paper, looked at the work of nurses. Why do people go into the nursing profession? Because they want to care for people. How much of their time do you think they spend doing work that they would associate as caring for people? Very little.

All right. So you study the work. So what have you just done? You've deconstructed a job into its component tasks. You've looked at the individual, you've kind of deconstructed them down to the skills that they have, and which are the ones that add most value. And then you've identified their purpose. And my guess is each hospital system that nurses work for has some kind of mission statement that talks about better outcomes for patients.

So you've got organizational purpose, individual purpose, you've understood the work. Now, with the technology, you can re-sort that so that, in that role of nurse, you can maximize what's the work that is on-purpose for the individual, on-purpose for the organization, really plays to what they're best at.

And then everything else you have a choice. You can stop doing it. You can give it to somebody else who would see that as work that they really, really want to do. Or you can get it done through technology. And if you look at what we do in HR and organizations generally, we basically figure out what's the work that we need to do to take strategy and deliver outcomes for customers, right?

Nobody has a chief work officer. 

Larry Emond: It's a fascinating thought, a chief work officer. Maybe a human gets this done. Maybe AI gets this done. Maybe we don't do it at all. But you're stepping back and taking the humans-only assumption out of the equation, which is really interesting. Any thoughts you have on this long-term impact?

Lindsay Pattison: Well, I've been listening all day, and I've been told by many people that trying to think about the future of work, certainly 10 years out, is completely useless. So I'm going to listen to the Nobel winner who told us that. But just to build on Lucien's point, I think what was interesting, I heard a stat today. Actually, my CFO and I were texting about budget meetings next week and AI, future-of-the-workforce planning, some work that we've done with Josh, actually, in the audience. So, an interesting conversation.

She sits on another board, and they just had some research back from using Copilot. It's not our company, but it's a company similar to ours. And they analyzed the time spent by people, and 14 percent of the time was spent with a client. Great, because it's a similar industry to ours, so it should be client focused.

70 percent was spent on internal meetings. So you do the math, and I'm like, when are they actually doing the work, to your point? What work do they think it is they're doing? So I think actually AI and tools that we have in AI will help us realize and really think hard about that. And I love the analogy of thinking, what's the company's purpose and your individual purpose, and then closing the gap with the work that is or is not being done in between. I think it's really interesting. 

Larry Emond: Let's take advantage of the fact that you've only been in the glory of HR for about 10 months. You were out in client-facing roles, transformation, you know, CEO of one of the agencies. Okay. So let's go outside of HR for a second. Because you guys are like 110,000 people, a total of, I think, a couple hundred different ad agency and PR firm brands, you're the biggest in the world, etc. What is AI, do you think, going to do for all of that? Creative content, etc. What's going to happen there?

Lindsay Pattison: We're six main agency brands, and we have 30 smaller ones, just to correct you, because we've been on a hard program of simplification.

But I was interested in, I don't know if Brent's still here, but I was scribbling notes because his five misperceptions of AI included number four, which he didn't go into, which was that generative AI is bad for content producers. So that would be really bad for WPP because we make stuff, like we make ads, and make content, we make PR. And he said that was a misperception. We believe it's a misperception. So we're thinking about how we use AI within our business, internally, but we're thinking mainly about how we use AI as a platform to deliver content, ideas, media to our customers. So we have a platform that thinks across strategy and planning, thinks about ideation, and thinks about content creation.

And the speed at which you can do stuff now is incredible. So I was thinking, I played around with a tool this morning. If you just think about every level of what you might do in a marketing funnel. So we have a tool called Headline Generator, which I hadn't used until this morning. But I thought, well, I'll have a go with that one.

So it asked me what product I wanted to look into. I said health insurance. It asked me for a target audience. I said young families. And it asked me where they were in the funnel, of awareness through to buying. And I said, okay, I'll say awareness.

What was interesting was that it then gave me categories of different headlines. The first was fear of the unknown. The second was affordability. Next was specific benefits. And then a whole section on quizzes. Because if you have quizzes in headlines, people tend to respond to them. This took less than a minute. 

I'll give you three examples of fear of the unknown. These are headlines. You can judge how good AI is at creating them in under a minute. “Tiny Humans, Big Worries: Breathe Easy with Prudential Insurance,” blah, blah, blah. What I loved is, afterwards it told me why that was good. It said it's showing empathy for that target audience. The next one was, “Unexpected ER Issues: Don't Let Them Break the Bank.” The example there said, this is a specific pain point. And then the last one, “Superhero Parents Need Super Coverage.” And it was trying to say it understood how parents wanted to be perceived. So, not that great, but pretty good to do that in under a minute. 

So we're using it in every stage of our process. And we think it will, it really will transform what we offer clients. So we're creating platforms to enable our creatives to do better work. But I think what's important is, the AI is not creating the content. There is somebody that is using AI to create the content. And those are two quite different things.

Larry Emond: You made a comment to me when we were together last night, how you might over time kind of be like a SaaS platform that has all this functionality, and it allows your clients to get a lot done on their own. And then you come in when they need to figure some things out. 

Lindsay Pattison: All our clients think they're brilliant copywriters anyway, so we'll let them use the platform a bit. But then the onus is on us: what is that extra level of creativity? What do we do with the time saved by the automation of tasks like those? How are we adding a special sauce? Why would people really pay for our services? So actually there's a lot of work going on. And I won't repeat all the words said today on upskilling and reskilling and the higher-level tasks that we can now free people up to do to really add value.

Larry Emond: Bill mentioned earlier this morning, from Vanguard, the creative destruction concept and those two books, The Innovator’s Dilemma and The Innovator’s Solution. There's that point that, you know, you ought to be in the business of creatively destructing yourself, because if you don't, someone else will.

And probably a big piece of this is how can AI help us do that and kind of help us rethink our business and repurpose it. Lucien, I'll go back to you. Okay, so we've been talking a lot about the possibilities, and we just talked a little bit about where could this go. But what do you worry about? Like, how could we misuse AI in a way that would not be helpful for work and workforce and the function of HR? 

Lucien Alziari: Yeah. I’d generalize a little bit, if I can, beyond just the topic of AI, because I think that there's a theme that I would like to convey. One is, it's interesting the way Larry introduced sort of two archetypes. One, I was accused of coming from within HR, and Lindsay had real jobs along the way and then is kind of on holiday. But I don't think of myself as an HR person. And I don't think my CEO thinks of me as an HR person. And so for those of you in the audience who do aspire to be CHROs, be a business person. And be steeped and curious about what makes your company win. And you happen to bring some expertise in talent and capabilities and culture, whatever, but at the end of the day, you wake up every day worrying about what makes my business win. So, a mistake would be, don't do that, right? That's one.

The second, that I think AI is the next version of, is: HR every couple of years makes an important input an outcome in itself, and it falls off the tracks. Is employee experience important? Sure it is. Is it the outcome of HR? Over my dead body. Why does it matter? Because it produces great performance. If we produce great performance, our companies win. You can take loads of themes over the years where all of these new discoveries about important new insights, they're really important inputs, but don't lose sight of the fact our job is to help our companies win. Our job is not to deploy AI in our companies. We have technology partners. Our job is to figure out what's going to make our company win. What are the problems that we need to solve? And now here is a tremendous new asset and resource that we can use to help us unlock those problems. 

It took me a year to figure that out. Because I literally did spend a year in meetings like this with peers, and we were all talking about how are we going to deploy AI? And I woke up one day and I thought, it's the wrong question. So that's what I wouldn't do. 

Larry Emond: Lindsay, what would you worry about other than that? Where could we go wrong with all this? 

Lindsay Pattison: Well, I think it was mentioned earlier on one of the panels: because LLMs are based on all the knowledge that's gone before, there's inherent bias built into the system. So even when we're thinking about using AI within our own organization, we have to be careful and mindful of that. When we're producing content that goes out into the world, we have to be very careful, because it's generally very biased against women or underrepresented groups. So understanding that, I think, is really important.

I think AI, again, when I'm thinking about the content and what we put out: I think it was a gentleman from Delta who was talking about the challenges around data. I mean, there's a whole ton of information out there now that can be turned into deepfakes. So our CEO was deepfaked earlier in the year. It was in loads of newspapers around the world because it was a really, really impressive scam that voice-cloned him. It was emailing employees using his face, using his voice, asking for details, asking for money. Very, very sophisticated.

So I think when you're in the business of advertising and putting content back out there in terms of, again, to Lucien's point, the business that we're doing is we're trying to create brilliant marketing for our clients. Anything we do is in service of that. And we think about the use of AI. You know, advertising at its basic, has to be legal, decent, honest, truthful. And those are not four words that you naturally ascribe to AI, sadly, at the moment. So actually the ethics of AI and how you use AI, how you use it internally, because there are concerns about how you bundle that data with other people's data, and how you're going to use it externally, I think are really important conversations.

Larry Emond: Maybe just as a final thought, you commented a little bit on it before, Lucien, but, expand just a little more on how does all this change the future of the HR function itself? A little bit more on that. How might it look very different a decade from today than today? 

Lucien Alziari: Yeah, I personally believe in the fundamental role. If you believe my thesis, and obviously that's for you to debate with yourself, my thesis is: my job is talent and capabilities to win. I don't think that changes over the next few years. The context in which it gets delivered changes dramatically. The resources with which we can confront those issues going forward, I mean, they were a twinkle in our eye 15 years ago. Now you can do it. And you can do it in seconds.

So that will not replace the fundamental curiosity about how does your business win, who do you compete against, all of those kinds of things. But when you've got those twinkles in your eye now about, well, what about this? Now you've got the ability to really make very, very fast progress against that. But I don't think it changes the fundamental role of a CHRO, if you believe that sort of fundamental premise, and I do. 

Lindsay Pattison: I mean, I agree. I think a CHRO is a strategic advisor. I always talk about the CPO, CFO, CEO being a triumvirate of how you make decisions. You can't leave the decisions to the CFO. You need to have the people lens applied to everything that you do. Because, certainly for us, and for most people, it's a people-based business. 

I think the other thing that will be different, or the thing that we need to think about as the conscience within our business, to some extent, is the balance of humans and AI, and not letting one rush to overtake the other. Because there is still fear. There was a panel earlier about how we get ready. There are still people who are fearful of AI and fearful for their jobs. So our role is to ensure that people feel enabled to embrace the future, that they are AI optimists.

Someone talked earlier, I think Jennifer, about pilots, not passengers. So feeling you're in control of your own destiny. But managing that balance and having a culture that's optimistic about that balance but doesn't leave people behind, I think, is really important as you move forward. And I think the human skills as a leader in general become much more important. So compassion, courage, curiosity have been talked about a lot today. I think those are really, really important. 

And again, I loved the analogy when someone talked about rereading Thinking, Fast and Slow. Because sometimes our job is to slow things down, and think about things, and be that, as I've said, voice of conscience, which I wouldn't naturally say I am. But because technology moves so fast, we can rush towards it like the next gold rush. And actually being thoughtful about how we apply it in a balanced way is really important. I think it's incumbent probably on the people in this room.

Lucien Alziari: If I could, one last sort of analog. So I grew up as a talent person, and it's been a talent era. I've been very lucky with that. But the key question in talent is: talent for what? Talent's not a generic. It's competitively defined. It's defined by your strategy.

Have the same question about technology: technology for what? The thing that brings together, for me, the talent and the technology is: What's the work that creates competitive advantage for your company? That actually is kind of the unifying measure now, which at the moment, nobody has that lens. 

Larry Emond: I'm going to keep us on time. Something occurred to me today when I was listening to everything. I've done a lot of advisory and coaching in my life. These days, it's mainly a combination of advisory and coaching for new and first-time CHROs. But I've been around that field my whole career. And I was thinking, for all of us, myself included: how should we use this, just in general?

And one thought would be something you referenced. How can we all leverage AI to help us get a bunch of stuff done faster, better, more creatively, whatever, that allows us more time than maybe we've had in forever or ever, to just think, to be reflective, to be unhooked. None of us do that anywhere near enough.

And it could be that one of the great gifts of AI is to allow us to get more of that time in our life. I think we all know that'll make us a lot better at what we do, both professionally and personally. So maybe that's something to jump on. 

Lindsay Pattison: I agree. I mean, we were talking about capacity unlock and time, and I was saying, “Oh, what could we do with it?” And someone on the team said, “Well, maybe we could let people have lunch breaks.” I was like, “Oh yeah, good point.” 

Larry Emond: Like really, right? Well, thank you two. It's been a pleasure, my time with you guys. And thanks for showing up today and doing this.

The Future of Talent with Prudential and WPP

How is work best done in our organizations, and how is AI changing what roles are needed and the organizational structures of companies? Lucien Alziari (CHRO, Prudential) and Lindsay Pattison (Chief People Officer, WPP) explore how AI will change the way we think about work excellence and what is possible for the talent functions of tomorrow.

Lucien Alziari

Former CHRO, Prudential

Lindsay Pattison

Former Chief People Officer, WPP

Larry Emond

Senior Partner, Modern Executive Solutions

AI Unpacked with Nobel Laureate Geoffrey Hinton

Even the “godfather of AI” Geoffrey Hinton has been surprised by the speed and scale at which AI has developed. In this keynote from Valence's AI & the Workforce Summit, he explains what is so powerful about the technology and how leaders can unlock its potential and prevent its pitfalls.

Geoffrey Hinton

Nobel Laureate, "Godfather of AI"

Key Points

Parker Mitchell: Geoff, welcome. We're so excited to have you here today. We are gathered with CHROs and heads of talent of some of the largest companies in the world. And what we're trying to do is make sense of AI. We're really wondering what it's going to be like in the future. But to understand that, I'd like to go back to the past.

If we look back to, let's say, around 2010, almost 15 years ago: think of Geoffrey Hinton in 2010 and the predictions you made. Where were you too optimistic, and where too pessimistic, about the speed of progress? How has the field progressed since then?

Geoffrey Hinton: So ask me about 2016 later. So I think if you had asked people, even fairly enthusiastic people who believed in neural nets in 2010, where we would be now, they wouldn't have believed we'd have something like GPT4. They would have said that in the next 14 years, you're not going to develop something that's an expert at everything. Not a very good expert, but an expert at everything. You're not going to be able to have a system where you can just ask any question you like, some obscure question about British tax law, or some weird question about how you solve equations, and it's going to be able to give you a pretty good answer, an answer that's better than 99% of the population could give you. That's extraordinary, and we wouldn't have predicted that.

Parker Mitchell: And so progress is happening faster than you anticipated.

Geoffrey Hinton: Yes. 

Parker Mitchell: Can you share more, what's it like to experience that as one of the leading researchers in the space and watching it accelerate? 

Geoffrey Hinton: It's amazing, because back in the ’80s, when Rumelhart reinvented back propagation, he rediscovered it. And he and I worked together to use it for things. And we thought, to begin with, we thought, this is going to solve everything. We've got something that can just learn. And there didn't seem to be any limits to it. And then it was very disappointing. And we didn't understand why it didn't work better. It was partly architectural things. And for about 30 years we used an input-output function that looked like this, when we should have used one that looked like this. Um, just crazy. But it was mainly scale. And we just didn't understand that this whole idea would only really come into its own when you had a lot of connections, and a lot of training data, and a huge amount of compute. So we couldn't have done it back then. And if we'd said back then, “Yeah, but if we made one a million times bigger and had a million times more data, it would really work.” That would have just sounded like a pathetic excuse. But it turned out that was the truth. 

Parker Mitchell: That's fascinating. So one of the things that you and I talked about earlier is the underselling of what large language models do if we use the term “next word prediction.” The experience that we have is that they could be reasoning; they could have a degree of intelligence. Can you share more about how that comes about? 

Geoffrey Hinton: So there's many people who say these things are just using statistical tricks. They don't really understand what they're saying. They're just using correlations. But if you ask those people, well, what's your model of how people understand? If they're symbolic AI people, their model is we have symbolic expressions and we manipulate them with symbolic rules. And that never worked that well. It didn't work nearly as well as the large language models. If you ask cognitive scientists, they'll come up with a variety of explanations, but my initial tiny language model wasn't designed to do NLP, natural language processing. It was designed to show how people could learn the meanings of words. So it's a model of people. A very simplistic model. But the best model we have of how people understand sentences is these large language models. It's not like we have a different model of how people work, and these work differently. The only good model of how people work that we have is like this. So I think they really do understand, and they understand in the same way as we do.

Parker Mitchell: And these large language models might have that kind of embedded creativity already in them? 

Geoffrey Hinton: Yes, so many people say, you know, these language models will do routine things, but people are creative. Well, if you take a standard test of creativity, I think the large language models now do better than 90 percent of people. So the idea they're not creative is crazy. This is very relevant to the debate among artists and Silicon Valley about whether these AI models are just stealing the creations of artists. Obviously to produce a work in a genre, you have to listen to a lot of music in that genre. But it's the same with a person. Whenever a person produces new music in a genre, they are stealing the works of previous people in just the same way the AI system is. So the AI system is not stealing them any more than another musician does. 

Parker Mitchell: I mean, it's fascinating. If you read analyses of the work of Picasso, he is clearly borrowing from artistic traditions, I think from Benin masks and many other areas, and he's merging them into a new approach. But he is building off of things that he's seen. I think AI, if it's seen everything, there's no reason why it can't do the same thing.

Geoffrey Hinton: Yes. So AI can be creative. And of course, to be creative in a particular way, you look at works of art that are done in that way. But it's hard to say that it's stealing, because what it's not doing is pastiching together bits of other things. It's understanding the underlying structure the same way a person does and then generating new stuff with the same kind of underlying structure. So it's just very like a person creating something. 

Parker Mitchell: Now you also studied the psychology of the human brain in your undergrad. How does that compare to what we have in our brains? 

Geoffrey Hinton: So we have about a hundred million synapses. And even though many of them are used for other things, like breathing, the cortex, the neocortex, has most of those. And so we've got many more adaptable parameters than these big language models. Which makes it very strange that GPT4 knows thousands of times more than we do. 

Parker Mitchell: And you said a hundred million. I think you meant a hundred trillion. 

Geoffrey Hinton: Did I say a hundred million? 

Parker Mitchell: I think you said a hundred million. 

Geoffrey Hinton: I could be a politician. I can't tell the difference between millions and trillions. A hundred trillion, yes. 

Parker Mitchell: A hundred trillion synapses. And so it's fascinating. So we have large language models that are two orders of magnitude smaller than the connections in the human brain and yet know an enormous amount of information. 

Geoffrey Hinton: Yes, they're a not very good expert at everything, so they know thousands of times more than any one person. And one of the reasons it can do that is you can have many different copies of exactly the same neural net running on different hardware. So you can get one copy to look at this bit of the internet, another copy to look at that bit of the internet. They can both figure out how they'd like to change their own weights. And if you just average those changes, then both copies have learned from the experience that each of them had. So now you take a thousand of those. Imagine if we could take a thousand people. They could all go off and do a different course. And at the end, everyone knew what everyone had learned, had experienced.
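[Editor's note: the weight-averaging Hinton describes is, in essence, data-parallel learning: identical copies of one model each compute an update from their own slice of data, and the averaged update is applied to every copy. A minimal sketch, with invented toy numbers rather than any real training run:]

```python
import numpy as np

def average_updates(updates):
    """Average the weight changes proposed by each replica."""
    return np.mean(np.stack(updates), axis=0)

# One shared model, copied onto two machines.
shared_weights = np.zeros(3)

# Each replica reads a different part of the data and proposes its own change.
update_a = np.array([0.2, -0.1, 0.0])   # learned from replica A's shard
update_b = np.array([0.0,  0.3, -0.2])  # learned from replica B's shard

# Applying the average means every copy now reflects both experiences,
# like the thousand students who all end up knowing each other's courses.
shared_weights += average_updates([update_a, update_b])
# shared_weights is now [0.1, 0.1, -0.1]
```

The same idea, scaled up to thousands of replicas and billions of weights, is what lets one model absorb far more of the internet than any single copy could read.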

Parker Mitchell: We've talked a little bit about memory and how memory is stored in the human brain. We've talked about sort of fast weights and how those can adjust. Is there anything missing in an LLM architecture that humans still do exceptionally better, that the human brain does better? 

Geoffrey Hinton: I think we still learn better from limited data. And we don't quite know how we do that. We know the human brain has changes in connection strengths at many different timescales. So the first time I met Terry Sejnowski in 1979, that was basically the first thing we talked about: how these neural net models have just two timescales. They have the timescale of the activities of the neurons changing. And so each time you put in a different sentence, neural activities will change. And then they have the values of the weights, the connection strengths, and they change very slowly. That's where all the knowledge is. And they just have those two timescales.

Now, you could have many more timescales. Let's just suppose you have one more timescale: you have the weights that change slowly, but you have an overlay of weights that change much faster but decay quickly. That gives you all sorts of extra nice properties. So, for example, if I say an unexpected word to you like “cucumber,” and, a couple of minutes later, I put headphones on you, and I put lots of noise in the headphones, and I play words so you can only just hear them, most of them you can't quite make out what they are. You'll be considerably better at making out the word “cucumber.” Because you heard it two minutes ago.

So the question is, where is that stored? And it's not stored in neural activities. You can't afford to do that, you'll use up too many neurons. And it's not stored in the long-term weights, because in a few days’ time it'll be gone. It's stored in short-term changes to the synapse strengths. And we don't have that in the models at present. 
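[Editor's note: the two-timescale idea can be sketched in a few lines. This is only an illustration of the overlay Hinton describes; the numbers and names are invented, not taken from any real model:]

```python
def effective_weight(slow, fast):
    """The connection strength the network actually uses right now."""
    return slow + fast

slow = 1.0    # long-term weight: where the stable knowledge lives
fast = 0.0    # fast overlay: boosts quickly, decays quickly
decay = 0.5   # fraction of the overlay surviving each time step

fast += 0.8   # hearing "cucumber" primes the relevant connections
just_after = effective_weight(slow, fast)     # 1.8: "cucumber" easy to recognize

for _ in range(2):                            # a couple of minutes pass
    fast *= decay
minutes_later = effective_weight(slow, fast)  # 1.2: priming fading, not gone
```

The priming effect lives entirely in `fast`: it is gone in a few days, yet it never touched the slow weights and never tied up any neural activity.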

Parker Mitchell: My undergraduate research was actually looking at something very similar, except it was preperceptual. So you would flash the word “cucumber” very quickly. You didn't notice that you'd seen it. It was subliminal. And then you could pick it up more likely if you either saw it, you know, in a collection of words or listened to it. And so there was a question of how did you understand, how did you process the word cucumber without realizing it in such a way that your brain stored it and was able to recognize it more quickly? 

Geoffrey Hinton: I think there's also a phenomenon where you flash the word “cucumber,” and you'll be better at hearing, at recognizing the word “lettuce.”

Parker Mitchell: Yes, that was actually, in particular, it was the association of sort of similar words. 

Geoffrey Hinton: Yes, so it's not just that you got the word, you got the semantics of the word, without any consciousness. 

Parker Mitchell: Can you share some examples of how introducing new information to an LLM that it might not have had in its training data, how it can reason over that and come up with an answer that's similar to how a human might reason by analogy?

Geoffrey Hinton: Well, I can give a nice example of it doing analogies that most people can't do. 

Parker Mitchell: I would love to hear that. 

Geoffrey Hinton: So I asked GPT4 some time ago, when it wasn't hooked up to the web, why is a compost heap like an atom bomb? 

Parker Mitchell: And I would not be able to answer that question. 

Geoffrey Hinton: Excellent. So it said the time scales are very different and the energy scales are very different. And then it went on about chain reactions. It went on about how, in a compost heap, the hotter it gets, the faster it generates heat. In an atom bomb, the more neutrons it's producing, the faster it generates neutrons. And so the underlying physics similarity GPT4 had seen. Now, it probably didn't see it when I asked the question. It had probably seen it during training. 

So we see a lot of analogies, and we actually store things in the weights. And it's much easier to store things in weights if they're kind of analogous structures, because you can share the weights. And these large language models are just the same. And so in order to store huge amounts of information, they have to see analogies between different facts that they're learning. And they will have seen many analogies that no person's ever seen.

Parker Mitchell: So this is fascinating. So in order to compress that amount of information into that few parameters, they have to implicitly understand and codify analogies in their weighting.

Geoffrey Hinton: And many of those analogies are analogies at a deep level, like between a compost heap and an atom bomb. 

Parker Mitchell: And they might be discovering, they might have embedded in the weights right now, analogies that we as humans have not actually thought about ourselves. 

Geoffrey Hinton: Yes, because GPT-4 is a not-very-good expert at physics, but it's also a not-very-good expert at ancient Greek literature. And it may well be that there's something in ancient Greek literature that's rather like some weird thing in quantum mechanics, but no one person has ever seen those two things. 

Parker Mitchell: And so, in 2012, you started understanding what was possible. You and Ilya [Sutskever] won ImageNet. Alex, I think, was… 

Geoffrey Hinton: Alex Krizhevsky. It's called AlexNet.

Parker Mitchell: AlexNet, oh, that's right. 

Geoffrey Hinton: He was an amazing coder, and he managed to code convolutional nets on NVIDIA GPUs much more efficiently than anybody else. 

Parker Mitchell: And so at that point, you had started to see that scale matters. Looking back over the past 10 years, why is 2016 an important moment for you? 

Geoffrey Hinton: Oh, the reason I mentioned 2016 is because I made a prediction in 2016 that was wrong in the opposite direction.

I predicted that in five years’ time we wouldn't need radiologists anymore. This upset some radiologists. And it turned out I was wrong, off by about a factor of two, possibly even a factor of three. And I meant for reading scans; I actually think I said at the time five years, maybe ten. But when it comes to reading scans, maybe ten years from now, I'm very confident that the way you'll read almost all medical scans is an AI will read them and a doctor will check it. The AI is just going to get much better than doctors. AI can see much more in scans than doctors can. 

So my wife had cancer, and she'd get CAT scans every so often, and they'd say the tumor's two centimetres. And then a month later they'd say the tumor's three centimetres. Well, this thing's shaped like an octopus. Two centimetres is not a very good measure of the size of an octopus, right? You'd like to know much more about what's going on. And with AI we can do that. Doctors can't do that, because they don't know what the outcomes are. But I think with AI we're going to be able to see things about cancers that'll tell you whether they're going to metastasize soon and stuff like that. We know there's lots more information in the images that isn't being used. 

Parker Mitchell: Well, it's as you said earlier, if you've got, you know, 500 doctors that can each spend a lifetime looking at 500 images and seeing the progression of them and then compress their brains, that's vastly more information than one single doctor.

Geoffrey Hinton: Yes. So no radiologist can train on enough data to compete with these things once these things are really good at vision. 

But, for example, in tuition, we're going to get very good AI tutors. And there's a lot of research that shows, take a school kid and put them in a classroom, they'll learn at a certain rate. Give them a private tutor, they'll learn twice as fast. And so we know that AI is approaching being good enough to understand what people are misunderstanding. And as soon as you get private tuition by an entity that knows what you don't understand, it's going to be a much more efficient way of learning than just sitting in a classroom and listening to a broadcast. So I think in health care and education, there's going to be huge advantages. 

Parker Mitchell: I want to spend a moment on that education example, because we've been inspired by that idea: a tutor for everyone learning in traditional education, a leadership coach for everyone at work. And so for us, this idea of personalization matters. Do you think AI could understand you in your context, almost like a librarian for the world's information, but just for you? 

Geoffrey Hinton: Absolutely. So a few weeks ago, I won a Nobel Prize. And I've never had a personal assistant before. And the university gave me a personal assistant, and she now understands quite a lot about me. And it's wonderful. And everybody could have that if we can do it with AI. 

Parker Mitchell: That's fascinating. And you had to bring her up to speed, give her context. And if she had infinite access to your information, she'd be even more helpful.

Geoffrey Hinton: Yeah. Yeah. But I think that's sort of the good scenario. We all get these really intelligent personal assistants that know everything about us, and help. 

Elaina Yallen: When we think about building an AI product, something that gets tossed around a lot is human-machine or human-model empathy and helping users understand what maybe they should expect from models, so they know how to channel it properly. How do you think about that for software? 

Geoffrey Hinton: Well, there's one experiment where you have AI doctors and real doctors, and they interact with patients, and then you ask the patients, “How would you rate empathy?” The AI ones do much better. The AI ones actually listen to the patients. So, already they can exhibit empathy. Maybe we think of empathy as: you think, “How would that be for me?” And then you think, “Oh my god, that would be awful for me. I'm so sorry.” And maybe they don't do that. But behaviorally, they nevertheless seem to exhibit empathy pretty well. And if you had an AI tutor, you'd like it to have empathy about the fact that the pupil’s misunderstood something. And I'm sure they're going to be able to do that. 

Parker Mitchell: And I think you would say, correct me if I'm wrong, that if it exhibits empathy, it might be doing it in the same way that we exhibit empathy. And therefore it's not just, like, performative empathy; it's going to come across as genuine empathy. Is that right? 

Geoffrey Hinton: It might be genuine empathy. I think for us to call it genuine empathy, the AIs would have to be similar enough to us that they could imagine what it would be like for them. We tend to think of empathy as the ability to imagine what it would be like for you, and then understand how it is for the other person. And I think if you're not doing that, you're just saying, “Oh, that's terrible, I'm so sorry about that,” without thinking of how it would be for you, right? That seems like less genuine empathy, and AI can certainly do that. 

Parker Mitchell: I mean, I definitely agree with that, but I think part of the beauty of literature is that it puts you in other people's positions, and you can experience it through that, and you can say, “Well, I've never been in that position, but I've now lived that experience.” And if you have the world's literature compressed into that, you know, model, they might be able to understand what a range of humans, even more than I would, would be going through and exhibit empathy to that. 

Geoffrey Hinton: They might. Yes. 

Parker Mitchell: That's really interesting. So I want to zoom out to the societal side of things. So we've seen an enormous amount of hype, an enormous amount of coverage of LLMs in the past couple of years. One of the things you and I talked about is the analogy of sort of how difficult it is to see the future when things are growing exponentially. Can you share a little bit more about how you're experiencing that?

Geoffrey Hinton: Yeah, we're not used to exponential growth. So, a good analogy is, if you're driving at night on a winding road that you don't know, you often drive on the taillights of the car in front of you. And as the car gets further away from you, the taillights get dimmer. And they get dimmer quadratically. So, if you triple the distance, they get dimmer by a factor of nine. That's why you try to stay close. 

With fog, it's not like that at all. It's totally different. With fog, if you can see clearly at, like, 100 yards, you just assume you'll be able to see something at 200 yards. But actually, you can see clearly at 100 yards and then nothing at 200 yards, because fog is exponential. Per unit distance, it removes a certain fraction of the light. It's very different from linear or quadratic things that we're used to. People don't really understand the word “exponential” because it's misused so much. People misuse the word “exponential” to mean a lot. In fact, I think the rate at which they're misusing the word “exponential” is growing quadratically.

Parker Mitchell: It reminds me of a riddle that I used to love as a child, which was, if you have a pond that starts with one lily in it, and it doubles every day until the 30th day, when the lilies cover the pond and obliterate sunlight until the pond dies, which day is the pond half filled with lilies? And the answer is the 29th day. But the intuition people have is, oh, maybe it's around the 15th. And so it's hard to sometimes understand, because we don't live in that experience, what exponential growth could be like. 

As you think about the future of work, is there anything you'd add? We talked a little bit about the workforce. A world where everyone has an assistant is obviously wonderful. A world where jobs are replaced is obviously going to cause a lot of social stress. How should people who are leading large companies think about navigating the next two to three years? 

Geoffrey Hinton: There's obviously the question of joblessness. We just don't know whether AI is going to get rid of a lot of jobs. I suspect it is. Yann LeCun, my friend, thinks it isn't. And in the past, things like automatic teller machines didn't cause massive unemployment among tellers. They just ended up doing more interesting, complicated things. And taking longer about it, so you have to queue for a long time. So, maybe it'll produce joblessness, maybe it won't.

I suspect there are some kinds of jobs where you could use a lot more of that. So if, for example, AI made doctors more efficient, we could all, especially old people, use a lot more doctors’ time. If you had a doctor who was 10 times as efficient, you'd just get 10 times as much healthcare. Great.

There's other things, though, that aren't like that. And what'll happen is one person with an AI assistant will be doing the jobs that 10 people used to do, and the other 9 people will be unemployed. And the problem with that is, you've got an increase in productivity. That should help people. But you get 9 people unemployed, and one rich person who gets a bit richer. And that's very bad for society. 

Obviously, we can't see very far into the future. If you take the fog analogy, I think the wall comes down at three to five years. We're fairly confident we've got some idea what's going to happen in the next few years. In 10 years' time, we have no idea what's going to happen. And you can see that by looking 10 years back: we had no idea this was going to happen. 

I think companies should navigate it by going in the direction of everybody having an intelligent AI assistant. So people feel they're going to get improved working conditions from this smart assistant. You're going to get increases in productivity. That would be great for everybody. 

Parker Mitchell: The next five years are going to be extraordinarily eventful, for lack of a better word. And you've played an enormous role helping us get here, getting through the AI winter, getting through those moments when it might not have felt like it was quite as clear as it is now. And I just wanted to say what an honor it's been to have this conversation. And thank you. 

Geoffrey Hinton: Well, thanks very much for inviting me. It's been fun. 

Parker Mitchell: Yeah, I really enjoyed it. Thank you. 

Elaina Yallen: Thank you so much. 

Parker Mitchell: You're welcome.

AI Unpacked with Nobel Laureate Geoffrey Hinton

Even the “godfather of AI” Geoffrey Hinton has been surprised by the speed and scale at which AI has developed. In this keynote from Valence's AI & the Workforce Summit, he explains what is so powerful about the technology and how leaders can unlock its potential and prevent its pitfalls.

Geoffrey Hinton

Nobel Laureate, "Godfather of AI"

Making Managers Excellent

Prasad Setty led Google's original research, Project Aristotle and Project Oxygen, into what makes teams and leaders effective. In this fireside chat from Valence's AI & the Workforce Summit, he discusses the power of AI to provide personalized leadership development at scale.

Prasad Setty

Former VP of People Operations, Google

Key Points

Parker Mitchell: Prasad, we are very grateful that you flew in from California yesterday to join us. And I know that many of the folks here in this room, if they're not familiar with you personally, I'm sure they are familiar with your research. I was first exposed to it in the New York Times magazine's Future of Work issue that came out, I think, in 2016.

But you were a driver behind Project Aristotle, Project Oxygen. Can you tell us a little bit more about why you and Google invested so much in trying to understand managing, managers, leaders, and teamwork? 

Prasad Setty: Thank you, Parker. Great to be here. And yes, it's always fun talking about Oxygen and Aristotle, even to this day, after several years.

At Google, there was a notion in the early days that it was the place where convention went to die. And there was a strong belief, and this will be anathema to this particular audience out here, that managers did more harm than good. That they would become bureaucrats, that they would stand in your way, that they would slow down innovation.

And so in a company like Google, where innovation was our lifeblood, it didn't seem like the right thing to invest in them. In fact, early on in Google's history, they had all the software engineers report to the head of engineering. They removed all the middle management layers. And so Wayne Rosing, who was the head of engineering, inherited all of them. Wayne retired soon after that, so I don't think it was a great experiment. But it was a sentiment that really persisted. 

And so with Project Oxygen, we wanted to actually prove that people managers don't matter. And what we found was that they do. The teams of the best managers performed better and had lower attrition.

And so that was a revelation, particularly for our software engineering community, because it was based on proven data from Google. And as we carried out the research, what we were also able to showcase is that you don't need managers to come in with inherent qualities of greatness; anyone can be a better manager than they are if they follow certain behaviors and practices. And so that is really what we were able to codify, and that led to a ton of acceptance and engagement across the organization to invest in better managers. 

Parker Mitchell: I mean, it's wonderful to fight an uphill battle and then to prove at the end that you're right. And it's great that the data was unequivocal in supporting that.

So the next version after that was teams. So, what is the role of teams? And I know there was some probably similar skepticism on the teams side that your research reversed. Do you want to share a little bit more about that?

Prasad Setty: Sure, as a next step, I don't think it was as much skepticism about teams, but an acknowledgement that we all work in teams and that is how we get work done.

But there were several beliefs and heuristics about what makes for successful teams. And so with Project Aristotle, we broadly looked at 400 different variables across our engineering and sales teams to say, what is it that drives team success? And broadly, these input variables were in the bucket of team composition, who's on the team for instance, versus team dynamics, how does the team interact? 

And what we found at the end of 18 months was, broadly, that team dynamics trumped team composition. And within team dynamics in particular, and certainly this isn't stuff that we came up with, the notion of psychological safety, which Amy Edmondson and others have studied in great depth, came to the forefront. So again, the sort of aha for us was that, at least in the Google context, any team could become successful if it followed some of these principles of team dynamics. And so then we started educating people on what it takes to improve psychological safety, etc. 

Parker Mitchell: So this is in the, sort of, 2015 to 2020 time period. We are now almost in 2025, where AI is playing a huge role in all parts of this, whether it's leaders or team dynamics. How would you add to these studies if the AI part of the equation were added to them? 

Prasad Setty: And I think we have touched upon this throughout today. In your opening, you talked about context, you talked about personalization. And certainly I'm a big believer in analytics. I founded Google's People Analytics team. I've been in this space for a long time. But with our experiences, we start recognizing some of the things that are limiting as well. And one of the big issues that I have with how we think through analytics is that we flatten people, right? We collect 10, 20, 30 data points about them, or in some cases maybe many more, whether it's state variables that you know about them from HR systems, or from their resumes, from the work they've done, or from survey responses. But we're still flattening a whole lot of rich human experience.

And I think with AI, you now actually are able to sort of uncover that richness. And I think that richness comes about in asking people to interact with you in language and to understand more about the context that they operate in as well as the people in the teams that they interact with. So it is that dimension that I think is new because, when we think about performance, it's really about how people have the skills to engage in behaviors that are appropriate to a particular situation with the teams that they're working with, right? And so there's like multiple variables out here. And so I don't think we had the signals or the capability to understand that entire richness without AI, and now we are able to. 

Parker Mitchell: It sounds similar to what Anna was talking about, the idea that skills are a little bit too one-dimensional, and that the addition of context, the application of skills in context, is going to be important. And I'm hearing even more dimensions from you. It's not just the work context; it's the team context. It could be the moment in someone's career. And so, as you say, we're unflattening the individual and adding new dimensions. That sounds revolutionary for people analytics. How do you see that advancing or evolving in the next few years? 

Prasad Setty: One of the thoughts for me, we are from the people function, so our first instinct is to think about individual people, and we should absolutely do that. I absolutely agree with the humanness and the value of humanity that shouldn't be lost in the middle of this. But as it relates to organizational context, I think, before we think about workforce planning and skills, we need to think about work planning. I think knowledge work today, a lot of knowledge work is about skills being applied in certain dimensions but with very few degrees of freedom. And I think that's the kind of work that people don't like to do because it doesn't give them too many options, right? Like, you're just like pigeonholing people and all of their capacities into things that are perhaps easy for AI to do in the future. It wasn't possible with existing technology, but with AI that will be possible. 

And so I think knowledge work itself will evolve to skills being applied where you have many degrees of freedom and a lot more uncertainty about outcomes. And so then knowledge work really becomes about: can you make consistently good decisions when you have multiple degrees of freedom and lots of uncertainty? That is where humans will have to excel. And that, to me, is what will then result in the compound interest effect of good decision making. And that, I think, is where we, as humans, will need to use tools like Valence’s Nadia to improve our capabilities. 

Parker Mitchell: I mean, this is fascinating. I'm hearing this concept of the aperture widening: more degrees of freedom in the choices that people are going to have, and that that's empowering for them. But the horizon they can see is going to be a little bit shorter because there's so much complexity. And then on the flip side, we're also trying to measure performance. We've talked a lot on this stage today about how you calculate ROI, how you assess performance, and where the impact of AI will show up in that. Do you see any contradictions between those, or how will they be married? This expansion of choice and options, and the need to define and measure performance, you know, quite strictly. 

Prasad Setty: It is going to be evolutionary. I think when we talk about performance, I think about four Es: effectiveness, efficiency, experience, and equity.

Parker Mitchell: Effectiveness, efficiency…

Prasad Setty: …experience, and equity. 

Parker Mitchell: Experience and equity. 

Prasad Setty: And I think when you put in these kinds of systems, first you want to focus on, as many of the speakers today have talked about, make the experience really good, make sure there are no inequities in them. And so you get people starting to use them, and that becomes your first sort of activity measure of performance.

But then over time what you really want to do is see if people are becoming more effective at their jobs, right? And so that is where what a lot of folks were talking about, augmentation rather than subtraction, is going to be important. And then there's the efficiency measure. I'm sure all the CFOs are excited about it, but I think that becomes the least important of all of these. 

Parker Mitchell: That's interesting. So I think what I'm hearing is that there's certain phases as new ideas are introduced or new technologies are introduced. And the ROI, or the measure of effectiveness of a new intervention, of an experiment that we've talked about should change depending on where it is. Is it early on, and is it experience and uptake based? Or later in its evolution, when it's more efficiency and maybe harder numbers. Is that right, that there should be an evolution? 

Prasad Setty: I think so. Otherwise, the criticism that I typically have is that people very quickly jump to the efficiency side of it. And then it just becomes something that your organization resists, because people don't want to be seen as just cranking the wheel faster. They want to really be better at their jobs and drive, you know, much better decision making and so on. And so I really loved what Tim at Delta was talking about in terms of the kinds of signals they're capturing for their fourth category of work. Because one of the signals that is very hard to capture, but would be useful, is the quality of the decision making that is happening, right? Across your manager teams or your leadership teams. How do you capture that, and how do you see if it is consistently getting better because of the application of this kind of technology? That, to me, is going to be the real productivity improvement in the long run. 

Parker Mitchell: I want to jump to that in a moment, but I'm picturing, or I'm assuming, Google is a company of engineers, very numbers based. How was the point of view that you introduced, that you don't have to jump to efficiency, you don't have to jump to ROI right away, how was that received within the broader Google community? 

Prasad Setty: You know, Google was very expansive in its nature. And the thought for many years was that innovation trumped efficiency. We heard a lot about critical thinking and imagination in the previous session, and anything that you could showcase as helping people have the autonomy to think creatively, anything that furthers those elements, is certainly going to have a better chance of resulting in innovation. And so that was always seen, at least a long time back, as more important than efficiency. 

Parker Mitchell: So you've talked about just the nature of work changing, the nature of knowledge work changing. And when it changes, as it evolves, the return on judgment is going to be higher. Are you seeing any early glimpses of types of work where that change is already happening, where knowledge work is changing?

Prasad Setty: This is one that I would love to see, but I loved hearing some of the experiences that Lesley and Jennifer and others are talking about, right? I think they are truly leading this kind of thought. And then Anna spoke about the world of athletics. One of the things that I'd love to see in terms of whether these capabilities are improving, is in using AI for better simulations. In the learning community, I think it's sort of well established that, as adults, we learn by doing. And so that is why new job experiences are so much more valuable than perhaps classroom education. 

But what exactly do you get from experiencing something? You get to experience multiple contexts, and you get to repeat your behaviors in different interactions. And so if you could simulate that, and if you could accelerate that kind of development using AI, then I think you are furthering everyone's development. And so, if you look at the sports analogy, for instance, in Formula One, race car drivers sit in sims, sit in simulation engines before they get onto the racetrack. And that is how they practice.

And so what is that equivalent for a daily manager? And I think the equivalent of a game in any kind of sport, the equivalent in the corporate world is meetings. We all go through millions of meetings in our career. Are we getting better in each meeting throughout our careers? And if not, why not? And how can AI or other things help us be better? 

Parker Mitchell: I mean, it's interesting. I'm tying together a few threads here, but if AI helps us be a little bit more productive in our jobs, it can save some time. And if that time is freed up, it can give us more time to do some of this practice, to be able to refine these skills, and then be able to bring better judgment to bear. You know, rather than just be busy for 40 hours a week, we can be thoughtful for 10 hours a week and then be high value for 30 hours a week. And so I think there's a world in which the efficiency gains can free up time for some of these refinements in judgment. 

Prasad Setty: I think you summarized it beautifully. That is the arc that I think we all want to get to. And you're sort of seeing that even with the development of these large language models, right? I've worked in technology for a long time, and we'd always think about latency. How quickly is the technology responding to your query?

And we'd always want to reduce that. When you type something into Google search, you want the latency to be as small as possible. But now you suddenly have models like GPT-4o, where the thought is: let us spend as much time as needed on inference so that we can reason better. And so that is exactly the equivalent on the people side too. How do we get to slower, better judgment, and therefore better decisions in the long run? 

Parker Mitchell: You and I have had a chance to interview a number of CHROs and other talent leaders over the past six months. And I think one of the things they've always said is they just don't have time. Even those who are embedded in Silicon Valley, they don't have time to think through how the technology is changing. What's one piece of advice that you would leave for folks in the audience about the importance of doing something even in that imperfect fog that you've just mentioned? 

Prasad Setty: It is certainly a hard challenge, being in any kind of senior operating role, particularly in the HR side, and there are lots of things that are important. But as Bill said right from the beginning of this morning, this is certainly on top of every board conversation. And so I'm sure everyone is thinking through what the right activity is out here. 

I guess I would come back to a couple of fundamental principles that I strongly believe in, particularly for this audience. I think after all these years, and even with all of this technology and all the pressures that we see, I still believe that people managers are an incredibly important leverage point for any organization. The moment you have more than, let's say, 30 or 40 people managers in your organization, that community has got to be the place where you invest disproportionately more of your resources and attention. Because that is what is going to define the lived experience of your employees, and therefore where you'll get the best return. So that's got to be one of the most important use cases. 

And just to echo everyone else's views out here, there are two ways to think about AI applications right away. One is, can AI be capable of doing things that you don't want to do or your people don't want to do? So it's very much a subtraction-oriented application of AI. Or the second is that you think of it as an addition-based view of AI, which is to say, can I deploy AI in a way that is going to help my people be better at their jobs and grow and learn better? And I do think that Valence’s Nadia certainly falls in the latter category.

I don't think you should choose only additive technologies. I think you have to look at subtractive ones too, but I think the additive technologies are likely to land better with your organization because they don't feel like they're being displaced. And any moment that you're waiting before you deploy these kinds of additive technologies, you're robbing your people of learning opportunities. And that, I think, is a waste of their potential.

Parker Mitchell: I mean, the urge to move fast, I think, despite the uncertainty, we've heard that over and over again. So thank you for joining and sharing those thoughts with us. We really appreciate it. 

Prasad Setty: Thank you, Parker.

Upskilling an AI Workforce

AI is already changing the jobs we do and how we do them. It’s also one of the best tools we have to navigate the changes ahead. In this panel from Valence's AI & the Workforce Summit, HR leaders Rachel Kay (CHRO, Hearst), Chris Louie (Head of Talent, Thomson Reuters), and Tina Mylon (Chief Talent and Diversity Officer, Schneider Electric) share how they're thinking about upskilling their workforces for the jobs of tomorrow.

Rachel Kay

CHRO, Hearst

Chris Louie

Head of Talent, Thomson Reuters

Tina Mylon

Chief Talent and Diversity Officer, Schneider Electric

Key Points

Das: For our next panel, I'd like to introduce Tina Mylon, Rachel Kay, and Chris Louie. As you make your way up, I'll do some intros just so that we can keep things moving. Tina is the Chief Talent and Diversity Officer at Schneider Electric, where I believe you currently have an initiative called Upskilling at Scale. Rachel Kay is the Chief People Officer at Hearst, leading recruiting, diversity and inclusion, compensation, and talent planning, and I'm sure quite a few other things, across all of Hearst's businesses. And Chris is the Head of Talent Development at Thomson Reuters and also teaches, I believe, algorithmic responsibility for the Human Capital Management program at NYU.

Chris: That’s right. I work for Anna. 

Das: Wonderful. One of the things I love about a conference or a day like this, a summit like this, is that everybody comes from such different businesses and industries, but is talking about a topic that's shared. AI is going to impact different companies and different businesses in different ways, but it's going to impact all of us. So I'd love to just start by hearing, what is the impact in your industry and specific business, and then, if you're able to, what are some of the insights into the new skills and roles that you're finding you need? Round robin, maybe, Chris, if you want to start. 

Chris: Yeah, happy to start. So there are lots of analyses out there looking at different industries, looking at workflows and roles, and trying to estimate how many of those are either able to be augmented or automated with AI. If you take a look at legal, and if you take a look at tax and accounting, those are usually the ones that are all the way in the–whatever your two-by-two is–upper right, and those happen to be our biggest businesses. Reuters is also part of that as a news business. And so, a very profound potential impact of AI, both on our product set and then, because of that, for our own employees. It's very easy for them to get their heads around the fact that AI is and can be existential, which honestly has made our job a bit easier. Thinking back to that change management panel a little bit ago: it helps employees understand the importance of getting proficient in AI and continuing to develop skills, as some skills that you may have rested on for the last several years–or even decades–may be either less relevant or less protected. So some pretty profound impacts, both on our external business and then that flows through to our internal.

Das: Rachel?  

Rachel: Let me just do two examples. So Hearst actually is a portfolio of six very different industries. But two examples of things we're worried about–and I think we don't know what's going to happen– but, take magazines. Right now, magazines is largely a digital business. The bulk of our revenue in magazines comes from digital advertising. How do people find our magazines? They go online, they might search: what are the best air fryers? They click a link, it goes to the Good Housekeeping website. Now they're looking at our website, we're getting revenue from advertisers because you are seeing advertisements on the website and then if you go ahead and click on the link, to Amazon or whatever, to buy the air fryer, we're getting revenue from that. Those are our two biggest pieces of revenue. Now, when you go on to Google and search for one of the best air fryers, you're going to get a blurb at the top that synthesizes it for you. You're never going to come to our page. So we're facing, in magazines in particular, some really existential threats around two of our biggest revenue streams and how we are going to accommodate for that. 

At the same time, of course, all of the different–and you see these in the lawsuits–chatbots or LLMs are using our content to create the answer. So it is an interesting question around how we square that circle, because there's a lot that's unknown, and some of it will be legal, I'm sure, some of it will be other revenue streams, but we're really working that out. 

Another business that we have, which is very different, is Fitch Ratings. Fitch Ratings is a credit ratings agency. They compete with Moody's and Standard and Poor's. If you think about what those analysts do, they're just combing through tons of information–much of it publicly available–to assess the risk of purchasing a bond from a particular company. That, theoretically, could one day all be done automatically. Why come to a ratings agency? Now, at the moment, you have to come to a ratings agency because it's highly regulated and that's required. But in the future, will investors need that? Or will they be able to determine those things on their own? So I think there are some really big questions our businesses are struggling with, and so much of it is still unknown.

Das: And that existential piece, I think, is an interesting one. Tina, you're in a very different industry. So what are you seeing at Schneider Electric and how are you feeling? What's been the impact on the company and some of the roles needed and skills? 

Tina: Can you guys hear me? Okay, just testing.

So hi, everyone. I work for Schneider Electric. We are an almost 200-year-old company headquartered in Paris, and it is an industrial tech company, as we call it. We are basically in energy management: in terms of automation and digitalization, anything a plant, a data center–that's a big part of our business–or a factory is using to manage its energy efficiently.

So, our whole mantra is: energy access is a human right. And it's a distributed workforce: about 50,000 people in Asia, 50,000 in Europe, 50,000 in the Americas. Part of what we do is really try to make sure, at the end of the day, whether it's your home or whether it's a plant or whatnot, are you using energy in the most effective way? With electricity being one of those game-changing technologies we talked about in the morning as the most efficient vector. 

So, back to your question, part of what we're interested in, especially for our customers, is: how do you create a more sustainable energy landscape? And what is the role of digital? And now, 10 times, 20 times, 100 times over, what is AI–even generative AI–doing? A lot of it is about managing the consumption of energy. So how can we be more efficient that way? It's a productivity gain. And we see a lot of interesting opportunities with AI, as well as generative AI. And then, also, what is the energy mix? So how do we also shift toward more sustainable sources, like renewables, and how do we do that? For me, in the talent and diversity space, that comes down to a very pragmatic question that all of us, I think, are grappling with: what are the most pragmatic use cases when it comes to making sure our workforce is equipped to serve our customers and also the broader sustainability goals that we have for society at large?

The last thing I'll quickly say–and we'll get into use cases–is that one of the things most interesting to me is that a couple of the panelists and some of the speakers talked about inclusion and equity. And that, in the face of generative AI–wearing my DEI hat–is super fascinating. So really, amid all of this accelerated technology, how do we make sure no one is left behind, just like with energy access? How do we make sure the data is as good as we can make it? We are all struggling with that, as you alluded to, especially around biases. And then how do you make sure everyone has access to that transition to a more AI-first world?

Das: We have some great themes here, I think, to talk about. There's this existential threat, I think, that a lot of people are starting to feel. As a content producer myself, I know there's sometimes that feeling of: back against the wall, I need to change, or what's going to happen next? We have this idea of, how do you bring everybody along within that environment? And then you have the fact that for some businesses it is existential, and others have huge opportunities. Within all of that change, and how fast everything is moving, where are you now with understanding? Do you know what skills and jobs you need in the future? Are you just figuring that out? Where are you with that question?

Tina, you highlighted it as something that's being grappled with. 

Tina: Maybe I'll start and then have Chris and Rachel come in. For us, the answer is mixed, quite honestly. As I told a couple of folks, it's one of the things that's keeping me up at night–and Anna spoke eloquently about skills, what it is and what it's not. At Schneider Electric, we are embarking–and it's already been a year and a half in the journey–on a major revamp of our skills ambition. So, we're actually not even starting with technology. We're starting with the whole job and skills and career architecture. For those of you who have been in this space for a long time, you know how painful that can be. So we are redefining our job architecture: more outside-in, more market data, and at the granular skills level. We've just engaged, in the last couple weeks, a technology partner to build the end-to-end user interface. And for us, it's really starting with the most critical skills, which I think will be familiar to all of us: technical skills for our R&D area, especially around engineering and digital AI–though probably even more around software development for us–and certain human skills. This is where we're trying to codify in the system, in the user experience, where people are, what the gaps are, and how you have the pull-through when it comes to truly upskilling people at scale.

And we are 150,000 people, like I said, so every year about 15 percent turn over through hiring. But the 85 percent–that's the mighty majority of folks that we very much want to be ambitious about and focused on, in terms of how to support them in their upskilling.

Rachel: Yeah, I don't think we know what skills we're going to need or what the roles are going to be, because so much of it is still unknown in terms of what our future business model looks like. You know, I think what I'm struck by right now is that the most important skill at the moment seems to be curiosity. And what I love about this, and living through this moment, is in some ways how egalitarian it is, because nobody has the answers, and so that means the answer can come from anyone.

At Hearst, one of our, you know, new folk heroes is this guy named Mike McCarthy, who was hired a year ago as a salesman in our Connecticut newspapers. And, just like Jennifer was describing, when the ability to create your own GPTs launched, he was one of the first people to just dig in and start playing around. He created a whole raft of GPTs. You know, one is the skeptical buyer GPT, so a salesperson can go on and practice their pitch. Another is the deck creator GPT. He has like 10 different GPTs just for newspaper sales. And he started playing with them on his own, he got his team to use them, and all of a sudden, you know, other people across our newspapers business are picking them up. And now he has a new job: he's the head of AI for newspapers. So, you know, just thinking about it: would you have picked a first-year salesman from our Connecticut newspapers as the future leader of our newspapers' AI efforts? I never would have. If I was looking for the skills for that role, I wouldn't have thought, let me go find a guy who's really great at sales in Connecticut. But because we were open to it coming from anywhere, and because he was curious, he was able to take that on. I just think it's really exciting, and I think what I'm trying to check my own biases on is being too prescriptive on what the skills are that we need, because I think we're still figuring it out.

Chris: Rachel, I'd add on to curiosity as an important skill–I'd definitely agree with that. Two things I would add and put at kind of the same level. One is critical thinking. Because with curiosity, you can ask the questions–you have the sort of courage and the compulsion to ask the questions. But there are so many different places where you can get answers back now, and as we know from leveraging AI, a lot of those answers can be really, really wrong and hallucinated. I mentioned before, one of our biggest businesses is in the legal industry, and, you know, the cautionary tale that keeps going around is the couple of lawyers who submitted case histories that were completely fabricated. And, you know, they got their slap on the wrist. You need that critical thinking–we all need that critical thinking in the world these days, but especially in trying to leverage AI.

The other thing that I would add into that is, I guess I might call it imagination. Because I think the beauty of the promise, the potential of an AI solution is that it is giving us capabilities and skill sets, both as organizations and as individuals, that we might not otherwise have had or had access to. And as a result, you can imagine and invent both new avenues to go down and different ways of doing things. And, you know, to loop back to your point around trying to figure out how things should work and where the business is headed: I think that is the first thing you need to do in order to figure out what skills you ultimately need to develop. I don't think it really works the other way around. Where's your business going? How's it going to work, or how could it work? You may not know, but you can have some scenarios, and then: okay, what skills do we need, or will we need, in order to be able to either explore or deliver on that promise? I think we all either are or should be living in that space right now. We certainly are at Thomson Reuters.

Then, Rachel, the other thing I would share is, you know, the example that you just gave of the salesperson who may not have been the stereotypical person to put on top of that project. We did a similar thing with somebody in our news business as well: taking them out of their job, having them spend time with our Thomson Reuters Labs to really understand the potential of what AI could do, and then unleashing them on the business. And, while again it may not have been the stereotypical background, think about those different dimensions. Curiosity: she definitely had that, to even be up for this. Critical thinking: I do think that's a hallmark of folks who have operated in the news industry; whether it was AI or just human beings telling them stuff, they have to really apply that filter and lens to what they're hearing to try to get at the truth. And then the imagination piece: what better person to dream things up than somebody who's been living in one place but has the expertise of, hey, what was painful, or what has been painful, and what could be better?

Das: Yeah. And it's interesting, as we're highlighting these skills, the three big ones I heard were curiosity, critical thinking, and imagination. And then there's even the word upskilling. And the big word that comes to mind for me is this idea of learning. One of my favorite business quotes is from a guy named Arie de Geus, who said that learning faster than the competition is the only sustainable competitive advantage, especially in these sort of fast-moving times.

So taking that kind of learning lens, how have you approached designing the learning programs necessary? I know you all have different initiatives around this. Talk a little about those initiatives. Like, how are you creating that space for learning so that you can learn faster than the competition, so you can adapt? And I think earlier somebody highlighted the innovator’s dilemma. Like, how do you get out of that? And what are you doing? 

Chris: One of the things that we've done is actually to go right at what you just mentioned, of creating the space. So, you know, I don't know that this is completely novel. I think many companies have taken a stab at doing things like having a focused or dedicated or regular learning day or learning week or learning month. And it sounds like, listen, we should always be learning, and therefore why do you need to dedicate just a day? Having said that, there's a lot of stuff we should be doing, you know? And it's just: how do you demonstrate to your organization that, hey, this is really important, that it is a priority at the level of the other things that you are doing, and that we are, in a concerted way, together going to block the time?

Chris: So that's what we've done. Every quarter, we have a learning day. We instituted that in early 2023. Probably not coincidentally, the first learning day was dedicated to AI. So we had sessions on AI and LLM 101, AI in our products, AI in our internal operations, and then a workshop on AI in your job. We ran multiple versions of those sessions across time zones so people could get the benefit of live sessions.

And, again, that demonstrated to people that they should make time for learning. It also became a key pillar of our approach to getting our workforce spun up on AI. We tried to go broad in educating–this, again, was an example of that education. Then enabling: we put, effectively, a privacy-protected version of ChatGPT in everybody's hands. And then experimenting, both with that solution and through hackathons, etc., to show people that not only was it okay to experiment, but that it was valued.

And then the other piece of that AI learning was actually focused, right? So we had a bit of a SWAT team that would go in with individual organizations and their leaders to help diagnose where the biggest business needs–and therefore potential AI use cases–might be, and figure out how, technically, operationally, procedurally, and from a human standpoint, we might help them get over the hump to actually enable their workforce to realize those use cases.

So that's kind of the approach that we took. And there was specialized, function-specific learning that went along with those focused approaches.

Tina: We're really trying to move away–and we're not really there yet–from learning hours. We come from a very KPI-driven, somewhat traditional learning environment around the world. When I joined the company eight years ago, it was like this obsession: tracking every little thing, you had to mark it in your LMS. So we are similar. And I think we've borrowed your phrase as well, because our campaign is around creating time and space to learn. Now, the thing is, while freeing that up and encouraging people to really upskill, we are at the same time very focused on codifying that data.

So that's why, back to the skills transformation, we need a system to do that. And sometimes it sounds a little harsh, but we're also saying: this is about us staying ahead of the competition, growing the business. But you grow yourself, you grow the business. So the whole expectation is: it's your job. It's your job to upskill and stay relevant. The company has to do its part to provide the great user experience, the great content. But that mindset shift is something we're trying to message more and more. It's not easy. Everyone's still used to: How many trainings do I take? Which ones are mandatory? How many hours? And that broader shift is something we're trying to implement.

Rachel: So we're kind of a hot mess when it comes to, like, learning culture. Hearst is interesting. Whenever I go to these panels, I always feel, like, an inferiority complex creep in, but–

Tina: No, it's like collective commiseration. 

Rachel: We're incredibly decentralized, right? I mean, Hearst is incredibly decentralized. We have six very different businesses. And, from the center, we tend to provide carrots, not sticks, right? So here's stuff. If you want it, business, you're welcome to use it. And if not, that's okay. Our feelings aren't hurt. This is one area where our CEO kind of came down and said that we need people to be familiar with this.

So in terms of AI, I'm thinking about it as 101 and 201. 101 is mandatory, and 201 is optional. But in terms of the mandatory, we wanted to launch training to make sure everyone was familiar with the tools. Our tech team had invested a lot–and I know many businesses have–in creating proprietary versions of the OpenAI and Anthropic tools, ChatGPT, Claude, all of those.

But we weren't seeing a lot of uptake. The curious, the early adopters were using it, but over 30% had never ever even tried it once, and the vast majority had gone and looked at it and never gone back. And so we wanted to force people to do at least something, but it had to be something of quality and something that felt relevant.

And so we invested in having live learning sessions. Everyone had to go to a live session, virtual live session. And it was customized based on your job function and your business. And so we had 17 different versions of the training, and we delivered the training over 100 times. We got 8,000 people to go through it.

The scores for the training were pretty high. They weren't out of the park, but, you know, my L&D leader kept getting very upset by that. I'm like, look, we dragged some people to this, so of course they’re not gonna say it was fantastic. But, in general, it got pretty good feedback, and it was customized, as I mentioned, based on what they do. So if you're a salesperson, you're going to go and see use cases for how to use this in sales. If you're in HR, the use cases were in HR. If you're in content and news creation, it's going to be use cases relevant to that. 

That group, by the way, was our lowest scoring, both in terms of value for time spent and also propensity to use genAI in the future. Content creators, based on what I was saying earlier, are very skeptical. They're worried that this is going to replace their jobs, and that this is going to downgrade the quality of what we do. And so, you know, seeing that, having them go through the training, getting the feedback, hearing what they're saying has actually been really helpful in pulling together our future communications on that.

So, that was the 101. In terms of 201, we have genAI champions, we have our first Tech Academy, and we have a TechNext Conference, where we bring in external speakers like Ethan Mollick. So we're trying to keep it in the water so that people can keep getting smarter. But for us, the big unlock was making sure everyone at least had that first exposure, and then they could kind of decide where to take it from there.

Das: So we've come across this idea a little bit of measuring success, right? How do you measure success with upskilling, and how do you measure success on upskilling when you're trying to figure out still what those skills are? So I'd love to hear a little bit about, just if we could dig in a little bit more on what are the metrics and the measurements and how you're thinking about that.

Rachel: I mean, for us–and again, this is probably pretty basic–right now, the measure is people's usage of the tools and active projects under discussion that use genAI. So, if you look before and after we launched that training program, we saw a 175% increase in the number of users using our internal ChatGPT. And we saw a 200% increase in the number of projects under active discussion for development. So, for us, that's how we're measuring it at the moment, right? Eventually, it'll be actual business results. But for right now, people engaging is our first horizon.

Tina: I think, for us, it's simply breaking down maybe the soft or human skills versus the technical and digital skills. On the latter, maybe we're overengineering it, as Schneider typically does, but we are going hardcore into codification, certification, levels. What's helped me is that it's allowed me to make more of a business case to also, frankly, outsource a little to go faster–going to, like, a Coursera or a platform where it's fairly structured and you can cater it by certain domains. Because we have this habit of wanting to create and build, because we think we know it all, and that takes up a lot of time as well. So on the technical side we are super structured about certain domains and job families, where you have to be, and some of it's very compliance driven.

Chris: We are having very active debates right now about this very question and the different metrics, including the activity and usage metrics that, Rachel, you just shared. And then trying to be more regimented about what do you need to actually have developed, and have you spent the time to develop it in these specific functions, et cetera.

Those are certainly some of the things that we have been measuring, but we are still talking about it, because we are trying to figure out: what are our top two or top three things, right? We all have, like, hundreds of metrics that we look at. One thing that we are talking about is trying to get past the activity, kind of to your point, and thinking about the outcomes. And one set of outcomes, from the employee or colleague standpoint that we've talked about, kind of gets at, I think, a core belief. Like, yeah, we talk about skills all the time. We, literally, we in this room talk about skills all the time. But to any individual employee who might be thinking about themselves, I don't think skills are the thing. They can think about skills, but they think about skills as a means to an end. And I think that the end is often near-term and long-term career growth, career opportunities. Having a job that I like or am excited about. Having things that I can go to, or work toward, that I might be even more excited about in the future.

So in that vein, the engagement survey that we put out has, like everybody else's survey does, statements that we ask the degree to which you agree or disagree on it. And statements around, I believe I'm growing my career at Thomson Reuters, or I believe I'm developing the skills that I need to be successful, to be able to do my job or unlock future opportunities.

We're looking at those measures, and we're thinking about those measures very seriously. Those have improved, by the way, over the last couple of years, and we want to keep that going. So we're thinking about that from a colleague standpoint. From the business standpoint: I don't know how many of y'all talk about workforce planning all the time, or strategic workforce plans. We certainly do. And if you think about it theoretically, there's a case to be made for delivery against your really great, crystal-ball-defined strategic workforce plans, in terms of the skills that you will need. If we could somehow get to those plans in a good way, and then assess the degree to which you've developed those skills, so that you're hitting on your plan, you're delivering on your plan–that would mean you have the talent that you need, now and in the future, to address business needs. That would be really great. You know, are you 90% of the way there? Are you 50% of the way there? For a lot of the stuff I just said, we don't have those things solid, but, theoretically, if you did, that feels like something that could be really great and on target with delivering for the business.

Tina: It's not perfect, but we're doubling down so much on skills. And listening to you guys and the audience–if I'm going the wrong way, you need to tell me, and I need to pivot, because I'm like, oh my god. But one quick thing that Anna said really resonated with me, also. I totally agree, Chris, that skills aren't the end-all, be-all. It's the growth and the career evolution of employees; that's super important. When we make talent decisions, we actually have that sequence, meaning we do start with skills and qualifications to be able to do the job. And then–and this is small scale, maybe the top 1,000 jobs–we look at two more factors. So you look at skills, and then we look at your preferences, a little bit more psychometric. And then we look at strategy. So you may have skills where you're a really strong communicator, and then, in your preferences, you're a huge introvert, like myself. And then you look at the strategy: Tina might be good at communications–she's a strong [communicator], but she's a huge introvert–yet what does she do strategically? What is she actually doing about it? We're testing that formula, and it works pretty well. That's back to the holistic, beyond skills: how do you really grow a talent and assess a talent?

Das: So I know we're coming up at time, but I have to get one last question in here because you just teed it up so beautifully, which is: You know, we started the day talking about this great promise of AI to really personalize knowledge to each of us if it has the context it needs. And so, even though AI is what we're trying to adapt to, it's also this really powerful tool that can help us adapt. And I'd love to just hear how are you thinking about that, and how are you thinking about an AI or an AI coach as a tool for learning and that sort of upskilling and adaptability.

Rachel: I mean, whether it's a coaching tool or AI in general, AI is great at teaching things that it already knows. I see this personally with my kids, right? You can go on and generate quizzes. My son had to read the first 60 pages of Fahrenheit 451, so I asked for six questions that would test whether my son had actually read those first pages. Worked great, and he had. So I think it is great for reinforcing things you already know, right? It can be used either in this coaching context or even just: I want to test my ability to articulate a philosophy for this or an approach to that. What are some questions I might get asked? What are some things I should be thinking about? So I think the tools are already out there that can help you hold yourself accountable in your own learning journey, whether they're customized for that purpose or not. A lot of them are just generally good at it. So, you know, I haven't thought yet about how to formally incorporate it, but something we tell people is: once you've learned something, those same tools you just learned with can now help you to retain that learning. And you should be thinking about how you do that on a regular basis. So that's just what I am thinking.

Chris: I'll add on to that: I feel like with most problems–most business problems or situational problems–the answer is usually within the person. It's usually within you, and you just need something to help you pull that out. Most of the time, not all the time; sometimes I have no idea, and the answer is not in me. A great coach can help with that, and I think an AI coaching solution can help with that. And then the other thing is: we don't scale, you know? What HR learning organization here is built to scale anything that remotely looks like one-to-one, at least for most of the population, right? And so part of the beauty of this is, it can scale and it can personalize.

Tina: For me, just to add: I think of AI, especially generative AI, as augmentation and acceleration. And nothing replaces the human touch. So high tech and high touch go well together, and the choices we make will have big implications.

I, myself, have been using Nadia for about half a year now, and we're piloting it in our organization. And it's going quite well. I shared this with Parker, but, at the same time, my chief AI officer has been on my case, going, “Pay attention to this, da da da da.” They were super nervous. But the way we position it, again, is to augment, to accelerate–like a check-in. As a leader, I myself have found that super useful. So, I'm positive, cautiously optimistic.

Das: That is wonderful, and, you know, thank you again. I feel like we could take quite a few more questions on this, but, for the sake of time, I'm going to let everybody get to their break.

First though, I want to just give a huge round of applause for Tina, Rachel, and Chris. Thank you so much.

Upskilling an AI Workforce

AI is already changing the jobs we do and how we do them. It’s also one of the best tools we have to navigate the changes ahead. In this panel from Valence's AI & the Workforce Summit, HR leaders Rachel Kay (CHRO, Hearst),  Chris Louie (Head of Talent, Thomson Reuters), and Tina Mylon (Chief Talent and Diversity Officer, Schneider Electric) share how they're thinking about upskilling their workforces for the jobs of tomorrow.

Rachel Kay

CHRO, Hearst

Chris Louie

Head of Talent, Thomson Reuters

Tina Mylon

Chief Talent and Diversity Officer, Schneider Electric

Game-Changing Coaching

Author of The Digital Coaching Revolution, Dr. Anna Tavis believes HR professionals should look to performance athletes and their coaches to understand the power of a work coach to drive performance.

Dr. Anna Tavis

Chair of the Human Capital Management Department, NYU

Key Points

Parker: Great. And this is a perfect segue. I'm going to welcome Anna to the stage. Anna is the Chair of the Human Capital Management program at NYU. Anna has literally written the book on the future of digital coaching. We're going to talk a little bit about some provocations: first, sort of, why coaching is so important, and then what are some areas we can look to for a glimpse of what coaching in the workplace might look like. 

So, welcome. Thank you Anna.

Anna: Thank you so much. You know, I want to build on, and maybe challenge a little bit, what Brent said here before, and that was about the future. Not to reach for the cliché, but some of that future is already here in the ecosystem, it just might not be sitting in organizations. And that's the topic of my discussion: where do we see this blend of coaching and technology already working, setting some precedent for what we could be looking for in our organizations? What are the patterns that have already been tried and proven to work? That was the idea I had when I was researching my book, and I want to share it with you. 

Parker: Wonderful. I want to get to that analogy, but tell us a little bit about you and why it was coaching–and the blend and intersection of coaching and technology–why is that so important to you?

Anna: You know, I rebounded into academia. I was an academic, I left, and I spent 15 years in business, in both technology and financial services. And that's where I realized how important it was. I worked in Europe, I worked here, I was head of global talent development, et cetera, and it always felt that coaching fell short as a tool, as a method. It was very effective with some, but even with all of the investment we were making at the top of the organization, I don't think that investment was optimized, because it was a little bit too little, too late. The challenge has always been: how do we make coaching available and accessible to different levels in the organization?

The other thing is, as most of the people in the audience know, it was primarily applied as a corrective tool. Coaching, you know, got a bad reputation. If you tell senior executives on Wall Street, where I worked, that they need to get a coach, that was like a curse. That means that next will be a performance improvement plan or something along those lines.

So I don't think that coaching had the impact it was intended to make in organizations. And obviously, technology first became available to us through the platforms: can we connect? But I remember, in 2019, as I was doing my research on the roots of coaching and technology, there were discussions, whole conferences, around whether coaching would be effective on Skype. Remember that technology? Or on the phone. Could coaching be done over the phone? No way. There was a lot of resistance to that. And so there was this desire to really optimize what this particular method of learning could do, a method that we know goes back to the Greeks as, historically, probably the most effective one: tutoring.

Parker: So we know it works. We know that spreading it, sharing it more widely, democratizing it would be helpful. We're also looking at the future. So where do you look to get a sense of where the future of coaching in the workplace might be? 

Anna: Yes. So as I was looking around: sports. Specifically, professional sports was where I started my research. It started with data. I remember when Moneyball came out. And then when I looked at athletic performance, even the latest Olympics, there's no way those humans could get to where they got without coaching. There's no question, and it's interesting that we have this conversation here about trying to prove to our organizations that coaching works, but no one questions the effectiveness of coaching when it comes to sports. You know, even in little leagues, your basic soccer camp for your three-year-olds, no one is challenging the fact that all of those kids deserve to have a coach. So that barrier is down and has already been established: you need a coach to get to any level of performance in sports. So the question for the next generation of coaching is, how do you get from, you know, the basic training where your parents are doing the coaching, to even mid-level, school-level coaching?

And that's where technology started to come in a long time ago. From video, to basic feedback on the speed of your swing, those types of feedback and data points reached athletes early on. It doesn't mean that you take away the human coach, the pro in your golf game. But technology was accepted from the get-go as soon as those tools became available. And a significant amount of investment has been put into developing a whole ecosystem of startups. In fact, some of the clubs and professional associations started creating their own technology ecosystems to encourage innovation in the types of technologies that can accelerate coaching for professional athletes.

Parker: I think one of the words you used there was feedback.

Anna: Yes.

Parker: And so one of the things that's incredible about the technology is that it can codify that feedback, deliver it well, and if it's technology it can do that at scale. Where does that analogy sort of have a strong parallel with work and where is work a little different potentially than the sports world?

Anna: Everyone here on the stage who talked about how they apply these tools talked about performance and productivity. And that's where the Venn diagram overlaps between athletic performance and performance on the job. At least in the broader definitions of performance, that's what it is.

So I think there is a lot of similarity in what we expect from either an athlete or an employee. And, you know, I've written and done a ton of research on performance management, and one of the main headaches and failures of current performance management systems is the inability to provide ongoing feedback. So, in fact, whether you look at the athlete or the employee, it's the frequency of the conversation with a manager, feedback on performance, just in time, in the flow of work, that makes a difference in the ultimate output of that particular employee. And that's where organizations started to, kind of, manually mandate managers to have frequent conversations. We moved from once a year to quarterly. And there was a whole renaming revolution around, you know, check-ins and other types of language. Imagine that, just like with an athlete, you could provide feedback pretty much on a daily basis. Obviously, there's no human capacity for that from the manager's perspective. A lot of research has been done on managers: the role of managers, the importance of managers, and the conflict in the design of the role. The manager has to produce and give feedback. So it was really a catch-22 situation in companies around performance management.

So, imagine that you would have some way of providing that daily feedback that could be incremental and building up to the final performance appraisal, or whatever it's called in your organization, so it's actually built on pretty much almost daily feedback. And I think that that is the guarantee that, just like with any athlete, their performance is going to improve. 

Parker: I think one of the interesting things, as we explore the idea of AI, is that sports professionals can get feedback as they practice, so they'll do a lot of practice before they actually perform. Whereas managers and leaders, I mean, it's sort of constantly on the job and AI coaching can actually give them that feedback in an extraordinarily safe way in a practice environment so that they might feel the confidence and have the skills to put it in practice in the real world. 

Anna: And I really want to emphasize that, because we've done some research on using these feedback AI tools. People feel a lot more comfortable and psychologically safe in an environment where AI is providing this very basic feedback, vis-a-vis your manager. Because there's a status difference and, you know, that's not necessarily always the best, including the fact that managers don't have the capacity to do that. But it's a much safer situation when you can get objective feedback on how you are performing from an AI system.

Parker: There's a quick story that I often share with CHROs. Maybe I'll do it here in this room. People say, well, maybe the manager should be the coach and so, you know, you should just be able to ask your manager. So let me ask a quick question. Quick poll here, how many people here have had a manager at some point in their career that they've had trouble working with? Hands up if that's the case. I can't see, but I can imagine. Okay, how many people here now have someone on their team who's having trouble working with their manager? It's a bit of a trick question. But yes, we all have, we don't have a perfectly safe environment and so this AI coaching is actually in some ways way safer for a lot of people to be able to have their first draft of things.

You and I have talked about a couple of provocative ideas and I know that that's really interesting for people to sort of chew on. One of them is around skills. And you're not necessarily–I don't want to put words in your mouth–but the current path of skills might not be the one that you think is accurate. Can you share a little more about that? 

Anna: I think we all agree that skill is just a very basic, foundational sort of element, a building block of what performance really represents. And those of us working in organizations, I think we've heard a lot about the context, you know, psychological safety, the team dynamics, there are so many elements that contribute to that ultimate performance. Because skill doesn't guarantee performance. There are so many different elements that need to contribute to that perfect outcome that we're looking for. But the ability for us to measure what that ambient context is, oftentimes, has been very limited. Speaking about data. 

Skills, yes, we can infer based on how fast you type. That's where it all started, et cetera. So I think we're going to graduate from skills, with the help of AI, and we'll be able to contextualize performance. And maybe we need new language. I think "skills" has its own baggage because we've been using it for so long. If we're able to have all of that wraparound context in addition to identifying a very specific, granular skill, we're going to get closer to actually identifying what it takes–if performance is our goal, which it is–to actually perform at the level of excellence and competency that is required of a job.

Parker: I just think that's such an important insight because skills could be too one-dimensional. And as you say, a skill in one environment might produce a performance outcome and that same skill in a second environment might not produce a performance outcome. If we're in a world where we can understand that context, that is a second dimension to the single dimension of skills that is just so important to try to track. 

Anna: I want to bring the sports analogy, again. All of those Olympic athletes that we admire, they are not thinking about their skills when they are competing for the gold or the silver. They're thinking about mental acuity, they're thinking about visualization. There are so many different elements that they worry about that skill is just automated by that point through the practice, et cetera, et cetera. And I think that’s the same thing we are going to see in the workplace, where we're going to learn a lot of skills are going to be automated through AI and delivered to us. So what will be required is this higher-level, working at the top of the license. Working at the top of your human ability is where we will need to compete. 

Parker: I just love that because I think that ability to sort of move up that, you know, that scale and being able to bring the best parts of who we are in the world of transformation that we're all going to experience, that's so important. 

So, really appreciate the provocations for the audience and thank you for joining us today, Anna.

AI Coaching Early Use Cases with Delta, Experian, and ADI

AI will change every workplace, but where does that change start? In this panel from Valence's AI & the Workforce Summit, HR leaders Tim Gregory (Managing Director, HR Innovation & Tech at Delta), Lesley Wilkinson (Chief Talent Officer at Experian), and Jennifer Carpenter (Global Head of Talent at Analog Devices) reveal how leading organizations are testing and experimenting with AI, the most valuable starting use cases, and how to expand from initial use cases to effective AI at scale.

Tim Gregory

Director, HR Innovation & Tech, Delta

Lesley Wilkinson

Chief Talent Officer, Experian

Jennifer Carpenter

Global Head of Talent, Analog Devices

Key Points

Parker: Terrific. I think Tim is somewhere and will be joining. We're going to be talking about AI coaching. I think, you know, we'll start with the coaching bit and the investment in managers and why it is so important to make investments in managers. We'll just do a quick go around to understand how you and your companies are thinking about that use case.

Lesley, I will start with you because, as far as I know, you have the most exciting news that's fresh off the press. Do you want to share with the room what that news is? I think it fits in nicely with this coaching thing. 

Lesley: Thank you for giving me the opportunity to say it out loud. I've been saying it out loud all morning. We just found out this morning that we're in the top 25 great places to work in the world. So, the top 25, just found that out. 

It feels good because our philosophy is about people first. It feels like the right reward for being a people-first organization. So if your question is about why would we invest in something like coaching, I guess it's just because we know the potential that coaching has to unlock performance. It unlocks performance and it unlocks human potential. Why not give that to everybody? Our job is to unlock performance and potential and yet, we've given it, historically, to such small groups of people, it's been cost-restrictive, it's been time-restrictive, and it's been based on that really messy human relationship thing. So if you can find another way of doing that, then why not? 

Parker: Jennifer? 

Jennifer: Yes, hi. Congratulations. 

Lesley: Thank you.

Jennifer: I think, just to build off what you just said, when we built the business case at Analog, I immediately went to equipping managers. Then I thought to myself, am I doing an actual disservice by thinking in that narrow a business case? Because when you look at the representation of our management, they're heavily based in North America, largely a male audience, et cetera. So we widened our business case to 40% managers, 60% individual contributors. I'm very happy to say the invitee list had 50-50 gender representation. So I think it's really important. We're going to study not only the adoption patterns–and I have a little bit more data to share about the adoption patterns–but also longitudinally understand how performance is being impacted, and so forth. Something I would say to all the talent leaders in the room: when you are thinking about your early adoption use cases, make sure you're not taking too narrow a view. Democratize access to these types of products, because we believe very wholeheartedly that it will ultimately help improve individuals' performance and the performance of the company. I think we all have a really deep responsibility to think very broadly about how we are inviting people to be early adopters in these things.

Parker: Tim, how about you at Delta? What's the business case for investing in leaders and managers? 

Tim: For us, the front line, where employees engage with our customers, that's where it's all won. We put a great deal of effort into making sure that those experiences are phenomenal. We want to make certain that the employee experience ultimately drives the customer experience. So for us, it's really self-evident. We've got to really focus on making sure that those managers and our frontline leaders are delivering those moments that delight our customers. 

Parker: Wonderful. Jennifer, you're briefly mentioning, sort of, your pilot. I know that you're just in the design phases, but you've done a really thoughtful job–as you said–about who are the right people and you want to set that up so you can measure ROI, you can measure uptake, usage, feedback. Can you share a little bit more about how you design that with the room today?

Jennifer: Sure. And I'll tell you exactly how I was thinking about it. Did your mother ever tell you when you make an assumption, it makes an ass out of you and me?  So I thought we can't make assumptions. So instead of being an ass, why don't we ask? We really were trying to be thoughtful and I did make a mistake, I did make an ass out of myself. In some of our early pilots, we just gave the generative AI tools to people. And crazy, some people didn't want it. There was no parade being thrown for the generative AI product that we just kind of shoved down their throats.

With Nadia, what we're doing instead–and I really recommend anybody do this–is you invite with a very delightful proposition and then you allow people to opt in and opt out. What we found is we had 1600 people opt in, it's about half of who we invited. Of that, about half responded, about 81% said they’d love to, about 18%, 19% said they did not. But I now know why they did not. 

They also said the reason they opted out is they didn't have time. Now, I don't know if I believe them in that because I also kind of embedded another research question in the mix. Only 40% of them agreed that they're confident in their ability to use AI tools. So, do they not have time? Or are they more reticent because they're not sure? That tells me a lot about adoption. There's going to be people who are more a passenger than a pilot on the plane. Who are those passengers and how do we give them both the optimism that it can improve productivity and quality and the agency that they can do this, that they can be effective. 

Parker: Was there a difference in the percent who were confident in their use of AI for those who accepted? 

Jennifer: Yeah, it was crazy. Get a load of this. Of the people who opted in, 75% of them agreed with the statement that I have confidence that I can use this thing. The people who opted out only agreed that they have confidence in their ability 40% of the time.

Parker: That's quite the jump. So it's probably a pretty good explanatory variable, with time being the excuse. I think this is what we're going to see, story after story, not everyone is going to adopt it with the same level of enthusiasm and the thoughtfulness with which we design these rollout programs is crucial.

Lesley, you've had a chance now, I think you're in month six or month seven of the rollout of Nadia in Experian. Can you share a little bit more about how you designed–I think–a very thoughtful, curated experience for people who are using her as a coach? 

Lesley: Actually, it's a bit longer than that, Parker. We're almost coming up to a year of our first experiment. We roll everything out through a series of experiments. That's the way we design products, that's the way we design HR products as well. So it's really natural to just go for a series of experiments. Our first experiment was, I'm going to call it “Vanilla Nadia.” Which was, putting out there the coaching solution. We had 100% take-up from the first experiment group within the first three hours of extending the invitation. It's the dream of us learning and development people, it's never happened before. But less repeat take-up. 

So we were wondering why, and one of the problem statements was: does it feel relevant to me, my organization, my job? So, in experiment two, we trained Nadia on our own characteristics of leadership, on our own philosophies and leadership content. Then we started to see the repeat use as well as the immediate take-up. 

So, experiment three, we have our engagement surveys, what happens when our engagement surveys come out, how do our managers get to use that? A shout out for my brilliant colleague, Brad, who's in the audience, who runs our engagement surveys and said, you know, we need Nadia to sit with a manager and explain, and talk, and coach on: what are the next steps?

Then we did the same with performance management. So, can you imagine, you're about to have a performance conversation. You can role play it, you can role play it verbally. That was the next experiment. That's when we started to get really strong ROI. One of the main measures is: do we see an increase in leader effectiveness on Great Place to Work? And yes, the answer is a 5% uptick in leadership effectiveness for the people who use Nadia versus those who don't. Our last experiment is a program offering for all middle managers, the kind of leadership stuff you need to do on a day-to-day basis. The whole program will be based on Nadia.

Parker: I want to double click on that in a moment. But, Tim, I know we're earlier on in our journey, but one of the things that has come up in the conversations that you and I have had is that you're looking at a range of technology options. You're looking at internal, external, current vendors, new potential partners. So you scanned the landscape. What was it that caught your eye about Nadia as Delta began their initial trial and now deployment? 

Tim: Sure. I think, as Bill mentioned very early on, there's only so much capital to go around, and you need to be able to create a competitive position and explain the value of AI. So I can share with your audience how we got our leaders to understand this, and then where, sort of, Nadia and the Valence tool fit into our model. 

It started with having to explain this ML/AI thing that you did at the very beginning there. It was like, hey, Tim, it didn't seem like that long ago we were describing everything as ML and AI. Now it's just AI. What happened to the whole ML thing, and how is this different than that? At Delta, we've been using machine learning for quite a long time, for fuel management, even for weather prediction. In that case, if you can imagine an X-Y graph with a slope on it, and you've got a couple of points on it, you can change the value of those points, and change the slope, to find a better prediction, and you can explain why that particular point changed: you moved it this percentage and the prediction quality got better. You can update that information in real time with the Internet of Things. 

So we've been doing that sort of thing for a very long time. This is fundamentally different with generative AI and the use of these neural networks that can learn in ways that make explainability a little bit difficult sometimes. The first thing was just to explain that, just like you did for your audience here. Then what we did is, we took this idea of learning and its ability to learn. You know, Lesley, as you mentioned, grounding it in your own information, we went down a similar sort of path. But we explained the process; we created four categories of these generative AI use case scenarios. The first one was really focused on things you could do very quickly to add value. You could say, hey, look, we're doing some stuff in generative AI, but it doesn't know you, it doesn't really understand the organization.

Then in the fourth category, you have stuff that knows you very, very, very well, so we'll talk about that in a second. The first category, we found, there were lots of our existing vendors that you could flip a switch, configure it, and turn it on in your existing environment. You could check the box and you've done something in AI. The value of it is questionable, it doesn't really know your enterprise, it doesn't understand the context of your business. Really, what's happening in a lot of those applications, they're just making calls out to these big frontier models that have been trained on Reddit, and so forth. You can do them very quickly and very easily oftentimes, but the value is minimal.

The next category is where you can actually ground it in the information of your organization. You can do it relatively quickly. The work that we've done with Valence, I think it was a couple months, maybe, from the very first time we had a conversation to the point we had employees in there using it and evaluating it. We put them through Qualtrics so they could evaluate the experience, and so forth. But this category is where it starts to get grounded in the information of your organization. 

The next category is, really, more sophisticated. We're really doing some of those things internally right now, where the model is actually being trained on the data. So we're using Delta IT and we're building some things and there's some opportunities for us to work together down the road. I know you guys got a great direction in your head in that space as well. Then the final category is stuff that we don't really have anything in anywhere in Delta right now, but the most cutting-edge technology that you're seeing with these reasoning models and compound AI, where you have smaller models working together to solve big problems at the enterprise level with an agentic, agent-based technology and all those sorts of things.

That is an extraordinary realm, particularly as the cost of silicon goes down and the compute value gets better and better. So that's kind of how we set it up. We said, here's ML, here's generative AI, here are the categories, and Valence sits right in there: great value, quick to implement, grounded in your enterprise context. It made it real simple for us. 

Parker: One of the themes that I think I'm hearing from the three of you and from other conversations is that there will be a multiplicity of solutions. There is no one solution. It's important, I think as Bill said, to have a portfolio of options. Jennifer and Lesley, I know both of you have really thought about how Nadia as a solution will coexist with Microsoft Copilot.

Jennifer, you've got some data that you've already sort of found in the early days. Can you share more about how you shared Copilot, the uptake there, the use case, and then how you see the link with AI coaching? 

Jennifer: Sure. Parker and I have also talked about, oh, Jennifer, you've been using generative AI for so long. Well, "so long," when you think about it; we're here in November. Just to remind everybody, November 30th, 2022, was when ChatGPT was released. A few days after that, I got in there and, while I was at IBM, it helped me write my team's performance reviews. Then, on November 6th, 2023, OpenAI released the ability for people like us to create GPTs, our own little agents.

So a year after that, I created a GPT to write performance reviews and I shared it with my executive leadership team. I said, hey, just for us, try it out, what did you think? The feedback they gave me was you're too nice. Your feedback assistant is too glowing in their language, funny enough.

Now, this past October–so that's three cycles–in three weeks, I worked with our teams to create a Copilot Studio GPT that we put into Workday. Now, when you think about that hockey stick that Brent was talking about earlier today: a use case of one, just helping me; a use case of maybe six people, just helping six people. In that period of time, we had 22,000 engagements with employees writing their self input and setting goals. That was in the last month. Interestingly enough, because–remember, I never make an ass of myself, I always ask–I asked those people who were using the writing assistant: think about how long it would normally take you to sit down and do the dreaded task of reflecting on your annual contributions and setting your goals. How much time did this thing, which we just produced in three weeks, save you? I was floored that 45% of the people said it saved them about half the time. They got it done in half the time. I was even more surprised that 3% said it saved them 75% of the time. 

But you asked me how this relates back to Nadia. I can track that, because those people sampled a writing assistant and began to use it, they're some of the highest adopters and opt-ins for Nadia, just a few weeks later. So my hypothesis is, the more we introduce these experiments and get people more optimistic and more confident in their ability to use these things and to see value, the more willing they're going to be to try and reap the benefits that we know Nadia can provide. But we have to get over this mindset issue. You know, I love the comments that come through, because I don't just ask statistical questions. I ask them, what else do you want to know? One person who doesn't want to use a writing assistant or a coach said that they don't like the BS bingo generator of AI. Another person doesn't like Nadia because it shares her ex-husband's girlfriend's name. So I get all kinds of insights from the comments. But you have to listen and really understand what it is that is getting in people's way. I'm just thrilled to say that, as we do these little micro experiments, we're, one, helping people save time and improve quality, and, two, getting them more ready for what's next. Because I think that's what all of us, as talent leaders, are responsible for: preparing people and, in some ways, protecting them as well. We're building up this muscle, and all these little experiments are helping us get them into shape, whether they want to go to the gym or not. We're helping them do the reps. 

Parker: And it's so crucial, given the speed, the exponential speed that you talked about, to get them started now, because at some point it might just become too much, too overwhelming, to get them started.

Lesley, I know that, you know, when Nadia was rolled out, it was pre-Copilot, but now Copilot coexists with Nadia. What are some of the lessons that you've drawn from that experience? 

Lesley: Well, actually, I'm just going to–Jennifer, such an interesting example. So maybe phase four of that example is actually Nadia role playing performance conversations, which is what we're experimenting with now. I'm a big fan of Microsoft Copilot–it works fantastically for the kind of example that you've just given–but it can't, right now, role play and give feedback. Unleashing that power–all we've done at the moment is give it to leaders–has really changed the quality of a performance conversation. The next step is to give it to all employees. I think Jennifer's is just a great example, actually.

Parker: Tim, you've got a huge frontline workforce, and this is a frontline workforce that is probably not as familiar with technology as the HQ workers. How does Delta think about ensuring that both parts of the workforce have access to this technology that could be transformative?

Tim: All of the technology that we develop, we really start there and engage where employees can connect back to the organization. So, obviously with the frontline, it's mobile technology. We're very focused on making certain that their voices are heard, particularly in the early stages when we're designing these solutions.

I mentioned earlier when we did the pilot with y'all, we had seen—I said "y'all," I'm actually from New York. I moved down there six years ago and now I've got the "you all" thing. But they are included in the early design phases. We implemented Qualtrics for all the measuring pieces, so we got to hear all the verbatims and then fine-tune it and improve it more. Really, 90% of our entire workforce is out there engaging with customers. So it really is at the heart of all that we do.

Parker: And Jennifer, you've been using AI personally for a while, and you've experienced some of the change in capabilities. If you picture yourself and Analog Devices maybe 12 months in the future, what are some of the hopeful use cases that you could imagine emerging?

Jennifer: When I think about the outcomes of the goals—let's set the intention. One outcome I hope is that we see a direct correlation to increased business performance and revenue growth as a result of this. Because, you know, when you think about this passenger-versus-pilot mindset, one study that was done by Jeff Hancock from Stanford's Social Media Lab—and BetterUp Labs as well—looked across a population of 10,000 workers in 18 industries, and they found those people with a pilot mindset—you know, that high optimism, high agency—were 3.6 times more productive. Now, I've got to trust Stanford, they know what they're doing, though I wonder how they calculated that productivity measurement, but let's say it's more right than wrong. If, 12 months from now, we can help people get into that upper-right-hand quadrant of the pilot mindset, productivity increases, innovation is unlocked, and collaboration with AI will also foster creativity with their colleagues. And I have to believe that's going to improve revenue growth, because it's going to improve performance and hopefully improve engagement and make us a best place to work. So I think if we really harness this right, we're going to get people prepared, we're going to make more pilots, and we're going to be more profitable.

Parker: Lesley, the last example that you shared was around the integration into a leadership program, and I think that's a new use case where we've really only worked with you and Brad and Sophie and many people closely on this. Can you share more about the idea behind it and the early stages of its genesis so far?

Lesley: Yeah, actually, I've only just realized this in that last question you asked and Jennifer's response, on the problem statement. I think one of the opportunities—the problem statement is about the problem of humanness. Rather than worry about how AI can cause issues and take away our humanness, how instead does it help us solve the problems that come with humanness?

What has been making me think is that, you know, the biases that we all carry around with us in the people processes, in our people decision-making, all come together when we have conversations. How many times as HR professionals have we said, or have I said, "Oh, if only the leaders would…"—you know, this is all about the leaders and their relationships. Well, does AI have a space in there to intervene in that relationship? Or to actually take away some of the bias in that human-to-human relationship? We're running a big leadership program next week, which is still face-to-face, so that ability to have retreats is still important. And on the plane over, I was rereading Thinking, Fast and Slow, so it's really got me thinking about this first-order thinking and how we get the hell out of it.

And that, I think, is what we're trying to do in the leadership program, with Nadia. We're trying to replace the problematic bit of humanness with a machine and allow space for human-to-human conversation between people and with their managers. So Nadia will take over some of the basic tooling and some of the space for reflection and then leaders will do the rest.

Parker: I mean, this idea of AI being closer to being human—Diane mentioned it. Thinking, Fast and Slow—interesting that you brought that up. It was one of the pieces I was able to talk to Geoffrey Hinton about: how can we take that analogy and apply it to LLMs, and how do we help them think slow? Because it's hard for us to, but they're able to do so with, you know, the size and the scale.

Tim, you've talked about four different dimensions, and I saw your eyes light up at that fourth dimension, the sort of fourth pillar of what's possible. What are some of the steps that Delta is taking to do the kind of experiments that we've been talking about today, in something that also feels pretty high-stakes?

Tim: Sure. We're really preparing for the future in two ways. One, as we all know, data is the oil of the digital age. We've heard that for quite a long time now. I think there's a new dimension to that in the age of AI, and that's the reward signal. That's another really, really important thing. The machines, as they're initially trained on the data, will present a prediction. They need to be aligned with the culture of your company, and so on. And that's really where these signals come in. So, to the degree that, in our case, a hiring manager is engaging with a prediction of what a particular skill may need to be for a particular job, they can evaluate that. They can give it four stars, thumbs up, thumbs down, or maybe the initial prediction had a hundred words in it and they changed 50 of them. All of that is a signal that is really, really important to improving the quality of the prediction down the road.

So what we're doing now to prepare for the future is really focusing on that data. So we're building some applications that will capture the initial prediction the machine made, store it, take the next value, the preferred value that the human provided, put that into a relational database as well, and generate transactions. So that, ultimately, those input/output pairs, those comparatives, can be used to train the machines. 

Everything we're building, we're really thinking about how we can keep that signal. We think it's going to be a strong competitive advantage over time. This technology is unlike anything else we've seen before. Traditionally, you purchase technology, and then five to six years later it's depreciated and its value falls away. I think Brent showed that sort of hockey stick; it nearly goes vertical with some of these things. As long as you're capturing signal and you've got good-quality data, the value of these assets is going to be extraordinary. I don't think I'd be going too far out on a limb to say that at some point you'll see, even from a Wall Street perspective, the quality of these capabilities within your organization being evaluated. How do you value these assets? In our case, we're building things that know all the skills we need to run our business. What is the value of that as an asset, as opposed to what you traditionally have with a lot of technology? So, a brave new world coming up. But to answer your question, it's really focusing on the data, making sure that we're capturing signal, and making sure people understand the value of that, to prepare for the future.

Parker: Wonderful. Jennifer, you talked two years ago—I mean, you were an early adopter: 1, 6, 22,000. What are the experiments that you're running right now that are sort of in the 1, 6, 12 category, that might explode a year from now?

Jennifer: We're focusing on things that are fairly turnkey, I would argue. So, you talked about Copilot. We have a mini-experiment, about 300 people, that we've been studying very closely on Copilot 365. That's going to ramp to most of our 14,000 employees, I would think, within the next year. That's really just focusing in on core productivity and early adoption—I think getting people accustomed and climbing that, you know, that optimism-and-agency ladder, so to speak.

We also have a growing software and digital capability, an AI capability, within ADI. So we're introducing, obviously, GitHub Copilot and other coding instances and things like that. That's really going to, I think, drive enormous value. So, in the next 12 months, it's around upskilling and readiness and listening, because we always have to adapt. And I think Nadia is going to come in at kind of the softer edge of it: it's not just about productivity, but about your performance. It's a little more personable, perhaps, than driving the productivity dimension so hard.

So I'm really excited to see in, I think it was, the first five days we got 1,600 people to opt in. We do have plans to continue to scale out Nadia even beyond the 1,600. But we want to study and learn so that we're going to make it even better for people for the next wave. 

Parker: I mean, it's funny we hear Diane Gherson talking about, you know, the delight, and we want to make Nadia delightful, and we want to make it a great experience, but we also want to make it a Trojan horse for productivity. We want to help people be better and faster at their job.

Jennifer: Well, people want to be better and faster. 

Parker: They want to be better. They want that. That's what they're excited about. Lesley, what about for you? Are there any experiments? You and I have a lot of chats about sort of the agile mindset in HR and it's obviously working with the results that you're getting and the recognition. What are some of the experiments that are most exciting for you these days? 

Lesley: As an organization, we've been using AI for the last couple of years. So, for new product development, it's super exciting for driving financial inclusion and equal access to products, that gets me really excited. In the people space, for productivity, for the front end of our systems, and in our leadership offerings, we built something called the Leader Exchange, which is our own program, which has AI built into it for search and dashboarding. But it's this space, really, I think it's in the space that Nadia can do that other forms of AI can't do right now, which is to replicate the human dynamic, but remove some of those problematic areas of human dynamic.

I've been thinking a lot about the hockey stick. This is going to be, I think, a much shorter period of time, before we get up into the turn. And then the thing we're really working hard at is how do we create the conditions in the organization and the skills in our people in HR and in the wider organization to constantly see what might be coming around the corner, to understand how we quickly respond. Paying as much attention to that skill set as to the actual experiment on the technology that comes at us. 

Parker: And I think that mindset is just so crucial. It's sort of: try to look around the corner, try to pay attention at, you know, every level of the organization. And then I just really love the reinforcement of the idea of experiments. I mean, that's how we are going to learn in a world that's changing so quickly. Preferably measurable experiments, as we've all talked about. And it's been wonderful to have you share your experiences with the group today.

So, thank you all. Thank you for flying in—you know, none of these are New Yorkers today. They might have had New York roots at some point, but they've all traveled to get here. So thanks, Tim. Thanks, Jennifer. Thanks, Lesley.

AI Coaching Early Use Cases with Delta, Experian, and ADI

AI will change every workplace, but where does that change start? In this panel from Valence's AI & the Workforce Summit, HR leaders Tim Gregory (Managing Director, HR Innovation & Tech at Delta), Lesley Wilkinson (Chief Talent Officer at Experian), and Jennifer Carpenter (Global Head of Talent at Analog Devices) reveal how leading organizations are testing and experimenting with AI, the most valuable starting use cases, and how to expand from initial use cases to effective AI at scale.

Tim Gregory

Director, HR Innovation & Tech, Delta

Lesley Wilkinson

Chief Talent Officer, Experian

Jennifer Carpenter

Global Head of Talent, Analog Devices

Change Management for AI with Citigroup, IBM, and Novartis

No organization is a monolith, and within any company, there are AI enthusiasts and early adopters as well as resisters. How do you bring the entire organization along and shift the most resistant employees from fear to curiosity? HR Leaders Cameron Hedrick (Head of Learning & Culture at Citigroup), Diane Gherson (former CHRO at IBM), and Lisa Naylor (Global Head of Leadership Development at Novartis) share their insights on getting global enterprises ready for the AI era.

Cameron Hedrick

Head of Learning & Culture, Citigroup

Diane Gherson

former CHRO, IBM

Lisa Naylor

Global Head of Leadership Development, Novartis

Key Points

Das Rush: So, Cameron has led talent initiatives at Citi for more than two decades. Currently, he's the head of learning and culture, where he leads the learning and culture teams, including overseeing their deployment of Valence's AI coach, Nadia. Lisa is the global head of leadership development at Novartis and has been an HR leader for 15-plus years now.

Okay, we're not counting, but if we were, maybe. And focused largely in healthcare, and you've had previous roles at AstraZeneca and Siemens Healthcare. And Diane is the former CHRO at IBM and first started using AI in 2011 to predict attrition, and that tool went on to get—or, sorry, not to predict attrition, as a replacement for HR business partners.

And that was a tool that went on to get 1.65 million inquiries a year. And we'll talk a little bit more—there's some even more impressive stats on that. So we're going to get into that, but today, you're a professor of business leadership at Harvard Business School, a board member, and senior advisor to a number of companies, including Kraft Heinz.

So, with those intros, welcome. Thank you for being here. And to kick off, I actually just want to do kind of a quick round robin, because change management can be this term that gets thrown around and can be kind of vague sometimes. So I'd just like to hear a little bit from each of you, like, when you think of change management and AI, what does that actually mean to you, and what do you see as the role for an HR leader?

Cameron, would you want to start and then we'll work our way over here? 

Cameron Hedrick: Sure. Change management in HR, as it relates to AI—I think, first of all, it's contextual to the company you work in, because we all have equal access to most of these technologies in our personal lives, but we do not have equal access to them based on the company in which we work.

So I work at Citi. We are, understandably, cautious about how this works its way in and what we're doing. And so, there is a competition of sorts between your outside life and your inside-the-firm life that we're trying to manage. But the short version of the question is, first of all, we are trying to help people understand what it means generally, in the universe, how to negotiate what it might mean for your work, because people are understandably fearful, somewhat skeptical.

So we're talking about that proactively. And we're just moving these technologies into the firm very carefully in very limited ways. So, we're right at the beginning of our journey of adopting this inside the firm. 

Das: And, Lisa, how about you? 

Lisa Naylor: Well, I mean, maybe I'll say a controversial thing so we can have, like, a debate-y panel, but I mean, I think for me, it's no different than other kinds of change management.

I mean, I really think that when we think about this work, we need to understand what's valuable to people. You know, as practitioners, we can roll out the thing, we can make it available, but making it valuable, value add, is a whole different story. And I think for us, for change management, we really have to think about what is the value for people?

What are they going to do with it? What is it going to be for their work and for them? And then we have to figure out how to bring that to life. 

Das: Wonderfully said. And, Diane, how about you? How are you thinking about change management and AI?

Diane Gherson: I'm going to be controversial too. 

Das: Let's do it. Okay. 

Diane: I think change management's dead. Okay? I think it's out of date. And I think we've learned that through a number of different changes we've just been through. Like, for example, return to work, right? So making a change and expecting to sort of gloss it up and make it good for people afterward isn't cutting it anymore, right? So, I think that the idea that really is working is creating a movement inside the company, around the change you want to make.

And that's certainly what we did at IBM. And we started by just attacking the things that bugged people the most in their employee experience and made it delightful using AI, right? And so, you know, if you were applying for time off, for example, you wouldn't have to go to a website and click on things and, you know, have to remember how many days you had left.

I mean, it would all be there for you, answering your questions, and so forth. Or if you wanted, if you were going to get a mortgage and you needed an employee verification letter, which was awful—sometimes the mortgage rate would go up by the time you got your approvals.

It was all done for you. Do you want us to write the cover letter for you? Blah, blah, blah. I mean, it all did it for you, right? So, it was done in three minutes. And so, that kind of thing made people not just curious, but excited about the promise of AI. So, I think you start with that movement. Then, you know, you've got to deal with issues of concern, like, for example, privacy.

I mean, we were getting a lot of employee data through our digital channels. And learning about engagement, you know, all of those kinds of things. Hotspots, they're useful, but also snooping. So, we had to come clean, right? And we had to explain, here's what we're going to look at, here's what we're not going to look at.

We're not going to look at email. But, you know, if it's an open Slack channel, all hands are in. And we're not looking at people individually, we're looking at trends. We're looking at highs and lows. So we explained what we were doing with their information, which is important to build trust.

The third thing is, we started with areas where we were upping the game of people as opposed to taking their work away, right? And of course, gen AI is just fantastic for that. The study that Brent mentioned, the Harvard study with BCG and others, have proven that there are some areas where it's just incredible, right?

So let's take customer service. 19% improvement in efficiency, but AI is more empathetic than humans. Why? Because we heard from Parker earlier about the amygdala. We get amygdala hijacks when someone's really rude to us, right? No matter who we are, we do. But fortunately, that doesn't happen to gen AI, right?

They can reframe the situation very calmly and deal with that irate customer. Well, that's great. If I'm a customer service agent and I'm being yelled at all day, right? So it actually improves my productivity in a way that actually makes me a little more sane, right? So, again, delighting people with improved productivity is really important.

And then, of course there are areas we've talked about, what Brent talked about, the Venn diagram. But there are areas where you're going to replace people's work. And there, it's really important to involve people. They are the experts on their work. And getting them to feel like, we're going to upskill you, if you want to be upskilled, if not, okay, the job's going to change.

But instead of making people feel like they own their job, have them feel like they own their skills. If you feel like you own your skills, then you want to keep upping them, right? And the job will change, the workflow will change. So, I would say those are sort of the four points around that wheel of change that we worked through at IBM, and it was really quite effective.

Das: It's wonderful, because we've teased out some tensions, but I actually, at least for me, I'm hearing like a lot more agreement, and there's this theme that I feel like is coming out of, fundamentally, change management is taking something that is maybe a pain point to the organization, to employees, and turning into something that's more delightful.

Or, I think the way Brent kind of put it, of like, shaping the future we want, and change management is maybe that process by which we do it. I want to kind of go now, because Cameron and Lisa, I know that you both have some specific AI initiatives you are rolling out, and that, I think there's this tension you have to navigate between the parts of the organization that are kind of resisting.

Whether that's over, you know, security concerns or just employee resistance. And then there's the parts pulling you forward, so I'd love to just hear a little bit about what are your AI initiatives? And what are the things that are going well? And what are those real blockers that you're having to work through?

Cameron: Sure, so just a couple of examples of the initiatives. We've got a generative AI pilot related to writing performance reviews, which language models do exceptionally well. We are using, well, we're talking with Valence to get this product in as well. We are using language models to look into cultural matters and cultural measurements in ways that have heretofore been unavailable to us. So those are some examples of the initiatives.

But as you said, the headwinds are manifold, some of which you've talked about. There are privacy concerns. There are security concerns, and the list is very long. And it's easy for me to be very frustrated with this, but if you work at Citi and you make a mistake in these areas—info security, for example—the world's economy comes grinding to a halt. The consequences are not small.

So we are just treading that middle ground as best we can, right on the knife edge. Because there's true competitive advantage that comes with it as well. But we're not alone in finance trying to navigate that. So those are a couple of examples of the headwinds and the tailwinds.

Lisa: Yeah, I mean, obviously, we put AI in lots of different places, I think, at Novartis, when we look at some of the work we need to do all the way from the science. But, for today, if I just kind of hone in on the things that we're doing with our people and some of this work, we have, I think, the difference between the AI that people know about, the AI that's in the background, and then the AI that they kind of feel.

And so we have various tools that sit in the background, which is really kind of looking at how you engage some of the things that we do with global products and match and other things where it's really helping people think through their careers and their development and grab skills and really curate the way that they engage in a lot of different things, which is where we see a lot of value. 

We are a very large company, and it's easy for people to not understand how to navigate the system. And we find that there are methods in use for AI where it helps people just kind of grab and gather data in ways that maybe they didn't think about.

Really specifically, we're leaning into some work that we're doing with leaders. I mean, if you go around and ask people, are you a great leader? A lot of them are going to go, “of course I am,” but is the reality, are they? I'm not sure, in all of the moments. I mean, I'm very sure, but I'm being kind. So I think, you know, the thing for us is, we're putting tools in to make the work of your team more discussable.

So we have some partnership with Valence where we're really thinking about how do you have people with insight into who they are? How do we then bring that insight into your team? And then now we're starting to lean into things like Nadia and finding people in the moments they need it most. And I think this is kind of that different trigger when we talk about the use and the application of AI.

Do people go out and find it when they need it? Or how do we bring it to them in the moments that they're already challenged with and then really enhance the use? And so this has been the place for us. We've almost gotten kind of a viral-level pickup of some of these tools because people are realizing, “gosh, it was a little bit hard for me to figure out on my own, but with just some of these more discussable, accessible ways of thinking, then now I'm more interested in it.”

And, you know, you get like one interested friend, and then they tell another and really kind of picks up in a way to help our leaders. And maybe in a way that, if we had asked them a few months ago, they wouldn't have articulated they even needed it on their own. 

Cameron: Just one other thing that'll be interesting for this audience, and it builds on your first question and involving people in the journey. But one of the things we're exploring is we're very interested in understanding the skill profile of each individual. Do they have it and how good is it? And everybody in this room has wanted that, and you've asked for it, and you've tried to have them explain it to you, and that is like pushing a rock up a hill. 

But we have the opportunity now to do this passively, to infer these types of things with much greater accuracy and much greater dynamism. And that is going to be terrific, because now we can manage our skill portfolio just like I manage a real estate portfolio or any other asset that we have.

But that is a scary thing for some people, like, what are you going to be using that for? How are you getting that? Is that going to be a performance management thing? So that's the next frontier that we're really thinking a lot about too. 

Diane: That's actually something we did at IBM really early on. We used AI to infer people's skills from their digital footprint.

And we did that around, I want to say, 2011, 2012, and at first we were afraid to ask for validation. We were using sort of skills councils and what does “best” look like, and all that kind of thing. But, actually, over time, we realized that the only thing you can do is ask for validation, and it's actually now up to 96%.

And people are positive about it, but we were scared at first. We thought, “oh my god, what if we got it wrong, and what if we're going to get into endless debates?” It's a whole lot better than the previous version, which, of course, was all manual. But it is really important, and I think it's so important that I actually sit on the board of one of the companies that does skills inference, TechWolf, because I just think it's critical to this whole wave of introducing AI, and there's been, I think—a few companies have tried and not succeeded at doing it.

So it is, I think, just foundational. I totally agree with you, Cameron. 

Lisa: Yeah, I do think there's this—and maybe it goes to where we started on these elements of change management—when we come and talk to people about the thing, right, "here's the thing you're going to have," they're not sure what the value is. For me, this is the place we've seen the unlock: when we can help people understand—hey, when we asked you "how's development going?" or "what does your career look like?" and they're really struggling—we then talk about the outcome instead of the moment, right? So I think this is really where we find the value.

And we're talking to the business and we're talking about next stages. And should we do this and why? We're not talking about, it's cool or everybody else is doing it. What we're talking about is, this is a unique challenge and this is where we think it's going to accelerate something for us, right?

We heard this at the start of our time today, engagement and people and value and belonging and connection. All of those things are kind of opened up with some of these opportunities that we've seen with AI and the tools and the partnerships they create in the organization. Look, I mean, I always say, when I walk in the grocery store and they change it all up, I'm super annoyed that day because I don't know where anything is and it feels uncomfortable and it's new, and then two weeks later I don't notice anymore, right?

And we have some of that too, is this just kind of, how do we get past the initial “newness” or the worry and kind of drive people to the next levels of understanding, and I think that's really the trick we've seen. 

Das: Yeah, I think this is a great frame, and we sort of already in this kind of first half here, started to tease out, fundamentally, and we call this, you know, from fear to curiosity, because you've got all these things that really represent the fear, whether that's the fear on your technical teams around security and privacy, or it's an individual's fear in trying something new that you have to overcome with this, like, momentum of value and these new things that are being unlocked with these new capabilities.

And, like, Lisa, I think you in particular really highlighted this kind of bottom-up adoption or this kind of organic adoption, and I'm guessing a lot of people in the room have had the experience of trying to mandate a top-down initiative, and that rarely goes well for change management versus how easy it can be when individuals see the value, when employees start telling one another.

So how have you, you know, in various initiatives, been able to generate that kind of bottom-up excitement that just makes change management so much easier?

Diane: Well, a Shark Tank kind of contest—so across the whole company, give us some examples of, you know, what you'd like us to work on, what you would like to work on with us with AI, and that started it, I think. I think that the second thing is your point about top-down, that it has to start at the top that we're all learning, right? You can't have know-it-all senior leaders saying, “you guys go do your learning, right? And these skills are important for you.” It's more about, “I'm taking the learning with you.” So our CEO actually held a monthly learning session, and she'd interview different people and ask a lot of questions, a little like Oprah.

But with them, we all went through learning and got certified, whether it was cloud, AI, whatever it was. But the whole idea was we're developing those skills. So very senior leaders would be having town halls and go, “oh my god, that third module of AI was a killer. I failed the test three times, and I wanted to throw my laptop out the window.” And everyone was like, “oh my god, well, if he's doing it, then maybe I should be doing it.” So there's this sense of we're all learning together.

Lisa: There is something, right, about what people talk about, what people are interested in. And, I mean, we have this pleasure in that we are thinking differently about AI in lots of different facets, right?

So we're not only trying to talk about it in one space. We know that there's something special in the service of science and other things. So I do also think there's something in how it gets articulated. Inside of our own company, we often talk about this as: look, this isn't about replacing people. We sometimes say it's like an Iron Man kind of energy: you already have something strong, and then how do you enhance it or accelerate the value?

So, I think this has been something big. I mean, at the most basic level, we're just meeting people in the scaffolding of the problems that they already have. So, we're not trying to find a moment, right? We're saying, we already know this. We hear this from you, and we have this idea that you may want to try.

Look, there's always people in there who are like, no, thanks. You know, I'm not interested. And then we use all of our best skills, a little competition, a little cooperation, a little something to say, well, oh, that's cool. But Joey Schmutz over here thought it was great. So I think it's also about how we talk about it, but I think simpler is better in a lot of ways.

And really figuring out the use case and the value is where we've been spending our energy. But we have, as you said, like some viral pickup on things that have just been about meeting people in those problems. 

Diane: A couple of things though, I just want to say, we did replace people, right? So, it sounds like you didn't.

In our case, we said, look, take payroll. You know, our attrition rate is 14%. We're not going to replace that 14%. You've all got jobs, but the 14% that leave, they're not going to get replaced. So we were very clear about what the situation was, but the jobs are going to change.

You'll need to upskill. And everyone understood what skills they needed to learn in order to now do the higher-level pattern finding instead of the reconciliations that they were doing before. So I think there is a need for clarity around that. I think the second thing is, you’ve got to link learning with career.

So, in our personalized learning system, it would say, congratulations, Diane, you just passed this certification. Do you want to apply for any of these five jobs that have openings? You have a 90% match. And so there was this immediate gratification that you got as you went through your learning that you were now eligible for certain jobs that maybe you'd never even thought about.

Lisa: The value of being honest, though, that's super important, right? Just talking about what we're doing and what we're not doing, because I think it goes to your point about what's new and what's fear, and let's just be open about what we're doing and then how people can find their way in it. 

Das: I want to come back to this idea of simple communication, but, Cameron, I'd love to hear from you.

Cameron: Yeah, listen, I think part of my job is to create the conditions for the things to happen that I hope happen at the firm, which is a very easy sentence to say. But the doing of that is very difficult. This is grossly oversimplified, but I kind of think of it in two buckets: bucket number one is to find the human and predispose the human and equip the human, absent any other trammeling force, to do the thing, right?

So this is overcoming fear, it's skills, and the like. And that's hard enough. But then there's, like, the human in the context of the firm, which is a much different matter. So does everything in your firm, the compensation systems, your rating systems, your promotion systems, does it align with—does it reward the thing that you want?

So very practically, right now at my firm, I can drop the person who's predisposed to innovation and using generative AI in the system, but the system is not built for that human to thrive. So, systems alignment and harmony is the big part of my job, at least on this particular topic. 

Das: Yeah, and I'm being kind of cognizant of time, and I know that you've all had some experience with Nadia, with AI coaches.

What has kind of been the role of that in helping with change management? Because obviously part of an AI coach is, you know, leadership development, work coaching, so how do you think about using something like a tool, like an AI coach, to help bring employees along on that journey, and has that been helpful? 

Cameron: We have not done it yet at Citi. Conceptually, I'm in. In my personal life, I've done it and created my own agents and things like that. But right now, it's a concept. So I don't have applied experience, but I hope to. 

Das: Yeah. Lisa, I know you've had a bit of experience, too. 

Lisa: Yeah, I think we're just getting started and picking up speed, and it's also, for us, we're just looking at this kind of the whole continuum.

What do people need at different times? And we're just holding this belief that, you know, sometimes you have this moment, this need, and it's short. It's the, "oh, I just wish someone could think through this with me," and it's just fleeting, and it kind of gets lost into the rest of your day. And so, we're really trying to plug that in, in the moments that people need it, I think, which is kind of on brand with some of the other things I've said. But we'll have to see you at the next one to talk about where it went wrong and where we really picked it up as part of application.

Diane: And I just have to say, I know Eric Erickson is down there somewhere in your Valence system. And that's where they start, right? It's putting yourself in the shoes of the other person, asking those kinds of questions. I'm sure, I haven't used Valence, but I'm sure that's what it does. Eric Erickson is just brilliant, but not all of us can be as good as him, but Nadia will be.

Das: Yeah, like, sometimes just having that—you know, I think the point you made earlier about not having the amygdala hijacking, having something there that's just going to be a calm presence while you're working through that fear, going to your curiosity. 

I want to come back to a point that you made, though. I think you all made it, but I really heard it in kind of what you were saying—communication just needs to be simple, like you need to help people understand what this is, because that's really the starting point of the shifting. How have you gone about communicating the value of AI or what you're doing both to employees and maybe particularly to the resistant parts of the organization?

Lisa: I mean, we have a pretty nerdy organization and that works on our behalf sometimes. I mean, people are really invested in understanding the value to a lot of different moments and kind of in the work that they're doing. But I think for us, what we're looking at is helping people to understand that we can increase and advance the way that they use their time during the day.

So we already have some of this, right, in partnerships we have with Copilot and Microsoft. People are seeing that in just some of the simple use cases of their work. And of course, the people who have personal application, right? They know and understand what they do. And so, for us, it's just about showing them that, “hey, did you feel a little uncomfortable when you had to sit with your team and you had to figure out how it was going? Well, actually we can make that way more discussable for you. And here's the tool to help you figure that out.” 

And that's some of the partnership that we've already seen with Valence, right? So, for us, it's not a big thing, a big moment, a big brand, a big name. It's just to say, look, we recognize we're in a moment in time of strategy shift, or a challenging something, or the world is hard.

And guess what? As a leader, you have to hold that space. And so, how do we help you figure it out? And so, that's really where we come and we introduce it, is to say to people, “look, you're going to do this thing anyway. And so it can either be a little bit weird and clunky, or we can help you get the best discussion, and here are some of the ways that you can think about it.” 

And so, by doing that and then following some of the data we already have, and just saying, hey, this is what people say, or we nudge in and say, "hey, when we look at the last couple of X and Y data points, this is where you are. So we have this idea and it might help you. Are you interested?" It's, again, just trying to find them right in the moment they need the support.

Das: Anything you want to add there? No? Okay. Great. I'm kind of curious in particular on this idea of sort of the partners you need to work with, whether they're your CTO or your chief data officer, and I know, Cameron, you in particular face these kind of headwinds.

What is the communication like there, and how are you finding, like, the ability to kind of communicate the value even with—and mitigate those risks? 

Cameron: Well, early on, some of you know Ethan Mollick, and we brought him in and talked to our seniors and the board just to show what the possibilities were. So, I think at the most senior levels, people get it, alright?

I really do think they do. I also don't think this is the type of thing that needs to be born, bred, and run inside the CTO office. This is more of an HR thing, in my view, but I'm biased, so you need to discount that a lot. But to your point, like the headwinds for us, anyway, have not really been tech.

Well, it's been info security number one, privacy number two, and just really understanding what this is. What are we going to let loose on our data sets? You know, what vulnerabilities does that create? We're moving through that now, finally. They're not esoteric concerns at all, I mean, in the universe of things, but we're moving from those to some more navigable things. But it's never been selling the promise of it, or the functionality of it, or what it can do. It's just: how can we get it through safely and responsibly?

Das: Yeah. Like finding enterprise-grade tools.

Cameron: That's exactly right. 

Das: We've got time here for one last question. And so, I think the question I want to ask, because this is a fast-moving space, and we're talking about change management and that's really envisioning what you want the future to be. So 12 months from now, if we're back here, we're sitting in these same seats, where do you hope you are with AI in your organization? 

Lisa: Well, I mean, I hope people are telling stories of, it helped me be better at the thing I was doing. And ultimately, that's what we're after in a lot of spaces, whether it's an application in science, whether it's helping us, you know, think through a vacation and leveraging AI.

I think, ultimately, what we're trying to do is find the stories where people say, "I was kind of two or three steps better than the place that I started," and, yeah, I think that's mostly what's in play.

Cameron: This may not seem ambitious enough, but weighing all the factors I have to weigh, it's: what can I do safely, and get to penetrate, you know, in 12 months? I think the performance review use case is good.

I think the culture measurement use case is good. Comparing three or four documents, summarizing meetings, Copilot, that's good. Just get that out at scale in 12 months. That would be a victory. 

Das: And, Diane, I know you're not actively in a role right now, but you are advising. So, where do you hope to see the companies you're working with go from where they are now to 12 months from now?

Diane: Well, look, I mean, I do think it's important that people feel delighted by AI and not intimidated by it. So they have to have had some of those early, delightful experiences, and if they've had that, then we're in good shape. We saw that happen with digitization where, in the consumer world, you know, people were delighted by the fact that Amazon knew who they were, and all this kind of stuff, but their companies were way behind; "dear employee" letters were going out.

And we caught up as corporations, but it took us longer than the consumer platforms. I think it would be such a win for HR, because I agree with you, HR is leading this. It would be such a win for HR to be as delightful, if not more delightful, than the consumer platforms are as an experience.

Das: Oh, I love that as a goal for everybody. Next 12 months, HR can compete with the customer-facing side, and let's see if we can make employee experiences more delightful than customer experiences. That's awesome. Hopefully both are, but I love that little internal competition. Great note to end on. 

Cameron, Lisa, Diane, thank you so much for coming here. I really appreciate it. And with that, if we can get a round of applause.

Change Management for AI with Citigroup, IBM, and Novartis

No organization is a monolith, and within any company, there are AI enthusiasts and early adopters as well as resisters. How do you bring the entire organization along and shift the most resistant employees from fear to curiosity? HR Leaders Cameron Hedrick (Head of Learning & Culture at Citigroup), Diane Gherson (former CHRO at IBM), and Lisa Naylor (Global Head of Leadership Development at Novartis) share their insights on getting global enterprises ready for the AI era.

Cameron Hedrick

Head of Learning & Culture, Citigroup

Diane Gherson

former CHRO, IBM

Lisa Naylor

Global Head of Leadership Development, Novartis

AI Myths

In this condensed version of a popular internal presentation at Microsoft, Brent Hecht, its Director of Applied Science, dispels common AI misconceptions, including why AI isn’t all hype, why AI’s best use isn’t to replace people, and why we should be shaping the future of work instead of trying to predict it.

Brent Hecht

Director of Applied Science, Microsoft

Key Points

Jeff Dalton: Thank you very much for that introduction. It's my honor and privilege to introduce our next speaker, Dr. Brent Hecht. Dr. Hecht is a distinguished colleague of mine. He inspires both my academic research at the University of Edinburgh and the work that we do here at Valence on the applied side.

I first encountered Brent's work from the Future of Work study from Microsoft. Maybe some of you are familiar with it. And that really looked at what we could do from hybrid work and how that was transforming work in the age of COVID-19. More recently, there's been work on looking at the new future of work that's focused on the future of AI.

There was one published last year, and there's also one coming next week, so look out for that. One thing that stood out to me in one of the recent reports was the role of AI in critical thinking: being a provocateur that enhances people's ability to work effectively and to think critically.

And so he's going to be doing that for us today. He's going to play that role of provocateur to help us think about where the future of AI is going for work. I want to give you a little bit about Dr. Hecht. He's a director of applied science at Microsoft Research and an associate professor at Northwestern University, where he leads the People, Space, and Algorithms Research Group, focusing on how gen AI can be a positive influence on society. 

He's a prominent figure in responsible AI, with over a decade of human-centered work and over a hundred publications in top journals. His research has helped lay the groundwork for developing equitable and transparent AI systems. His foundational work on algorithmic bias, and his methods for measuring and improving AI systems, have been essential to building useful, trustworthy, and fair AI, which are core pillars that guide our work on Nadia. 

Brent's research has been featured in major publications like the New York Times and Wired, as well as on NPR. So we're honored to have him here. He's also a founding member of the ACM Conference on Fairness, Accountability, and Transparency, a leading venue for responsible AI research, and today he'll be addressing AI misconceptions. So I look forward to hearing what he has to say for us. Please welcome Dr. Brent Hecht. Thank you.

Brent Hecht: Hey, folks. Thanks for welcoming me here. And I'm excited to talk to you about some misconceptions about AI. And I will try this clicker now. Alright. So I joined Microsoft from Northwestern, where I was a professor, as was mentioned, in 2019. And the pitch was come, you know, invent the future of work.

I thought, great, Microsoft's a wonderful place to do that. Little did I know I would have such an amazing opportunity to hopefully help change the future for the better, thanks to really generational changes in work that happened and are happening over a five-year period. And the first was the switch to hybrid work, which is stabilizing a little bit, but it's still going on, and I'm sure you all deal with that all the time.

And then language models, which I studied for many, many years, became good enough to be practical for a number of applications and unleashed AI into the workforce in a way that folks had predicted for a long time, but it's here now. So when the first disruption happened, we all remember, I started at Microsoft in, roughly speaking, September 2019.

So by March 2020, I was already doing a slightly different job than I was expecting. One of those was trying to corral Microsoft's amazing research capacity into helping leaders at the company, our customers, and of course product leaders understand what hybrid work is, what it might be, how we can make it better.

And one thing we did in that process was develop this presentation called “The 10 Misconceptions About Hybrid Work” and delivered it all over the place. It was a really fun presentation to give, talked about things that people might've heard in the media, people might've assumed based on first principles, and why the science suggests that might not be correct, and what we might do to correct that misconception. It's also—this format is just a really fun way to talk about a bunch of cool science. So it was a really fun presentation to give. So when the AI boom hit, we decided to put together something similar, in this case, five misconceptions.

So, maybe afterward, when we're chatting, you can nominate a couple more. The five misconceptions that are in this talk, and I have to sadly tell you, we won't have time to cover all of them today. This is a 45-minute talk in its full glory. If you're a Microsoft customer, I'm happy to come by and chat with you all sort of more directly.

Even if not, too, happy to chat as well. Today we're gonna be actually covering, one, two, and five. So the first misconception we'll talk about is that generative AI either is going to make my workforce or my company a zillion times more productive, or it's basically useless, basically just hype.

Second misconception is that the best use of generative AI at my company is to replace people. That is a misconception. We are really lucky that it's a misconception, and that the science points to it being one, but it is one. Of the two misconceptions that we're going to skip, one would cover how text is not likely the future of computer interfaces.

And the other, one that's close to my heart and close to a lot of my research too, is that for organizations and people that make content, generative AI can be a huge boon instead of something that's threatening. So the full version of the talk goes over that. But then the fifth misconception is a carryover from the hybrid work misconceptions talk, and it's that we can predict the future of work, and that we should spend a lot of time trying to. So, we'll talk about that at the end. 

So, let's jump in with the first misconception here. And that misconception is that, you know, as soon as I install Copilot, I'm going to be a zillion times more productive, my organization's going to be a zillion times more productive, or it's just hype and I can ignore it. And the reason this is a misconception is that most of the evidence is pointing toward generative AI being what's known as a general-purpose technology. And we know a lot about how general-purpose technologies affect productivity individually, in organizations, and across the whole economy.

We know a lot because the analogies to other general-purpose technologies are proving out to be, at least as the evidence suggests right now, somewhat accurate. Other general-purpose technologies are electricity, the automobile, the internet, these types of things. And, simplifying things really dramatically, and if there's an economist in the room, I apologize, but I think you'll agree this is directionally correct.

Simplifying things dramatically, general-purpose technologies affect productivity in the economy in a two-phase process. The first phase is when we get the general-purpose technology, but we have our existing complementary technologies and our existing workflows.

So, for example, when electricity became widely available in the United States, most manufacturing was steam-power based. And I don't know if we have any mechanical engineers in the audience. I'm not one of them, but I think we can all agree that electricity is, we can imagine, useful for manufacturing things.

At the time, though, all the factories were laid out to take advantage of steam power. All the processes were designed to take advantage of steam power. So they looked and they said, "hmm, this electricity thing seems cool, but I don't know what to do with it." One thing they did was, instead of having someone run around and light candles to keep things going at night, they replaced those candles with electric lights, and that does increase productivity.

The former professor in me really wants to use a laser pointer here, but that's the linear part of the growth there. That's a real productivity increase, and that's what we see in phase one: sort of incremental productivity gains. Twenty or 30 years later, which, not coincidentally, is roughly a career, or at least roughly a generation, someone figured out how to lay out a factory differently, and a bunch of new technologies were developed to take advantage of electricity, which allowed them to radically increase manufacturing productivity.

And that's where you see that hockey-stick growth that folks want to see right away, but it's unrealistic, in almost every case, to expect that. Another really good example comes from automobiles, also a great general-purpose technology. We could take, you know, a Toyota Camry and put it in 1910, and that would be pretty impressive.

But actually, it wouldn't be all that useful. We wouldn't have roads. Just think about the number of inventions that had to be developed to implement gas stations; we didn't have gas stations, you know, anywhere we would need them. Those needed to be developed over time.

So that's a really good example of people having to invent a lot of new complementary technologies to take advantage of that general-purpose technology. So you need those changed workflows and those new complementary technologies to unlock that hockey-stick growth. 

I mentioned 20 to 30 years. Historically, that's about right in terms of how long it takes. There's reason to expect it'll be a lot shorter this time. One reason is that a lot of the infrastructure we need to build out, particularly those complementary technologies, is digital infrastructure, and we have it already. So, I have some colleagues at Microsoft Research that I work with very closely who suggest that, you know, maybe five years might be a reasonable thing to expect before we see this hockey-stick growth.

The other thing to mention is there's an argument that, because of the way generative AI tools are built (I think a lot of folks know they take data from the internet; they're inherently dependent on that data), this is actually the moment of the internet's hockey-stick growth.

So it's actually the internet that's the general-purpose technology. But that's a conversation over cocktails. I want to go back to that first phase though, and tell you why I'm so excited about it, and why we shouldn't ignore it. So, one thing I do at Microsoft is help to coordinate tons of studies that look at how much Copilot, specifically, can increase productivity on specific targeted tasks.

And the results there are pretty good. It makes it easy for a scientist like me to come talk to folks like you about the results; there aren't a ton of complex stories in them. Almost all of these are lab studies, and they're showing roughly 25-50% productivity gains across the board. There are some exceptions, you know, in these types of things, but that's pretty good for a phase-one productivity increase.

And it's not just Microsoft that's putting these out. OpenAI has a bunch of studies, we have partners at Harvard that are putting out studies with similar results, and these types of things. The good news is, even if you assume that these tasks only apply to about 2% of what people do every day, which is conservative (it's not 20%, it's not 30%), and you take that lower end, that 25% productivity increase, for most people you're going to be creating enough top-line or bottom-line value to be able to say, hey, I think we're selling a value-creating product, which, again, for me as a scientist makes me feel good and comfortable being able to talk about these things. 

So even though we're not going to be in phase two for a bit, those phase-one productivity gains are important and valuable for companies, and companies that leverage them are going to be more successful than companies that don't.

The second thing I'd flag is a report we just put out two or three months ago. Another thing I do on my team is put out reports to help people sift through all the science that's coming out about the ways that work is changing, with hybrid and now generative AI. We put out our second AI and productivity report, and in it is the first public mention of an incredible study that some of my colleagues at Microsoft have done. Sixty customers signed up with them to do a randomized, controlled trial of Copilot being deployed in their organizations. This is almost medical-quality evidence. So, you know, they randomized the seats that got access to Copilot initially.

And we're able to see how work changes by comparing people who had access to Copilot to people who were randomly selected not to have access to it, and we're seeing really good phase-one-style productivity increases, like we would expect from the lab studies. So, we're seeing 10% more document creation, and roughly the same order of magnitude of effect in email time. Meetings are really interesting. Some companies see a significant drop in meetings, some companies see an increase. Looking into the increase, it's that Teams Copilot is becoming so effective that people are using meetings to, for instance, write documents.

So, you know, hey, let's talk about this memo together, and let's have Copilot write the first draft of that memo. So, pretty cool stuff, and very impressed with the study that my colleagues have done here. 

Okay, moving on to the second misconception, which is an important one, and one that I'm sure a lot of people are thinking about for their organizations, their personal lives, themselves, and their kids.

When people first see generative AI, they often think, “wow, this is going to replace this set of jobs, my job, this organization within my company.” And I've anecdotally found this to be a very widely held misconception. But it does run against a key principle in the literature on how technology has changed productivity and, quite frankly, improved living standards.

And broadly speaking, that literature rolls up to betting long on human labor. It's been a good bet for the last 300 years, since the beginning of the industrial revolution. Specifically, betting that the time of humans doing work will become more valuable with advancing technology has been a really good bet.

And those who placed that bet have generally won. And those who placed a short bet, or assumed that labor-saving technologies are mostly substitutional, to use slightly technical terms, have, broadly speaking (no pun intended), come up short. So, my colleague at Stanford, Erik Brynjolfsson, wrote a great piece.

I'd really recommend folks check it out. It's as much a much-needed cultural critique of my field as it is a discussion of the topics associated with this misconception. It's called the Turing Trap, and it critiques computer science for using the Turing test, which many of you have probably heard of, as the goal of the entire field.

It's an inherently substitutional test: how can you trick someone into thinking that they're talking to a human instead of a computer, rather than thinking about new, incredible things that humans and computers can do together? One anecdote he has in the essay, which I find very powerful, is about an ancient Greek myth, apparently (it must be a long-tailed one, because I don't remember learning about it in school), where someone had invented some magic device that could do anything that a human can do.

And he thinks about, okay, what would happen if that were deployed? Well, you know, no one would have any work to do, but we'd still be stuck with latrines. We wouldn't have vaccines, these types of things. All of those technology-plus-people quality-of-life advances (and another way to understand quality-of-life advances in this context is productivity improvements) wouldn't have happened, so we'd be stuck in that era.

And so, you can imagine, if you just replace all the people at your company with generative AI, you'll have the same potential; simplifying things, you'll be selling the same amount of stuff. If you take generative AI and make your people more powerful, you might be selling 100 times, 200 times, 300 times more.

So, for those of you who are stock market folks, we are in New York: if you take a short on human labor, the most you can save is the cost of human labor. If you take a long bet on human labor, the potential upside is infinite. So, implications for leaders like yourselves. If you have this instinct, and many do, it's okay, don't feel badly about it.

"This is a labor-saving technology, so who can I replace? How can I reduce costs in my company with it?" Shift that first question to: how can I make my people more productive using this technology? How can I do more things, and do different things, than I used to? There are some complexifiers here, to use a term from a former president.

The first is the nature of demand. So, let's take software engineering as an example. Many software engineers are concerned right now that these technologies will replace them. I'm less concerned about that, because my boss's boss is in charge of all of Microsoft's productivity tools.

And he never says, can you make the same number of tools, at the same quality, at the same speed? He wants more tools, better quality, faster, right? And that is an implicit statement of a lot of unmet demand for software engineering. So if these tools make us 100% or 1,000% more productive, there's still a lot of demand that will be there to, roughly speaking, use that productivity gain, right?

Where the demand is capped, things get more complicated. A lot of people talk about customer service as an example. I'm less convinced about that. But if there is a fixed demand for customer service at your firm, and these tools do make someone 50% more productive, you might think, then, that labor substitution might be something you would consider.

However, having dealt with some customer service, particularly within the insurance industry lately, I can say there's a lot of unmet demand for high quality customer service, at least from this customer. So we can think about how to improve the quality of the customer service instead of laying people off initially, and that will present, potentially, some very significant business gains for you folks. 

So, the bigger caveat, I think, is actually on this next slide, which is that everything I've talked to you about is industrial revolution economics, and the goal of many people in my field and many large organizations in my field, most notably OpenAI, is actually to end those economics.

We don't know if they'll be successful. They have not been successful yet, but the goal there is effectively to create a technology that is so productive, that no amount of unmet demand really matters. And this is very much an explicitly stated goal in OpenAI's case. Their charter, the second paragraph, says their goal is to create an AI technology that can do most people's jobs better than they can.

That's how they define AGI. It's the implicit goal of a lot of people in the AI field as well. It's a goal that we should be discussing more as a populace, because there are a lot of implications from that that we don't have time to talk about now. But if they are successful, then a lot of what I said is not as relevant. But they haven't been successful yet. 

Alright, let's jump into the fifth and last misconception here, and that's that we can accurately predict the future of work, and that we should spend a lot of time doing this. So this is where I tip my hat to you folks who know how to manage people. I'm a computer scientist, and my computer science PhD advisor would frequently tell me, “Brent, the social sciences are the hard sciences.” We know much less about how people work than we do about how computers work, and people in my field sometimes forget that and make predictions that turn out not to be correct.

So on the right side of this slide, you can see a whole bunch of very, very famous computer scientists making very, very inaccurate arguments about when we'll have, effectively, AGI, roughly speaking, technology that could do anyone's job, as per what I was just saying.

And so we have to be careful about listening to those and making decisions based on those, in part because oftentimes they're either subconscious or conscious attempts at self-fulfilling prophecies. So we have to be careful about, oh, someone said this is the future, then everyone says, okay, this is the future, and they make it the future.

So instead of that, I'll actually skip ahead a bit here. And suggest that—you folks are all business leaders. I work at a very large tech company. Instead of trying to predict the future, our energies are better spent trying to create a future that we want. So when I hear people at Microsoft saying, “what's the market going to be? What's the technology going to look like in 2025? What's the market going to look like in 2030? Will we have AGI by date X?”

Let's think instead about what we'd want if we had that, or what type of technology we'd want to create. Our time is much better spent doing that. We are very poor at understanding one person, let alone the complex dynamics of how a technology and a society will work together.

So, beyond that, let me see if I know how to move back. I don't know how to move back here, so I will make an attempt to speak over the slides that we just skipped through. I do want to say this doesn't mean that we just say, okay, let's ignore planning. You know, you folks are business leaders.

You have to plan as well. So, first and foremost, we want to be creating. But what do we do when we need to plan ahead? This is again where I turn to your expertise. Business leaders know how to handle uncertainty. You diversify. So instead of making a big bet on a single potential outcome (“AGI will arrive by 2028,” you know, “AGI will never arrive,” these types of things), diversify your expectations.

Hopefully the presentation today walked through some of the higher probability potential outcomes. But planning for a low-disruption outcome, a medium-disruption outcome, and a high-disruption outcome is reasonable for your organization, and actually for your personal lives as well. 

The long version of this talk has been unreasonably popular internally at Microsoft. I rolled out of bed, fed my three-year-old, and got stains on my sweatshirt. I was like, “maybe I should take the sweatshirt off before I give this talk that I thought 20 people were going to attend.”

I had 1,200 of my colleagues attending one of the times I've given this talk. And one reason is they all kind of want to know what to do with respect to, for example, a question, “what should I tell my kid to major in? What should I think about for my own future?” 

For your personal lives, too, I recommend thinking about a low-disruption outcome (the internet arriving, a standard general-purpose technology), a high-disruption outcome (maybe electricity, a big general-purpose technology), and then the very-high-disruption outcome, which is sort of the OpenAI-is-successful outcome, and planning for each of those rather than trying to make a big bet on one of them.

And, with that, I'll close with this slide. You know, one thing I do at Microsoft is help out with a lot of our responsible AI work. And this is a very simple slide that I use to help guide that work within a company. We are limited in doing stuff that's great for the world but outside of our self-interest.

But there's a ton of stuff that's great for the world and is in our self-interest. And getting this stuff right, getting generative AI to land well in the workforce, is definitely in that center, so that everyone feels like they are benefiting from generative AI versus it's something that's happening to them.

I feel very passionately about that. I suspect many of you do as well. With that, here's a list of the misconceptions again and a link to where you can learn a little bit more behind the science of what I've been talking about, the annual New Future of Work report. And I will take questions when we have a chance to mingle.

AI Myths

In this condensed version of a popular internal presentation at Microsoft, Brent Hecht, Director of Applied Science at Microsoft, dispels common AI misconceptions, including why AI isn’t all hype, why AI’s best use isn’t to replace people, and why we should be shaping the future of work instead of trying to predict it.

Brent Hecht

Director of Applied Science, Microsoft


AI: Now or Never

AI is a fast-developing technology, and it may be tempting to wait and see how it evolves. But, as former Vanguard CEO Bill McNabb explains in this fireside chat from Valence's AI & The Workforce Summit, those who lean in early will benefit from compelling productivity gains and develop new and better capabilities.

Bill McNabb

Former CEO, Vanguard

Key Points

Parker Mitchell: So, Bill is the former CEO and chairman of Vanguard. And Bill's also been a close follower of Valence for the past five years and has been an extraordinarily valued board member for the past three years. And I thought we'd begin, Bill. I mean, when we were first being introduced to Vanguard, I know we, you know, dressed up nice and tried to pretend that we were a big company, but I think you saw through it a little bit.

Tell us a little bit about why Vanguard decided to take a bet on what you call a garage startup. 

Bill McNabb: Well, you were a garage startup. So, well, it's good to be here, Parker. Thank you for having me. It's actually even more than five years ago now, which is really remarkable. One of my former colleagues had met Parker and had come away impressed with some of the ideas Parker talked about and Parker and his team talked about in terms of making teams more effective.

And you have to understand, you know, the two most impactful experiences I had sort of before, you know, getting to a place like Vanguard were, one, I was a competitive rower, so team orientation was sort of, you know, became part of my DNA. And second, I was a teacher. And so the whole teaching, coaching thing became really interesting to me.

And at Vanguard, we had done a lot of work there. You know, we're investors, so we think about everything; it comes down to an ROI calculation. And we were struggling with the amount of money we were spending on development, not because we thought it was a bad thing to do, but because we couldn't figure out why there were, like, big drop-offs after someone did the initial training and workshops we ran.

And the idea of having a team-based platform that actually reinforced some of the concepts we were trying to teach really hit home hard. And then, of course, as you've evolved the business to the coaching model, that, to me, was the big gap that was missing. So it's been really exciting to watch it evolve.

Parker: That's terrific. And you're also on boards. So you are seeing this not just, you know, from the stories you're hearing from Vanguard about the day-to-day challenges. But I imagine that 90% of board meetings are about AI. Can you share, maybe, what are the differences in the tenor of conversations from 12 months ago to now at the board level?

Bill: Yeah, so, and you know what's really interesting is, you know, I have the privilege of serving on two very large public company boards. But what's really interesting is I also sit on boards of several startups and sort of smaller-cap companies.

And we're having the same discussions. And I would say the big tensions, if you will, are how fast to go and where to sort of put your bets. And, you know, in most of the discussions, at least that I'm involved in, what we as board members are doing is really encouraging companies to not talk about it forever, but actually go do something and find use cases that really make sense for their particular business and go try something.

It's interesting, in the larger companies in particular, there's becoming a little bit more of a tension between the business leads who want to go try things and the chief technology officers who are like, “let us build it for you.” And one of the things we're doing, at least again in the boardrooms I'm in, is we're saying to the CTOs, “yeah, great, you guys go develop this.”

But we're also encouraging the business to go experiment with people who are maybe a little deeper on particular topics. So, you know, Vanguard is an example. And again, I'm not on the board at Vanguard anymore. But I know, you know, talking to my former colleagues, they're very early adopters of Valence.

Love it. It's deployed through probably about 80% of the company at this point. 

Parker: We checked, 16,000 users. 

Bill: Yes. So 16,000 out of 20,000 employees. So pretty remarkable adoption. We have a company called Writer, which is a startup in there doing content creation, and the CTO has four or five big, you know, projects that he's driving the development for. And, again, for the companies with those kind of resources, I love that kind of approach.

You know, for smaller companies, I think finding people like Valence who can really solve a specific problem for you really quickly and give you experience with AI, that's what really makes a lot of sense.

Parker: I've heard other people talk about that. The sort of portfolio approach, some internal, some external, some existing vendors, some new vendors.

If you were giving advice to a leadership team, how would you give advice to navigate that over the next couple of years? Because there'll be tensions between different groups wanting one thing or promising something else. 

Bill: You know, I actually wouldn't overthink it. You have a certain amount of—every company has a certain amount of capital to deploy.

And some companies, it's a large amount. Some companies, it's really small and, you know, really tightly controlled. I think the thing is to make sure that you actually do have a balance. The one thing I'm pretty convinced of, unless you're a deep, deep tech company yourself, no matter how good your engineering team is—so, again, let me just step back. You know, at Vanguard, 35% of our employees are software engineers. Most people don't think that. They think you're an investment firm. We're an investment firm with 35% of our population are engineers and our engineering team is good. Like, they're really good, and they think they're even better than really good.

And you know, the truth of the matter is we can't be as nimble and agile on things like an AI coach development as a company that's designed to do it. And I think, as business leaders, the advice I would have is just make sure whatever the capital allocation you have for these kinds of experiments, you've got a piece where you can pick a couple of firms and go really deep with them, because it's not that expensive. And I think you're going to get insights that you won't get from your own teams. 

Parker: Are there any interesting results from experiments that have floated up to the board level at either of your two public companies? 

Bill: Yeah. Less on the coaching side, more on the content side.

But the company I mentioned earlier, Writer, has gotten, you know, in one of the companies, it's like, whoa, these guys are really good. Like, we're able to do things from a content perspective that we never thought possible before. I think there are also a couple of players out there who are really, very deep in helping build out customer service, just automating in a much more intelligent way how customer service reps respond to calls in particular. And if you can make those folks more accurate, more efficient, overall more effective, again, huge amount of savings, but also a huge jump in quality. 

Parker: One of the things that you and I have talked about is your belief in not just the value of leaders, but the importance of investing in leaders, and that's been a through line throughout your career.

Can you share a little bit about how that showed up for you at Vanguard, the investments that you made in the pre-AI world, and then we'll talk about post AI? 

Bill: Yeah, I mean, you know, one of the things, I was talking to somebody before we started here, and I had the great privilege of joining Vanguard when we were just a little beyond the startup phase, a few years into our history, and our founder, Jack Bogle.

And Jack was, you know, for those of you not in the financial services world, really an iconic founder. Visionary is sometimes a word that's used too often, but in Jack's case, it was true, and he completely disrupted investment management with our approach. But Jack also had this instinct around people, and, you know, he had a saying: “even one person can make a difference.” And no matter how big we got, he kept repeating that mantra.

So, I was lucky to grow up like that. And then Jack's successor basically took it another step further, and he was like, we've had this amazing founder who's a visionary and, you know, pretty directive in terms of how we built the company. That's not going to scale. We'd gotten to a hundred billion dollars doing that, from a startup of one and a half billion, but if we wanted to go to a trillion, we were going to need to have a much more team-oriented culture.

And so he really installed the ethos that, at the senior level, a high-performing team was the way we wanted to build the business, not like one visionary leader, you know, sort of directing everybody where to go. And then, as you know, I was our third CEO. And one of the things that we began to see is, while the high-performing team at the senior level was working really well, farther down the organization, there was a little less engagement than maybe we wanted.

And we had seen, just from a business-case standpoint, a direct correlation between employee engagement, and we used Gallup at the time, versus net promoter score of those particular client groups. The higher the engagement, the higher the net promoter scores. Again for this audience, that's probably like, yes, of course. You all know that. But I would say, this was the early 2000s. People didn't—it didn't come to them naturally. And so we actually pivoted and changed the whole way we thought about attracting and developing leaders and made that the central part of what we do.

And you know, the way I like to think about it, we did a lot of work with Jim Collins. And so, for those of you who are familiar with his “Good to Great” concept: we built a flywheel. Like, here's the business model that we want to have and what the different components of that flywheel were. I won't bother you with all those details.

But at the heart of it was high-performing people, and that was like the axle upon which the flywheel would spin. And so, we started talking about this, and we had the, again, really good luck in that one of our neighboring companies was run by a guy named Doug Conant, who runs the ConantLeadership center now.

And Doug was the CEO at Campbell’s, and some of you are probably familiar with his work on engagement. But Doug came to us and said, “how are you going to know if you're successful?” And some of our people were asking the same thing. So we had four numbers that we focused on at Vanguard.

One was investment performance. You would expect that. One was client loyalty, so net promoter score. You would expect that. One was our version of profitability, which was, you know, what expense ratio we charged our clients. Doug came in and said, “you need to have a people component to that, and it needs to be first.”

And we actually did that. We used an engagement ratio that we calculated from Gallup: engaged to disengaged. And that was our number-one number. So when we would report out to the company how we were doing and what our aspirations were, we always started with employee engagement. And if you're going to get great employee engagement, you've got to have great leadership.

And so, that's where the work really began. It's actually what led us to you guys originally, was how can we take that to an even higher level? And I remain convinced, you know, at Vanguard, and again, you're always a captive of your own experience. But when you ask people—I sat with a lot of our competitors at industry trade associations or whatever.

And, you know, people would say, “oh yeah, Vanguard, they're really, you know, they pioneered that indexing thing,” which most of our competitors hated. They're really low cost. Most of our competitors hated that because it cut margins. No one ever talked about our people. And so we didn't really brag about it a whole lot because we didn't want them to know.

We knew that, of course, we had structural advantages that helped us with the cost, and we had the indexing idea. But what really turbo-charged our growth was when we doubled down on people: engagement among the employees, and leadership, from, you know, frontline leaders all the way to the top.

And you can see a company that was growing at a pretty good clip, then went into hyper-growth drive when we got better at the people side, and it was just direct, you know, it was just math at the end of the day. You could then take that to the company, and you could take it to your board, and you could say, “see, all these investments we're making on the talent side, look what's happening to the business side.”

Parker: And when you have that straight line, that correlation-causation equation, that's incredibly powerful, because then as people see the number go up, they understand why the investments are being made. If we turn our eyes to the future, the next three, five years, there's a world in which there'll be quite a bit of disruption on the people side.

How would you suggest that CHROs or CEOs who might be like-minded to you, that they think about that? 

Bill: Yeah, you know, and again, it's really hard to know ahead of time exactly where it's all going to come, but you get some clues just by watching what's happening even today. 

What we did, and again, whether it's applicable across the board or not, we went through a very early period of reskilling our people. So, you know, I said 35% of our workforce were software engineers. Every major financial—every major legacy system in investment management was, you know, essentially built on COBOL code with DB2 relational databases.

Like, that was it. And so we had all these engineers, that's what they knew how to do. Well, imagine what happened when all of a sudden workstations came around and then the internet came around. We actually reskilled most of our engineers. And we had really aggressive programs to teach them new languages, new ways of coding.

We moved from a waterfall development approach to an agile approach. You know, I think we probably started that move almost 20 years ago now. And obviously, today, I'm sure they've gone way beyond anything I can imagine. But the commitment to reskilling people, again, we were fortunate; we were a really successful organization.

So we had the resources to do that. But I'll tell you, I can't prove it, but we had much lower turnover than our competitors and much better performance as a company as a result. So here's a correlation. Whether there's causation, I can't really prove it. I believe there is. And I believe very strongly there is, but that's how we did it.

And it wasn't just—I use the software engineering group as an example, but in all of our other major areas, similar things happened. Our processing groups had to change what they do. And again, we tried to reskill as much as we could, and, look, you know, we had people who resisted the reskilling, and they usually ended up taking themselves out of the equation.

We never had a RIF, a reduction in force, through my tenure, so for the first 40 years of the company, and I think a lot of that was because we got out in front of some of these issues.

Parker: And I'll add that I think you took over as CEO the week after the financial crisis or the week before the financial crisis, 2008.

Bill: Two weeks before. So, I had two weeks.

Parker: One of the things you and I have talked about is sort of the only wrong answer is doing nothing. Can we just end on sort of advice that you would give to someone who's struggling to convince their team on, you know, why they need to take action today?

Bill: You know, so I'm a huge fan of Clay Christensen's work on The Innovator’s Dilemma. And again, in this audience, I know most of you are familiar with that. And, you know, as soon as you feel like, okay, we can't do something new because it's going to disrupt what we've been doing, and we've been really successful, it's the beginning of the end.

You know, Jack Bogle, our founder, was actually a fan of an intellectual predecessor of Clay Christensen's, a guy named Schumpeter, who was an Austrian economist, and he talked about creative destruction. And you know, my observation is creative destruction is actually one of the most important elements of capitalism.

And, you know, if you can imagine something could happen, somebody's already doing it, and they're coming at you in the competitive landscape. So I actually think this isn't like you get to choose. I think you have to do this. And, again, I don't know exactly where AI is going to end up, none of us do, or maybe a couple of our later speakers do—we've got some real gurus here. But I think the biggest sin anybody could commit here is to not do something, because I think this is coming.

And the more experience, the more testing, the more, you know, pivoting from lessons learned that we do at this phase, the more likely we are to not only succeed, but actually seize opportunities as businesses, you know, from this new technology. But I'm completely convinced that this will be as disruptive as the internet was.

And, you know, obviously the internet was as disruptive in many ways as the original industrial revolution. And if you sort of play that out, you want to be part of it. You don't want to be a victim. 

Parker: And there's a learning curve, so I think it's important to get started early. I just want to say thank you.

It's wonderful to have these conversations. I know you're joining us en route from Philadelphia to almost the Canadian border, but we really appreciate you making the time today. Thank you.

Bill: Well, it's a privilege to be here, and I'll just say, having known this guy since he literally started the company, I am not unbiased when I say this, but as an investor in the company and as a fan of what Valence is doing, it's really exciting for us to see all of you here, because we're going to learn from you, and, hopefully, it's going to make us a better company as well. 

Parker: Absolutely. Thank you, Bill.


The Era of Personalized Knowledge

The internet gave us access to more knowledge than ever before. But in today’s more complex world, we’re drowning in information. We need help to navigate through it – and as Valence CEO Parker Mitchell explains in this keynote, LLMs can provide that help by personalizing knowledge in every aspect of work.

Parker Mitchell

Founder and CEO, Valence

Key Points

Parker Mitchell: All right. Welcome, everyone. It is such a delight to be able to host an event like this. It's sort of a well-kept secret within Valence that we actually only began organizing this seven or eight weeks ago. And it was because I'd had so many conversations with CHROs, HRLTs, and they always came back to this same topic of how AI is going to be the disrupting force of the next three, five years, and how many outstanding questions they had, and probably most importantly, how much they valued learning and talking to one another.

And so we thought we're having these kinds of conversations across the US, across Europe. What if we just brought people together and had a focused event? An event where we could hear from people who are luminaries and thought leaders in AI, people who are peers who are experimenting with AI, and people who are going to offer some provocations, some ideas to just get us thinking.

So, it's incredible that in seven weeks we were able to put the word out, and we now, as Das said, have 200 people who will be in and out over the course of the day. I think we have 1,000+ who will be joining us virtually. And they represent companies that, as Das said, have 7.5 million employees. And the thing that we're so excited about is just the potential for AI to literally reach, in positive, empowering ways, each and every one of those 7.5 million employees. 

And the thing that I think we all know—we all know AI is going to be disruptive, we all know that it's going to affect the workforce, affect the workplace in ways that we can't even begin to imagine and can't even begin to predict. And, I think many of you, as you are thinking about what this future looks like, you are also having to navigate the present.

I imagine that most people here have pressure from their boards, from their CEOs: try to do more with AI. What kind of use cases can we come up with? And I also bet many of you here have had pressure from your AI councils and your chief risk officers to maybe do a little less with AI. These are the tensions that we are always navigating, and I think one of the reasons, though, why we're here is we know that it is absolutely incumbent upon us as HR leaders to take this moment to lead.

Much of the conversation is around how AI is going to affect productivity. And there's undoubtedly going to be productivity gains. But, as we all know from the conversations we've been having, those productivity gains are going to cause shifts in the workforce. And for us to be able to navigate those shifts, we also have to invest in AI that is going to augment human potential.

And I think that's one of the reasons why we're all here today, to get those ideas about how can we find the AI that is not just going to automate processes, parts of jobs, or even entire jobs, but how do we find AI that's going to augment our workers and help them navigate through this change? While most of this is going to be a conversation among peers, I am delighted to just share a few words about Valence and why we're here today.

So, I think we've been as close to some of these questions about AI in the workforce, about AI in the talent strategy as pretty well anyone out there. We've had, as far as we know, the first AI coach that was deployed in enterprise, and I think now it's among the most, if not the most, widely deployed coach.

We've had partnership conversations with many people in the room, many of our partners who, you know, are overseas or will be joining us virtually. And we've really tried to wrestle with both the future of AI and how to help people, and how do we introduce it, given all the complexities of large global companies.

But we're also only here because we've been thinking about these types of things, you know, collectively, as the leadership team, the founders of Valence, for the past multiple decades. My background: I was the founder and CEO of a group called Engineers Without Borders. And we quickly realized that the people side of things was what really mattered when we were working on these difficult, complex projects.

And so, you know, back in the 2000s, we were teaching thousands of engineers a year concepts like the amygdala response, the ladder of inference, the Johari window, well before they were sexy. And I gotta admit, I don't think they're sexy, even now. Unless you're an IO nerd like some of us are.

But this is what we were trying to do. And we were doing this because we believe that this idea of personalization of knowledge, of personalized learning is truly transformative. And so when we founded Valence, it was this idea of, can we bring some of the personalized experience, the experience of having an executive coach or a team facilitator, can we bring that to the masses? 

And we were more product nerds. I was an engineer, and we were way better at that side of things than marketing. So actually, the very first name of our company was BetterTeams.coach. It's not exactly a sexy name, but it's pretty clear about what we were trying to do. We were building tools for managers, digital tools for managers, so that they could understand their teams better, work better together, and lead those teams better.

We're going to get a chance to hear from Bill McNabb, the former CEO of Vanguard and one of the early believers in these kinds of team tools, and this is what our early customers like Vanguard, Coca-Cola, Nestlé, and others have been deploying at scale. And what we learned from that is some of the challenges that managers face, what it's like to work in global companies, what it's like to work in countries around the world. But we knew that, ultimately, our vision was how do we try to offer this type of personalization at a way higher scale.

Now, in, you know, late 2018, 2019, all of our competitors in the HR space were talking about AI, and they always had an AI module of some kind, and my team, even my investors, were saying, “Parker, Valence should really talk about, like, what our AI strategy is.”

And I actually refused. I said, none of the stuff that’s being done right now is actually valuable AI. It’s machine learning, which is really great if you have large, labeled data sets, but that's not what it's like to engage as humans with one another at work. The thing that matters most to us is language.

We are complex beings in a complex world, and we are communicating with one another by language, and until we can understand that, AI isn't going to make much of a difference. And this is in 2019, and we all knew in 2019 that AI that could understand language was decades, decades out. Well, we were wrong, but I still remember sitting down with a very good friend of mine who was a researcher at one of the large AI companies.

This was a little more than two years ago. It was a few blocks north at the café near Union Square. And we were talking about what was possible. He knew my views on AI and he said, “Parker, we're actually getting pretty close to cracking language,” and he let me play around with early versions of what they had.

And I remember, after about two hours of playing around with it, I went home and I wrote a blog post, and the blog post was titled, “Are We in Our Gutenberg Press Moment?” 

So let me explain a little bit. As I said, I think language is one of the most important things for charting the course of humanity. Language is how we codify ideas, it's how we share ideas among one another, it's how we pass them on to generations. But for the first 70,000 years, we were just limited to the spoken word. The last 3,000 or 4,000, a small elite had access to written language to begin to sort of codify that knowledge. But the year 1440 was a particularly pivotal moment, I think, in history.

In the 50 years after Gutenberg invented his printing press, more words were printed than in the 50,000 years prior. So we just saw this explosion of literacy, explosion of knowledge, and explosion of human potential. But where we are now, if you think about going about your daily life and, you know, your personal life, or especially in the work life, we are just inundated with information.

We are drowning in a sea of it. And information is not the currency that matters, it’s how to apply that information to the context at hand. And when you have an incredible teacher, an incredible mentor, an incredible guide, someone that can help you make sense of your environment, it’s an extraordinarily powerful thing.

And so, when I was playing with those LLMs, what I saw was because they were language based, they could understand how I think, which is in words. They could understand the life, especially the work life, that I'm in, which is also in words. And if they could understand those two things, then we would be able to create, essentially, a personal assistant.

And so we think generative AI, it’s creative, it’s incredible in all sorts of ways, but the true promise of AI is going to be personalization. And for those of us who are in the L&D space, the HR space, we think that this personalization is going to utterly upend how we think about learning, development, how we support leaders.

It is going to allow us to rethink from first principles. It’s going to unlock us from some of the shackles that we were facing in the past. We're going to hear, later on this afternoon, how AI is so exciting in the field of education. And that is because it’s moving from teacher mode, a sort of one-size-fits-all, to a tutor mode, which is bespoke to each individual person. 

And the educational attainment, the speed of learning in tutor mode, is twice as fast as it is in teacher mode. And so if that personalization exists in education, we think it is also going to exist in onboarding and learning and understanding and in performing our jobs as leaders and managers across companies.

And so if there's one thing that I hope you take away from today, whether it's Valence that develops it or not, it's that the technology will be good enough and affordable enough that every employee, not just every leader, not just every manager, will be able to have a personal work assistant.

One that understands them, understands their world, and helps them, makes their life a little bit easier. Now, for those of you who’ve experienced conversations with ChatGPT, we know it’s not that at the moment. It’s going to take some work, but we think the building block is there. Companies like Valence are trying to translate and transform general AI into specific, purpose-built AI.

So in our case, we are combining many LLMs, many agents, if you want to use the technical term, to perform parallel tasks and deliver a smooth coaching conversation. And I just want to talk a little bit about what that might feel like. Because general AI knows a lot about a lot of things.

People have described it as a compression algorithm for the internet. And it's sort of zipped all that knowledge into an incredibly dense set of parameters and weights. And it knows a lot. But it's not designed to know anything about you. And so the one layer that we are adding on to it is a personalization layer.

So trying to learn as much as possible about you and your job and not just be there to respond to a question that you might have, but to always be thinking proactively, how can I reach out to you as an individual and help you in your learning goals or in your job or in the question or challenge that you have? So this idea of sort of proactive coaching, so essential. 
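To make the idea concrete, here is a minimal sketch of what a "personalization layer" on top of a general model could look like: per-user memory gathered from conversations, prepended as context to each prompt. All names here (`PersonalizationLayer`, `ask_model`, and the stub model) are hypothetical illustrations, not Valence's actual architecture or API.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch: a thin personalization layer wrapping a general model.
# `ask_model` stands in for any LLM call; nothing here is a real product API.

@dataclass
class PersonalizationLayer:
    ask_model: Callable[[str], str]  # the general AI underneath
    facts: dict[str, list[str]] = field(default_factory=dict)  # per-user memory

    def remember(self, user: str, fact: str) -> None:
        """Store something learned about this user from a conversation."""
        self.facts.setdefault(user, []).append(fact)

    def coach(self, user: str, message: str) -> str:
        """Answer with the user's accumulated context prepended to the prompt."""
        context = "; ".join(self.facts.get(user, [])) or "no prior context"
        prompt = f"Known about {user}: {context}\nUser says: {message}"
        return self.ask_model(prompt)

# Usage with a stub model that just echoes the prompt it received:
layer = PersonalizationLayer(ask_model=lambda p: f"[model saw] {p}")
layer.remember("aidan", "new manager at a small airport station")
reply = layer.coach("aidan", "How do I delegate more?")
```

The point of the sketch is only the separation of concerns: the general model stays generic, while the layer above it accumulates and injects what it knows about you.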

The second thing that we make sure that our AI coach is focused on is coaching. We want Nadia, we’ll introduce you to her in a moment, to be an expert on coaching, to know when she should dive deep to understand a root cause, when she should tactically help you with something that you're facing, when she should pull a framework that you might have seen before and remind you, or when she should try a new approach.

But she is deeply trained on all the best practices, sort of like a librarian that can choose from the world's library of coaching frameworks. And then, finally, we'll get a chance to hear how Nadia can be bespoke to your company so she can understand your leadership frameworks and the moments that matter in your talent cycle and understand the business challenges that you might face and start to coach, not just on sort of learning and development, but how do you understand the world that you are in, and how do you perform a little better in it?

So this layer of purpose-built coaching that really invokes this context and memory around personalization, around expertise, and being bespoke for your company, we think this layer has a chance to fundamentally transform this general AI into purpose-built AI. So this is AI that knows, and we think this is the type of AI that's going to be incredibly helpful.

Nadia: Hi, Parker. We'd booked time to chat about John. How is that going? 

Parker: Nadia, I'm pretty frustrated. John’s a VP who should be setting big goals, not making excuses. He’s got to be a more ambitious leader.

Nadia: I know you’re frustrated, but we’ve talked about how you fail to set new people up for success when you’re moving too fast. John hasn’t had a proper chance. Let’s problem-solve this. Can I set up 15 minutes for you and me on Friday, when things are usually calmer?

Parker: So we wanted to just give you that quick overview of what Nadia’s capabilities are. I know a few of the speakers have deployed Nadia, and you might hear some stories about the uptake or the use cases. So we want to just give that introduction, and we’ll also give you a sneak peek later in the day of Nadia’s capabilities that we're going to roll out very soon in January.

And I just want to call out what’s exciting when we talk about Nadia being able to coach anyone, anywhere: as we speak, she is being used by production leaders in factories at General Mills or Schneider Electric or AGCO. These are people who never would have had access to a personal coach.

She’s being rolled out on the front lines at Costa Coffee. We’re doing a webinar with them in a couple of weeks, but there are great, successful experiences there, at On Running, at Delta. She could be used by knowledge workers. She could be used in regulated or unregulated industries. The great thing is, she could be used all the way from the CEO down; we literally have the CEO of On Running, who’s provided a quote because he’s just so excited about how Valence’s tools have helped him and his co-founders and his leadership team, and then everyone at On.

So it can be used top to bottom. It could be used across sectors. And I think this idea of democratization is just so important. And so the one thing I’ll just leave you with, as I’ve said, this idea of investing in potential is so crucial if we’re going to navigate all the changes, the upheavals, the transformations in work.

And we think that betting on AI coaching, betting on having AI that your employees can interact with and test and work through, that is absolutely a bet worth taking. So thank you for joining us today and I'm delighted to welcome Bill up on stage to join us.

The Era of Personalized Knowledge

The internet gave us access to more knowledge than ever before. But in today’s more complex world, we’re drowning in information. We need help to navigate through it – and as Valence CEO Parker Mitchell explains in this keynote, LLMs can provide that help by personalizing knowledge in every aspect of work.

Parker Mitchell

Founder and CEO, Valence


The Future of AI in the Workplace

Any big technology leap comes with a central promise and a lot of rough edges. With AI, the central promise is personal assistants and coaches who support us in every part of our lives — including at work. In this keynote address, Valence CEO Parker Mitchell lays out his vision for how work will change as AI makes personalized coaching available at scale to global workforces.

Parker Mitchell

Founder and CEO, Valence

Key Points

00:00  The Imperative of Our Time

Parker Mitchell: So I feel very, very fortunate in the position where I am. I get to talk to a range of folks, thought leaders like Gillian, like Ethan, like Reid, people like Geoff Hinton, who's coming up at the end, who are talking about how, at the thirty-thousand, fifty-thousand-foot level, this wave of technology is beginning to impact work, beginning to impact us as people, beginning to impact societies. But it's equally as exciting, probably even more exciting, that I get to chat with many of the folks joining us today, and I recognize some names, people who are our partners, partners in trying to put AI into the hands of their workers. And you'll hear in a moment that I think this is one of the imperatives of our time: being able to give people, workers at every level, every seniority, every type of job, the AI fluency, the AI literacy to work with the most powerful tool, I think, that any of us have experienced. I think that's the imperative of our time, and it's just such a privilege to have a chance to partner with people who believe equally, themselves, in their own companies, in the importance of this and are able to help navigate the sometimes difficult mazes. And so I've had a chance to distill some of these ideas and conversations into a few key themes and thoughts that I'm delighted to share with you today.

01:37   The Future is Here, It’s Just Not Evenly Distributed

Parker: So one idea that, again, I'm privileged to see from this position is that the future is, in many cases, already here; it's just not evenly distributed. It's a classic quote from William Gibson. And one of the things we're seeing with early AI adopters, individuals in companies who are saying, "Hey, I want to make AI part of everything that I do," is just an incredible range of use cases. We get to see this from Nadia, and we get to hear from them often because they feel like Nadia provides them with so many resources. We see creative ideas like: how can I set Nadia up to know each and every member of my team, so that as I'm talking about them, Nadia is able to remember them and give me advice specific to the relationship I have with them?

We heard that from one of the users, I think it was over in Ireland, maybe two or three months ago, and we hear so many ideas of people pushing the frontier there. And I think the lesson that I take from this is just the importance for companies of being able to hear those voices, to make a safe space for people to share their innovations, to draw them out, and then to amplify them. And so as we look around, we sort of say, hey, where is AI, where's the impact of it? It hasn't shown up yet in the productivity statistics, and our belief is that it's going to take some time to show up there. That's about widespread adoption, and we're going to talk a lot about that today. But the spike, the first 1%, 2%, 3% of a company, is already there. And so we have to go out and find it now, as we look into the future.

It's really hard to make predictions in a chaotic world in general, but it's especially hard to do so in a world where the future is exponential. I've had a few conversations now with Geoffrey Hinton about this, about the trajectory of the change in technology that he's experienced, particularly in the past 15 years. It's hard to remember that 15 years ago, this work on back-propagating neural nets was considered sort of the backwater of AI, not where the innovation was going to happen. But even he was unable to fully see the potential and how quickly the new innovations, the new models, would arrive. And so, as we look forward, it's also important to realize we can't just extrapolate from the past. Things are going to continue to accelerate, and it behooves us all as leaders, even though we can't predict the future, to try to get glimpses of what it might look like and to set our organizations, our leaders, our employees, our workers up for that.

04:35   A Brief History of Big Technological Leaps

Parker: Now, a very quick digression into the history of some of these technology leaps, because I think it's really interesting to see what society's reactions were to some of the big ideas, the ideas that clearly were going to change things. And the history of the car is an interesting one. If you look at the newspaper reports from about 125 years ago, as the first cars were being introduced, they weren't glowingly positive. Cars had so many negative externalities. They were noisy, they were dangerous, dangerous to pedestrians, they got into accidents with each other. And that is obviously true, but there were a lot of innovations that then came with them: seat belts, street lights, better rules of the road.

And I think, when new technology comes in, we can get caught up in the rough edges. There are certainly going to be challenges with how AI models produce information, but those challenges can be overcome. The work that we and many other companies deploying AI in the enterprise have done is to put really strong guardrails on the models and make sure they don't hallucinate or talk about topics that aren't allowed to be talked about. That was a relatively quick solution for us and others to put in place, and it takes care of problems like hallucination. And so I just think it's important, even if there is a momentary issue that causes someone to hesitate, to know that there will be solutions to it.

06:03   Generative AI & The Era of Personalization

Parker: So the big idea that we believe is incredibly empowering is that generative AI will usher in an era of personalization. When we talk with our product team about what we really want, it is for our AI coach, Nadia, to understand the mental model of each user, to understand how each person sees the world and how they are trying to navigate it, and then to support them as much as possible. And this is a long-term vision. This is something that's going to build over time. But I think this idea of personalization, of having a coach alongside you, the same way I think every child is going to have an AI tutor alongside them that knows how to help them on their learning journey, means this AI coach is going to help people at every stage of their professional careers and help them learn how to collaborate. And I think that is going to be one of the most profound changes to how work is done, and it's incredibly exciting to be at the vanguard of it.

07:05   Our Origins and Motivating Principles

Parker: Now, we didn't start there. I’ll briefly do a quick digression. We were founded with this idea of a coach, but knowing that we didn't have the technology to get there. And so you see a very primitive prototype of our team tools; I know some folks on this call are Valence team tools users. This was a version done on paper, just to see how people would react. But I share that because at the center of our mission has always been: how do we help people work better together? How do we help them understand themselves? How do we help them understand others? And through that, how do we help them collaborate better, which is really how work is done?

And so we've taken that ethos and woven it through every product that we've built, up to and including Nadia, our team coach. These are our motivating principles. We started with this idea of democratization, that everyone in the world deserves a personal coach. We talk about a world where potential is more valued than credentials: if you have a growth mindset, a desire to learn, an openness to feedback, then a personal coach like Nadia will compound those over time, and we think those traits of potential are the ones that should be rewarded.

08:32   Reimagining the HR Talent Platform

Parker: Now, as we've come to build Nadia, we've also seen that there's a huge opportunity for something else, which is to help modernize talent programs. And I don't mean the design of the talent programs; I mean the technology behind delivering them. If you think about the tech stack you use for your talent programs, many of you on the call will probably think, yeah, that might not actually be the way I would design it from scratch today, especially with the power of generative AI behind it. And so it's been a great privilege to partner with heads of talent and heads of leadership, people who are thinking about how we can redesign these programs to make them more personal, to take down some of the burden, to make them more fair. It's an exciting journey to help modernize some of the technology behind talent programs.

09:29   Our Mission: Augmentative AI for All

And then the final motivating principle that we have, and this is what I alluded to at the beginning, is that the change that is going to sweep through the workforce, driven by generative AI, a new way of comprehending our world, which lives mainly in the written and spoken word, and of reasoning over that world, is going to be enormous. I think we are still only catching glimpses of it and probably underestimating the change it will drive. And the solution, we think, is to give each and every employee generative AI that is augmentative, not AI that's going to automate and take away the things they do.

That's very important as well, but we mean AI that they will interact with and learn how to use and co-create with. We and others call this augmentative AI, and we think that the imperative of our time is to put this augmentative AI into the hands of our employees, imperfectly at first, knowing it will get better and better and smoother and smoother. And so that's why we at Valence do what we do. It's why we're building Nadia, and it's the set of principles behind how we build her. It's just been an enormous pleasure to partner with such great folks to help discover what is going to work in this new future world that we're all moving toward.


Product Demo: What's Next for Nadia

Valence CEO Parker Mitchell shares the next evolution of our AI work coach, Nadia. The power of AI coaching comes from gathering context: about each employee, each team, and the entire organization. Nadia's unique ability to understand and remember what she learns about individuals and organizations allows her to take the moments that matter in the talent cycle (goal setting, performance reviews, the launch of new leadership frameworks) and personalize them to every employee, in every role.

Today, we're building Nadia to better understand team dynamics, not just individual needs. She's ready to help people role-play difficult conversations, based on what she knows about each member of the team. And she's able to help teams close the feedback loop, making anonymized feedback easy to share in minutes and helping users synthesize and understand how they can become better leaders and teammates in real time, not just during official performance review periods. Finally, Nadia has new abilities to integrate into the HR talent calendar, personalizing and assigning initiatives to individuals across regions and functions and giving HR leaders a seamless way to track progress on the critical growth moments across their organization.

Parker Mitchell

Founder and CEO, Valence

Key Points

00:00   The AI Coach for Users & Talent Teams

Parker Mitchell: The core innovation, the core capability of a coach, is really that idea of memory, and you can get a sense of what Nadia is able to do because of the context she is able to integrate. As you see on the personal side of things, there are moments that naturally happen.

00:28  Building Context: How Nadia Helps

You might have a meeting with your boss that's stressful, and you talk to Nadia about that. Maybe the NPS score of your unit or your function is going down and you wanna understand, hey, what's wrong there? You might have a poor performer on your team and have a challenging conversation. So Nadia is building her understanding of you through each of these moments.

And then as a company, you are using Nadia to make your talent initiatives personal, to make them personal for each and every person. Goal setting, OKR setting, is obviously personal, and people could benefit hugely from having a personal coach to work them through that. As managers prepare for their mid-year team development conversations, a personal coach can be invaluable in reducing some of the stress around that. If you have a new leadership framework or new cultural values, imagine the power of 10,000, 50,000 people each having, again, an individualized coach, helping them take a concept that's usually just a set of words on a wall and bring it to life for their particular tiny part of the business. And all those tiny parts add up.

So there's an incredible power there as Nadia learns about you, and as Nadia understands the programs and the moments that the company has, and the intersection of the two is so valuable. And so we're gonna get a chance to explore a few of these new features. Now, I'm realizing that there's probably a lot of foundational elements to what Nadia is that people might not fully understand. Again, we're more than delighted to showcase that individually or collectively over the coming weeks and months. But I wanna show what's new. 

02:08   Building a Personalized Understanding of Users

Parker: So we talked originally about this idea of building out that context, from you to your environment to your team. And I wanna introduce you to a fictional character named Aidan. We picked the name Aidan because Aidan is Nadia backwards. So just in case anyone's wondering if this is real, Aidan is a fictional character. Aidan works for an airline and is a station manager at a small airport. And we picked this because Nadia is incredible at coaching such a range of people: knowledge workers working behind a desk, but also frontline leaders, frontline managers who are working with workforces in very, very diverse circumstances.

So I wanna highlight three key new features that we're offering. We began with a profile, building a profile for individuals. As Nadia has conversations with Aidan, she's building an understanding of what he is like, what his leadership style is, what the particular challenges of his role are, noticing that he's a new manager. This is all coming from information that she's gathering from conversations; imagine her taking, you know, a coach's notes after every conversation. She also has hypotheses, ideas of what she wants to learn about him, and she's gonna weave those into the conversation.
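The "coach's notes plus hypotheses" idea above can be sketched as a tiny data structure. This is purely illustrative; `CoachProfile` and its fields are hypothetical names, not Nadia's real internals. The key distinction is between observations already made and open questions the coach still wants to test.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of a per-user coaching profile: notes accumulate like a
# coach's notes after each conversation, and hypotheses are open questions the
# coach wants to probe in future conversations.

@dataclass
class CoachProfile:
    name: str
    notes: list[str] = field(default_factory=list)       # observations so far
    hypotheses: list[str] = field(default_factory=list)  # things to probe next

    def log_conversation(self, observation: str) -> None:
        """Record what was learned from the latest conversation."""
        self.notes.append(observation)

    def next_probe(self) -> Optional[str]:
        """Pick the next open hypothesis to weave into a conversation."""
        return self.hypotheses.pop(0) if self.hypotheses else None

# Usage, following the fictional Aidan example:
aidan = CoachProfile("Aidan", hypotheses=["prefers hands-on delegation?"])
aidan.log_conversation("New manager; runs a small airport station.")
probe = aidan.next_probe()  # the hypothesis the coach would explore next
```

Popping a hypothesis once it has been raised keeps the coach from asking the same question twice; a real system would presumably re-open or refine hypotheses as answers come in.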

03:39  Nadia Knows Your Team

But the big thing that people have always said is: I wish Nadia knew my team, not just who they were, but what my interactions with them are, what my trust levels with them are. And so we have introduced this new feature called Your Connections, or Your Team. You can pre-populate it if there's an integration into your HRIS. But what Nadia is doing is understanding who the people you're working with are, what interactions and collaboration you need to have with them, and, almost like a coach on your shoulder as you go through your daily work and weekly interactions, what her suggestions are for how you can be a better leader and a better collaborator with each of the people that you work with.

So to give you just a quick glimpse of what this looks like, we've pre-populated this with some, again, fictional characters. And for each of these people, Nadia is building this from her memory. She's got a series of pieces of memory from conversations. Again, this is all from the conversations that you've had with her, so she doesn't know what Alex is actually like, but she's able to understand your take on her and give you coaching and guidance through that.

Now, what we find people are doing is saying, okay, I want Nadia to understand all these different people that I have and give me these proactive tips. But then often there are difficult conversations. When we talk to managers, a conversation about resetting expectations and performance is one of the most nerve-wracking moments for them, especially new managers, but even experienced managers. And so we can go directly into a role play now. Role play is one of the most powerful features; people just love this chance to try out a topic, a thread that they go through. And we've really focused the effort on personalizing it.

So the way Nadia will respond if she's pretending to be Alex will be different than how she'll respond if she's pretending to be Rachel. Or if she's playing the role of Paul, she is going to take on as much as she knows of his personality and try to react in a very realistic way. And that's so helpful as people navigate what they think might be a difficult performance or promotion conversation, or any of the types of conversations that they have. So this is a powerful way of adding the context in: not just, I'm talking to Nadia about this one-off and getting ideas back, but Nadia knows my context.

06:19   Closing the Feedback Loop

Parker: Now, the third thing that we want to really highlight is this idea of closing the feedback loop. We believe that if organizations were able to have more collective feedback, if individuals were able to get more feedback from their peers, that would be one of the most important elements of helping people learn and grow. And so we built a very simple and easy way for Nadia to collect feedback on your behalf. If you were asking for feedback, you could send out a link automatically: just copy that link and send it to the people you are interested in hearing from. And an anonymized version of Nadia is going to ask them for their thoughts, feedback, and reflections on what their take is.

And this is a fully private, anonymous, confidential conversation. Nadia will synthesize all the results. It's extremely short, takes two or three minutes. And the big innovation is that people find it quite easy to use, just a sort of stream of consciousness: hey, here are my quick thoughts. Nadia is then able to synthesize that, play it back to you, see if that's correct, and then say, okay, this is what I will bring into my report. The early feedback on this is incredible. People actually say, I wanna give more feedback than just the question or two that's there. They love the natural voice interface. And then when you come back as a leader, you get to see your feedback report. So this could be a formal feedback moment in a business, but it can also be informal: you saying, hey, it's been a couple of months, I'm a new leader, I just wanna hear what's going well and what I could do better. And it's extremely powerful on that front.
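The feedback-loop flow described above, a shareable link, anonymous free-form responses, and a synthesized report, can be sketched in a few lines. This is a hedged illustration only: `FeedbackRound` and its methods are invented names, and a real product would do LLM-based synthesis rather than the simple join shown here.

```python
import secrets

# Hypothetical sketch of the feedback loop: a leader creates a round, shares
# the link token, peers submit anonymous comments, and the report aggregates
# them without ever tying a comment back to a respondent.

class FeedbackRound:
    def __init__(self, question: str):
        self.question = question
        self.token = secrets.token_urlsafe(8)  # the link you'd copy and send
        self._responses: list[str] = []        # stored without identities

    def submit(self, token: str, comment: str) -> bool:
        """Accept an anonymous comment if the link token matches."""
        if token != self.token:
            return False  # wrong or expired link; reject
        self._responses.append(comment.strip())
        return True

    def report(self) -> str:
        """Synthesize anonymized feedback for the requesting leader."""
        n = len(self._responses)
        themes = " / ".join(self._responses)
        return f"{n} responses to '{self.question}': {themes}"

# Usage: an informal check-in like the one described in the talk.
round_ = FeedbackRound("What could I do better as a new leader?")
round_.submit(round_.token, "Share context earlier")
round_.submit(round_.token, "Delegate more")
summary = round_.report()
```

The design choice worth noting is that identities are never stored at all, so anonymity holds by construction rather than by access control.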

So those are a few of the highlights. For those who haven't seen what it's like to have a conversation with Nadia, as a reminder, you can come in and ask about anything. In this case, this is an example of a skill-building plan to build delegation skills. I'm obviously not gonna go through this whole thing, but Nadia is giving you some ideas, asking you where you wanna start, and working with you to refine what that plan is. She's suggesting a four-week plan, again through time, to build some of these skills, and inviting you to respond to it. She's refining that and saying, okay, do we agree that roughly these are the things we could do to build this skill? You're looking at it and saying, eh, this is actually not quite right, I wanna refine it a little bit. So she refines it.

And one of the things that's pretty powerful, just at the end, is that she is able to do some of this work on your behalf. So she's saying, you know, let's have another meeting, let's check in: does June 11 work for you? And then she's able to generate what you see here: Aidan asked, can you draft an email, one to my manager and then another to a couple of the people that I'll be working with, communicating my plan? And so she writes this email for you. You can download it; you can copy and paste it. You can make this as seamless and easy as possible.

So these are just a few of the new features: building out this context and trying to make it as easy and painless as possible for people to get the kind of guidance and advice they need, right there on their shoulder, and to gather the feedback from their team and their environment so that they can learn and improve.

So we're extremely excited. We've had great early responses from the beta customers that we've worked with. And we just wanted to share this because this is sort of capturing some of the vision that we have of Nadia that's able to be integrated into every part of your business. 

10:30   Customizing Nadia for Your Talent Calendar

Parker: Just to highlight one more thing, so this is what users are loving. As the person deploying this in your talent program, you have an opportunity to customize it. We call it performance review, but it's probably more your company-moments timeline. And this is customizable by individual. So you can say, you know, these roles get these moments, this geography gets this moment. We understand all the challenges of running, you know, a global business. And you can assign work, assign initiatives to people, you can check to see what the progress is, and you can make sure that these ideas, these best practices that we're often trying to suggest and push out into the business, are well received, they're personalized, they're engaged with, and people are able to leverage them. 

So it's a powerful new set of features, going back to this vision: giving the talent team the ability to push these programs into, you know, the hands of users, and to do so in an incredibly personal way. And then having users able to use Nadia, week over week, day over day, as they problem-solve and work through the particular challenges that, you know, we're all facing as often overwhelmed and time-poor leaders. Having this kind of coach both frees up time and allows us to do a better job.

Product Demo: What's Next for Nadia

Valence CEO Parker Mitchell shares the next evolution of our AI work coach, Nadia. The power of AI coaching comes from gathering context: about each employee, each team, and the entire organization. Nadia's unique ability to understand and remember what she learns about individuals and organizations allows her to take the moments that matter in the talent cycle (goal setting, performance reviews, the launch of new leadership frameworks) and personalize them to every employee, in every role.

Today, we're building Nadia to better understand team dynamics, not just individual needs. She's ready to help people role play difficult conversations, based on what she knows about each member of the team. And she's able to help teams close the feedback loop, making anonymized feedback easy to share in minutes and helping users synthesize and understand how they can become better leaders and teammates in real time, not just during official performance review periods. Finally, Nadia has new abilities to integrate into the HR talent calendar, personalizing and assigning initiatives to individuals across regions and functions and giving HR leaders a seamless way to track progress on the critical growth moments across their organization.

Parker Mitchell

Founder and CEO, Valence

AI: The Untold Story

We sat down with LinkedIn co-founder Reid Hoffman and Financial Times editorial board chair Gillian Tett to go beyond the headlines and get a deeper understanding of the economic and workforce impact of AI. Their message for HR leaders: act now, because the change is already here.

Key Takeaways

1. AI is here to amplify human potential. Instead of focusing on AI as a way to cut costs or reduce headcount, Reid and Gillian see AI as augmentative: a new member of the team that changes workflows and unlocks capacity for human creativity.

2. "AI is the best educational tool we have created in human history," Reid says. There are real challenges around AI's impact on how young people learn and develop the skills they need to enter the workforce. But Reid and Gillian explore how AI can create new models of education and training that personalize instruction in a way that was previously only possible at the most prestigious universities.

3. With AI, everyone becomes a manager. Reid sees a near future where every employee has a team of AI assistants that they manage to get work done. "There won't be such a thing as individual contributors anymore." In this world, the same EQ skills that make people great managers and coworkers become the skills that make them AI super-users.

4. "If you're not using AI, you're going to be under-tooled," says Reid. From leveraging AI to run better meetings to reimagining what's possible to achieve with an AI assistant at every employee's side, Reid and Gillian outline concrete starting points for driving change at scale. Because, as Reid says, in six months, if AI isn't embedded in your workflows, "It'll be like saying, 'I'm a carpenter, but I use rocks, not hammers.'"

Reid Hoffman

Co-Founder at LinkedIn, Manas AI, & Inflection AI

Gillian Tett

Editorial Board Chair, Financial Times

Key Points

00:00   How the Media Covers AI

Parker Mitchell: I thought maybe we could kick it off actually with you, Gillian, and the conversation, the public narrative around AI. How is the media covering the idea of how AI is gonna impact work and the economic and social impacts of that on our livelihoods? And wondering what are the things you think the media might be under-covering, or what are the stories that you think should get more coverage? 

Gillian Tett: Well, I think the media is pretty negative on AI at the moment. A cynic might say that's because they know that their own jobs are threatened. And one of the really striking things about the AI revolution is that it's threatening white-collar jobs, not just blue-collar jobs. And so it was very easy for pundits, who are working at financial newspapers, to say, actually, productivity increases are good, which is code for let's have fewer blue-collar workers if we have automation. And now that white-collar jobs are threatened as well, be they lawyers or traders or journalists, suddenly the narrative has changed. 

So I think there's a real concern about AI. From time to time, there is now a recognition about the extraordinary things that AI can do in relation to life sciences and other research capabilities. But for the most part, it's pretty negative. 

01:17  Now is the Time to Learn How to Work With AI

Reid Hoffman: Well, Gillian's exactly right about the, you know, kind of general media response. And I think the kind of things that aren't covered is that, one, look, whatever you're wishing, AI is here. If you haven't actually already personally found significant use cases that would help you in your own work and in your own life, that means you're not trying hard enough. And there's a general reflex to wait for when it stabilizes. Like, well, when they release the new iPhone, I'll check this out. And it's like, no. It's actually improving on an order of magnitude of months. And so you actually have to be, you know, kinda going with it. And I think it is scary. It is change. It is changes to white-collar work. And it is the case that businesses, when they encounter something new, start with, how do we cut, you know, costs? 

So, you know, could we take our marketing department from 10 people to two people? You know, can we do this with fewer journalists? Hence the point that Gillian was, you know, gesturing at. But the actual thing is that it'll change workflows and change everything else, and, as individuals, you can already begin to see what that is. And so we're telling, one, the positive story, namely, how do you get into it? What should you be doing? What should you be experimenting with? And two, what are the ways that we are essentially experimenting with to expand our capabilities as individuals and as offices? Because those capabilities are really there. 

Like, you know, for example, one of the things that I regularly use deep research for is as a research assistant on a broad variety of topics. I can now think much more broadly and synthetically about a number of different kinds of areas, relating them to what I'm doing, and I have an immediate research assistant. Now you say, should I get rid of the research assistant I have? No. Actually, frequently what I'll do is generate something and send it off to the research assistant saying, hey, could you track down this, this, and this about this, and maybe use, you know, deep research to follow up on these things and so forth. You know, as an iteration and as a thing to do. And I think that kind of positive story is really important. 

And the other thing that Gillian was gesturing at is that the negative story we've kind of, you know, wrapped ourselves in in the West is actually ultimately damaging to us. It isn't to say that we don't pay attention to the risks. It isn't to say don't pay attention to the issues. But the question is, AI is coming. It's like we're in a river, and it's going down. You can say, I don't like this river, I'm gonna throw my oar up, and I'm gonna yell. Like, okay. That's a really good way of navigating a river. Right? 

So how do I start steering? What do I learn? What's going on in terms of the currents and that kind of thing. And that's the kind of thing that we need to be doing as individuals, as industries, and as societies, and, obviously, of course, you know, for our audience, as companies. 

04:34   Augmented Intelligence

Gillian: I strongly agree. And in fact, one thing I'd say is about the reason we call it artificial intelligence. The person on the wall behind me, and I'm sitting in King's College in Cambridge, which is where Alan Turing was based, that's actually his portrait up on the wall behind me, and literally about a 100 yards away from me is where he used to live and did much of his work. It's called artificial intelligence because almost exactly eighty years ago, Alan Turing did the Turing test and basically spawned the term artificial intelligence. 

But I often wonder how different it would feel if we called it instead augmented intelligence or accelerated intelligence, because the way I see it is that it's not so much about replacement all the time, although sometimes it is, let's be honest. It's more about being an additional member of a team. And I say that because a few years ago, I gave speeches saying that there was one thing that AI could never do, which was to tell a really good joke, and therefore comedians had job security. And it turned out that I was totally wrong. The reason why I thought AI would never be able to tell a joke and challenge comedians was because the pre-transformers models of AI, which were basically path-dependent, based on logic, essentially could only produce very basic, crude jokes, like knock-knock, who's-there jokes, or wordplay, or Christmas cracker jokes, and they weren't funny. Post-transformers, where essentially you're dealing with probabilistic observation, they can produce jokes that are funny about half the time. 

And the dirty secret of humor is that actually comedy writers for late-night TV shows are only funny half the time. And the reason they know that is because those jokes are written not by individuals but by teams who chuck jokes into the mix and bat them around, and they finally produce a late-night television comedy. And adding an AI agent into that mix doesn't necessarily replace the humans. It simply adds to the jokes that are basically swirling around and gives more checks and balances. And humor is fascinating because humor in many ways is the very definition of cultural anthropology, which is what I studied for my own PhD and have done academic work in, because humor can't be predicted by an algorithm: it depends on contradictions in culture, on ambiguity and silences that we don't like to talk about, and very tribal behavior. So the fact that AI can now master even that, but can do that by being part of a team, is really important as a parable for what we might see emerge in many professions. 

07:12  AI Assistants for All

Parker: Yeah. Let me double click on that, Reid, because you've talked about having teams, individuals, having assistants, teams having assistants. I'd love to hear you expand on that vision for what work might look like. 

Reid: So let me start with just a couple of near certainties to predict in the near future. And near future is, like, a small number of years or a medium number of months. Which is, one, there won't be such a thing as an individual contributor anymore, because essentially every person who's doing this will have a small to large team of agents facilitating what they're doing, and they will be managing that process with those agents. That's one thing. It's almost like the kind of managerial skills you might exhibit today when you're using a deep research agent or other bots in order to do stuff. That's gonna get deepened. 

And another one, this one actually is a prediction I made in the MIT tech review a number of years ago, is that every single meeting that we do, we will actually have an AI agent, not just for transcription and notes, all of which is happening now. But where that AI agent would be going, oh, you know, Gillian and Reid are talking about this. Do you realize you should also ask the following question about Alan Turing, or refer to the following thing about the Turing papers in the King's library? You know? 

And so, you know, that kind of participation for information, for follow ups, for questions, you know, will then become like, it's almost like, wait, wait. Can we have this meeting? We don't have the AI agent turned on yet. We're gonna be so much less effective if the AI agent isn't here in the things we're doing. And, you know, part of this amazing transformation that we're about to go into, this is, like, only a few years into the future. And, you know, in fact, when you're doing white collar work, if you're not using AI tools, you'll be under-tooled. It'll be kinda like saying, I'm a carpenter, but I use rocks, not hammers. Or, you know, like, it's kinda gonna be a standard part of what is professional competence, and then that will spread through the entire team. 

So I think this kind of massive capability change is coming so fast that you need to get engaged, and you can't, like, say, we're gonna set these three people to go off and study it and come back in six to twelve months and tell us about it. I think that's too slow. So I think what you want to do is ask, what are the ways you can experiment quickly? So, you know, simple things that you can do, that I've done with organizations that I'm on the board of and others, are to say, well, make sure that there's kind of like a weekly, biweekly, you know, fortnightly review where everyone says, here's what I've tried, here's what I've learned, here's the things that I'm doing. Right? And then you can also similarly go, and here's some of the things that we should be doing, you know, as a group. 

For example, when I'm working with my groups, one of the things I do is I take a transcript of the meeting and I feed it into AI with some relatively standard prompts that say, is there anything we missed? Was there an important question? Was there an important source of information? Was there a follow-up? And this set of different things. We had the meeting, we did the transcript, we just put the transcript in, and it gives us a very quick response. It can even be before the meeting's over, that's how fast this can be, where you go, "Oh, right. Yeah. Yeah. We should do that too." And so do anything to start experimenting and seeing what fits our company culture, our market position, the way that we operate in our groups, and not just, hey, you know, let's go assign Sue or Fred to generate a report on this that, you know, we'll go look at in x months. 

11:12  AI Adoption Across Industries

Parker: Gillian, how are you seeing those adoption steps play out, sort of steps forward, steps back, steps sideways, either in the conversations you have with leaders or even potentially within journalism and the Financial Times itself? 

Gillian: Well, I wear several different hats, in that I am both overseeing this college, King's College in Cambridge, where academia is potentially being very challenged by AI in many ways. The good news is that the life scientists and the other scientists that I deal with in the college are being given extraordinary wings all of a sudden, to do the kind of research at speed that most had never dreamt possible. So they are totally positive about AI. 

Many people in the humanities are pretty negative about AI because they can see that it's basically either going to undercut their role as teachers, or, in their view, make, you know, a whole generation of students pretty stupid because they're cheating with AI and not using their brains. I mean, as it happens, Cambridge and Oxford are probably the most ChatGPT-resistant types of education in the world because they rely so heavily on small face-to-face interactions and what we call tutorials and supervisions, where students have to write essays and then talk about them for an hour or two. And that is AI-resistant in many ways, or rather AI-enabled, because you can use AI to research your paper, and then you're forced to discuss it as a human being, using what you've seen from AI. So I actually expect that going forward, we may well see more spoken exams, more teaching patterns of the form that we have in Oxford and Cambridge. 

In terms of journalism, you know, I was actually meeting with the CEOs of most of the big British media companies yesterday and moderating a discussion with them all about this very topic. And the message is that they are very threatened by the fact that AI companies are scraping their data with no monetary reward and often no attribution. And they're basically demanding some form of compensation for journalistic content being used to train models, which I think is entirely fair. What form it will take is unclear, but there needs to be some way to get the media ecosystem compensated. Otherwise, there will be no media ecosystem and content in the future to scrape. But when it comes to actually providing news, they're taking very different attitudes. I mean, the Financial Times is not using AI to write stories in any formal sense, maybe to do some research, but not to write stories. But it is using it to aggregate news headlines, for example. And I suspect you'll see a lot more of that going forward. 

And as far as CEOs are concerned, as Reid says, many of them have barely started thinking about it yet, but they need to quite urgently, because if they don't, they will get overtaken. And apart from anything else, they won't familiarize themselves with both the benefits and the risks around it. 

14:15  Personal Intelligence & AI Companions

Parker: Reid, I wanna pick up on something Gillian mentioned, which was the tutorial model at, you know, Cambridge and Oxford. It's famous. It's so successful. One of the companies that you cofounded a few years ago with Mustafa Suleyman was Pi, personal intelligence. Can you expand on that vision of having this idea of personal intelligence alongside you? 

Reid: So another kind of startling prediction, and this one is a little further out than the earlier predictions I made, is that within a small number of years, when we have a kid, we will actually have them have an agent that will go with them through their entire life and, you know, learn and help and so on with them. By the way, we will adopt that as adults sooner, because we won't have all the complexities around, well, what are the set of things around the child. And so part of that is having essentially a companion. And part of our idea when we built Pi, you know, pun intended, it's a personal intelligence, but, you know, apple pie, etc., is that it's training for this. And in my earlier book, Impromptu, I called it amplification intelligence, although augmentation intelligence is good too. 

When we're gonna be amplified, you don't just need IQ, you also need EQ. You need conversational capability. And part of that is to actually be a very good, you know, kind of companion in the things you're doing. And so Pi, I think, kind of set the standard for all of the other GPT-4-class models on how do you put in EQ, how do you have it be a conversationalist and ask questions, and how does it help solve a variety of those kind of, you know, life-navigation challenges. And it applies to work too, because social intelligence is part of the meeting, part of collaborating with teams, and that was important. But, you know, people don't necessarily think of that in its kind of top role. That's what Inflection and Pi are about. Now we've seen other agents, you know, Anthropic with Claude and others, beginning to develop along a similar line. 

16:33  Navigating Concerns About Job Displacement

Parker: How should company leaders, CEOs, help their workforces navigate some of the natural threats people will feel as they see AI do parts of their job, not just the augmentation parts, which I think people will be excited about, but bits that they maybe spent ten, twenty years becoming an expert on? I think that's gonna be the crux of change management in companies. 

Gillian: The really interesting thing for companies right now, I think, is actually, in many ways, the entry-level jobs, because what AI is replacing above all else are a number of the boring entry-level jobs that graduate trainees in particular would do for a couple of years to have, effectively, a white-collar apprenticeship into the wider world of work. And by that, I mean, you know, sort of early-career engineers, early-career lawyers, paralegals, or in the case of journalists, you know, your classic grad trainee journalist. And when I was running, you know, bits of the FT, I used to make all the new entrants do really dumb, boring stuff like write the markets column, which frankly could have been done by, you know, automation twenty years ago. But we still had people doing that, often because it was a really good training ground for learning how to handle data and information. 

So one of the questions is gonna be, how are we gonna train the next generation through apprenticeships and entry-level jobs? The flip side of that, though, is that if we start calling it augmented intelligence rather than artificial intelligence, and start trying to train people how to use it to make their jobs more effective, we may actually start to see people not only using it to be more productive, but creating whole new categories of work that we haven't even imagined yet, because that's been the story of calculators and computers. 

Reid: Actually, for young people right now, part of what I advise them to do is become as expert as possible in AI and come to organizations saying, I can be part of your AI transformation. Like, I will front-end this, I will use this too. I'm an AI native, I'm part of generation AI. I actually think that's part of how the transformation is gonna happen. And by the way, when you get to, well, how are we doing apprenticeship and so on: AI is, by just many, many miles, the best educational tool we have created in human history. Right? 

So it almost gets back to what Gillian was talking about in terms of the Oxford-Cambridge model. We can now have essentially, you know, a kind of quasi-tutor, not the same as an Oxford or, you know, a Cambridge, but one that is interacting one-on-one with every individual and helping them, you know, kind of get better at the way they're thinking and what they're doing. And so you say, well, how do we get people up the curve? Well, actually, using AI, learning from AI, and then using AI to do the work is part of what I think is gonna happen. And then, precisely as Gillian was gesturing at, we'll say, hey, as opposed to having, you know, twenty lawyers, we only need four. But then we're gonna figure out other new things that we need to be doing, or can be doing, that are really good for how we do business: risk mitigation, analysis, contracts, etc. And then the work will expand in, you know, different ways, just as it has in every adoption of technology. 

While there are concerns and things to navigate, the human amplification is, like, just simply amazing. We have line of sight to a medical assistant on every smartphone, running at under, you know, five pounds, five dollars per hour, that is there 24/7 for everyone who has access to a smartphone. We have a legal assistant, a tutor, etc. All of these things are part of that kind of amplification. And, you know, how do we get there? I'll end with one of the ways that I think white-collar work will be changing, which is: we already have coding copilots that essentially people with engineering mindsets are using. I think every white-collar job will have a coding copilot assistant, so that part of how you're doing journalism, teaching, evaluation, analysis, accounting will actually, in fact, be having an AI system that's doing coding with you in order to accomplish those missions. 

Gillian: Yeah. The key question we face is that we know that, you know, any innovation can either unleash our demons or the better angels of our nature. That applies to electricity. It applies to guns. It applies to nuclear power. It applies to anything that we've created. And if we look at social media, the reality is it unleashed our demons for the most part. They overwhelmed our better angels. I do think that AI, agentic AI, does have ways to potentially unleash our angels, and the question really is how. And I would argue, simply, to be totally biased, that mixing artificial intelligence or accelerated intelligence with anthropology intelligence, i.e., a sense of our own humanity, the other AI, is one way to go. 

Parker: I think that's a terrific ending, a terrific inspiration. The mission of our time is to ensure that we steer AI's adoption by humanity to unleash our better angels. What a terrific conversation. Reid, Gillian, thank you both so much for making the time to join us. We really appreciate it.

HR: When AI Joins the Org Chart

We keep hearing that AI is joining the org chart, but what exactly does that mean? In this roundtable discussion, Lucien Alziari (former CHRO, Prudential), Diane Gherson (former CHRO, IBM), and Larry Emond (Senior Partner, Modern Executive Solutions) explore how AI is becoming a new part of how work gets done, how the best leaders are onboarding AI into their organizations, and what the future of talent looks like.

Lucien Alziari

Former CHRO, Prudential

Diane Gherson

Former CHRO, IBM

Larry Emond

Senior Partner, Modern Executive Solutions

Key Points

00:00  AI Joining the Org Chart: A New Reality

Das Rush: Thank you so much for joining today, Larry, Diane, Lucien. We've had a lot of conversations about various topics. And this might be one of the biggest ones at the moment, at least, that I've been hearing, which is, you know, AI is joining the org chart in 2025. Maybe it already has for a lot of organizations. So I wanted to start there getting each of your takes with what does it actually mean for you when you hear somebody say AI is joining the org chart? 

Larry Emond: Well, you know, as you know, my central life activity is bringing CHROs around the world together in meetings. And I had three meetings in May, in Denver, New York, and Boston, with about 45 CHROs total. And in all three of those meetings, of course, we spent time on AI and HR. And in all three of those meetings, we ended the meeting talking about what is the future of this thing and what are we gonna call the function, because the term human resources, or even people, is already antiquated. Right? And so my point is that all those CHROs have fully embraced the idea that AI agents in particular are already becoming parts of their workforce. And, of course, most of them are personified with names like Nadia, and they're becoming, you know, part of the team. 

Diane Gherson: You know, when I was at IBM, we had agents. We gave them names, but we weren't receiving emails from them. Now we're receiving emails from them. So, you know, they're becoming a little more personified. But I think at the end of the day, back to the point that Larry made, there is a need for us to think about how to fit them into how work gets done. And we haven't really thought through, when it's on the org chart, as you say, what that means. You know, they don't have to do this to us. We don't have to be sort of back in the industrial era, where you organized work around the machines. Right? If you put the human at the center, then maybe the org chart would look very different than if you didn't. 

01:59  CHROs as Chief Work Officers in The AI Era

Das: Yeah. And, Lucien, I wanna get you in here because I think this leads in really well with something that you very presciently said at our last summit in November. And you had said, like, you know, CHROs are gonna become Chief Work Officers because AI is this new way that work is getting done. And for HR leaders, it's all about thinking how work, technology, and talent come together, and that you find most people aren't thinking enough about the work in that equation. How should CHROs right now be thinking of the work? 

Lucien Alziari: Yeah. I think the most encouraging thing is that even over the last few months, I see this notion of the work really coming much more mainstream into the discussion. And I think catchphrases like, you know, there's an AI agent on the org chart. Okay. It's fine. But like you guys, I said, well, what's really underneath that? Because an org chart is basically how work gets done in the organization, how an organization organizes itself to do the work. 

And I still feel, and this is my view of the future, that it's basically an optimization game, which we can manage very dynamically, between the people and talent that we've got and the capabilities that technology can bring. Many of those are now real, whereas before they were like a gleam in the eye. Now they're scalable. They're with us. And then the third part of the equation is understanding of the work. The big void for me is still, who's taking control of the work? And I don't want to be misunderstood, because people have said, oh, okay, so it's about the work, not about the people. No. It's about both. Alright? And it's about the technology. And I don't see the heads of HR or people, whatever you wanna call them, becoming the Chief Work Officer exclusively, but I just want us to kind of own that space. Nature fills a vacuum, and if we don't step up and own this space, then somebody else will. And I'm not sure that's the right answer for the organization. 

04:08  Organizational Change with AI Integration

Das: So how do organizations change as AI joins the org chart? You know, Lucien, you made that point that the org chart itself is really just how you're architecting your organization to get work done. How is that architecture changing as AI comes on? 

Lucien: There has been a trend, but I think it's gonna accelerate fast now in terms of minimizing the number of layers in the organization because that drives speed and adaptability. I think AI supercharges that. So I think there will be fewer layers in organizations. I think there's gonna be a big rethink about the role of managers, as we get much more granular and forensic in terms of understanding how work gets done. And I think some of that, the work is gonna get managed and done with heavy technology use. 

So, I think there's gonna be quite a debate about what's the role of managers, what's the span of control that we should expect of managers. The old model of the middle manager who managed the work, coached the people, and communicated messages for the organization: I think that sort of trio of themes is gonna get rethought. And so, fewer layers, fewer managers, probably bigger spans of control, but more focused on development and communication than on the management of the work. 

Diane: Yeah. I would second that. I actually authored an article in the Harvard Business Review this month about that topic of middle management. And I think, more than anything, now that we've got AI, we have to be thinking about reframing for our people what work is about and what we expect of them. What is the end game? I mean, you said optimization, Lucien. Maybe it is, maybe it isn't. But, you know, let's be clear. The old game was to reduce headcount and to outsource. That was an industrialization game. The new game has so many more possibilities, and work does not have to be fixed, right, into fixed jobs. We're looking for much more fluidity. Okay? So let's start talking about what that looks like. Maybe more variation in the kind of work that you do. Maybe it's your skills that matter more than your job, etc. 

So, having those kinds of conversations, that's what we expect of middle managers now. I thought, you know, the Shopify CEO throwing the gauntlet down to his employees was exciting, but there's a lot of fear, and middle managers have to think through what this looks like for you guys, you know, my organization, my piece of the organization, and what you could expect. And I think that reframing part, framing things for people, is a role that middle managers still need to play, and often they're not. But they don't have to do the coordination work and all of the stuff that you mentioned, Lucien, on that passing along of messages. You can go to a town hall and, you know, directly talk to your CEO from whatever country you're in. 

So things have changed dramatically for what middle managers do, and there will be fewer of them, but I think their jobs will become actually more important. 

Lucien: Many of these discussions that we're having now are very reminiscent of when the Internet came in. You just think about, how did the Internet change the world? And a lot of jobs went away, and a lot of jobs were created. And so, at a macro level, I'm actually quite optimistic. Now clearly, if you're caught in the transition, it can get very challenging, but that's why I think organizations need to be helping employees and potential employees adapt to this. And the adaptation isn't gonna be deep technical skills, because the half-life of any deep technical skill is getting shorter and shorter. But the human skills, I think, are gonna sustain. 

So, I would always encourage CHROs to sort of go back and say, well, what really matters in this debate? And then, you know, what are the things that you can do? Because I do think this is, as I said before, a huge invitation to just be creative and invent playbooks because there is no playbook at the moment, and that to me is really exciting. 

08:21  Driving AI Adoption Through A Culture of Experimentation

Das: And that might actually be the most important message for anybody to take away right now is, like, there is no playbook. Diane and Larry, I'm curious what you're seeing. What are the best organizations doing, and how are they getting creative? Especially when it comes to this moment, you know, our theme is, like, driving adoption. We're really in this moment where the task is to help people find their ways to these tools and to be in good partnership with them. What are you seeing the best organizations do? 

Larry: I think the ones that are leaning in the best, and I know it through the HR organizations, with all these CHROs I know, are the ones where, let's say, their head of talent or talent management has almost an AI-first mindset. So you're testing every possible tool. You try Nadia. You try another one. You try another one. You mess with some stuff. You program yourself in ChatGPT or Microsoft Copilot. You're technology first, AI first. And those are the companies that have gotten way out ahead on this. And it's interesting. 

There's a woman my firm actually placed as the Head of Talent in a company. And she's been a guest a couple of times at CHRO meetings, and she does a little thing on some of the experiments, you know, she's kind of a mad scientist, that she's been working on. And every time she's done it, the reaction by the CHROs is: I need that persona as my Head of Talent. That's the future. And I think it's those companies where, let's say, all the COEs have that AI-first kind of mindset. They're just gonna get way out ahead of everybody else. 

Diane: I think it starts at the top, but it also starts at the bottom. Right? You need to have a culture of experimentation. So you're saying to people, you know, we want your ideas. And I've seen companies do a great job of crowdsourcing, saying, hey, what's the best use of AI for us? So I think that's something that helps. I think the other thing is that the more HR uses AI to give people agency, the more people start to understand that it's actually a really cool thing. Because, you know, in the old days, HR used to sort of do things out of being experts. Right? And then here's the program, and we're rolling it out. We'll do change management. But now we can cocreate with our people using AI. And even if we have, you know, a hundred thousand employees, we can get all of their responses and have AI summarize the pros and cons, the different points of view, in a matter of minutes. 

And people can see the different points of view, then you could turn it into a poll, and people can respond and say what they liked and didn't like. And suddenly, you're using AI as a force for good in terms of people having a sense of agency, and it gets them excited. It's not something being done to them. It's something that they're part of. So I think that's part of creating a world where AI belongs to all of us and lets us all participate and learn and get something out of it. And I think that kind of mindset is really important to get it going. 

Lucien: If I can sort of pile on, because I just so agree with what both Larry and Diane have said. I see two approaches with AI, and I'm not a technology person at all. But I think some of the focus is, okay, which jobs are they gonna replace? It's kind of an efficiency thing. It's a new way of driving productivity. And, look, we're business leaders. We can't deny those possibilities, and we should go after them just like any other business leader does. But the most interesting stuff is, well, what can we now do that we always wanted to be able to do but never could? 

So we're here because of our interest in Nadia. Now imagine five years ago if we'd have said, how about let's give everybody in the organization a coach. Alright? Or let's give everybody in the organization a work assistant so that they can get some of their work done more efficiently. And we'd have said, that's a brilliant idea, but we can't afford it. Now we can do both of those. Alright? And those are just the first thoughts that we've had. 

12:25  AI’s Challenge to the Talent Pipeline

Larry: I think one of the big challenges going ahead is how are we gonna develop senior people, and I'll use an example. I did a couple meetings last year with Chief Legal Officers, just kind of for fun as an experiment. I thought they were gonna be horrible. They were actually great meetings. Not as much love in the room as a bunch of CHROs, but they were still pretty warm and funny. I found them quite entertaining. We talked about how the law firms they buy their services from are openly saying, you know, we're gonna need a lot fewer junior lawyers because AI agents will do discovery and research and so on. To which, of course, the question is, well, how are you gonna develop senior lawyers? Right? And that's gonna be true across the board. There's gonna be the same thing in finance. It obviously is the same thing in HR. Those are fascinating challenges. I mean, opportunities, but also real challenges. 

Diane: You know, I think this question about the loss of the entry-level rung on the ladder is probably the first and most important question for HR leaders to be thinking about, because it's real, it's here today. You know, maybe with some of the stuff you described, we're not sure when that's gonna happen, but this is here already. And so the question that we've got to ask ourselves as HR leaders is: is that okay? Is it okay that we're having a whole generation of people graduate from university with no access to entry-level roles because, you know, those roles are being taken over by AI over a shortish period of time? And what does that mean for our pipeline of talent as an organization? 

That is something we could design into our organization. We don't have to be victims of this. We can be in the driver's seat and say, this is how we want it to work. We are going to have an intake of this many people, this kind of training, these kinds of, you know, hands-on roles, and this is going to be the role of AI. We don't have to say AI is gonna take on all the, you know, entry level legal work or whatever. But I'm not seeing enough to, back to your earlier question, Das, I'm not seeing enough HR people actually say, I'll take that one on. Right? We're gonna design that. Don't worry. It'll be good. Right? It's more, you know, tell us what the technology can do and, you know, we'll hire around it. 

Lucien: Yeah. I think one of the worries, and we're still in the very early innings of this debate, is that if you just take what AI does, you're basically taking what anybody else, any of the companies that you compete with, can take. You're just taking industry standards. So you're not building anything that's competitively advantaged. And so I'm still actually quite optimistic about this combination of humans and technology. Because if you're just taking kind of the B-minus AI answer that applies across the board, in many cases, that's fine. That's enough. But it's sure not gonna make your business win competitively. And so I think it does come back to the creativity, the critical thinking, the curiosity of humans to actually ask the right questions, pose the right problems, and then the technology is there to help solve them for you. But the technology is not gonna help your company figure out how to win versus others. I still think that's the human piece, and that's why I'm still overall quite optimistic about this. 

15:57  AI Coaching for Manager Development

Das: A lot of times we've got in the habit of doing something for the sake of doing it. Right? Like, we're gonna deploy technology to deploy technology. We're gonna do performance reviews to do performance reviews, and we've kind of lost sight of, like, why do we do performance reviews? It's to make us better at the work. Why do we deploy technology? It's to make us better at the work. And I think that's a common theme. And this disruption of AI is really kicking up a lot of that dirt. You can't hide anymore. It's really clear when you're doing work for the sake of work. 

So what are you seeing there with kind of especially AI coaching and the role of it in helping organizations both bring in these opportunities for entry level talent, but also get back to the purpose of, like, driving business impact and doing work that's actually meaningful and moves the needle? 

Diane: You know, the role of managers changed dramatically with the pandemic, and it also changed with the generations that are coming into the workplace. Managers need to have empathy. They need to understand the whole person. They are dealing with people who are working both remotely and on-site, so they don't have the same ability to pick up on things the way they might have before. They need to resolve conflict, not by putting people in the same room, but actually resolving conflict in different ways. And so managers in particular need AI coaching to help them develop a collaborative work environment and so forth. 

So I think it rises to all the new challenges that managers are facing today. I was very concerned, you know, when I was looking at it through the pandemic, because so many managers were falling apart and just burned out, not able to do it, and people didn't want to be managers. And I think now you've got a different situation, because you're enabling them in a very new way. 

Lucien: I just think it's really exciting what Nadia can do, the whole field of AI coaching. And I think it started on the development side, and that is the right place for it to start. But I think over time, it's gonna be around coaching high performance, not just coaching as an end in itself. And so I think it is going to go into the adjacencies around the management of the work, the management of performance, all of those areas. And again, that's always been the work of HR, but I think this is a great new capability for us to become even more effective at it. 

18:31  Final Advice for HR Leaders on AI

Das: What's your kind of bottom-line advice? Don't lose sight of this in the next six months, twelve months. 

Larry: I'll just make one comment to CHROs and other senior HR leaders that might hear this. You have to take the time to meet these agents yourself. You have to spend time. You know, as you know, I've had Nadia as a guest in many CHRO meetings. We always do this kinda last thing in the day. Imagine you're a frontline manager in your company. What would you ask? And when they engage with her, I'm always shocked how many of them are like, "You're kidding me." Well, that's on you for not having already taken time to see what's possible with these agents. 

Diane: Well, I would double down on Larry's comment about learning yourself, like, get your hands dirty. You know? Because it's really important to be on top of the technology. So I would add to that by saying there are some really, really good, you know, Substack and LinkedIn newsletters that you can find that keep you on top of the latest technology and the thinking about it. There are way too many in the HR/AI space, but among the well-curated ones, I think David Green's, in the talent space, is exceptional. And if anyone isn't reading that every month, they should be, because it really does go quite broad in terms of looking at all the different research that's been done and the impact of AI on work and on people and so forth. So I highly recommend that. I just think it's an important part of every HR leader's day to be staying on top of what's happening in AI, in HR, and the work. 

Lucien: For kind of a 60,000-foot perspective, think about what really makes the best CHROs the best CHROs, and I think there are three things that come to mind. One is that they just have an intellectual curiosity about, sort of, what's really happening here, and they can get to the underlying cause of issues. The second is that they have a passion around the business and how they can bring all of the capabilities that are available to them to help their businesses succeed and remember that that is their fundamental purpose as an organizational leader. And then the third is they're always able to bring this kind of outside-in perspective so that you keep your own organization in the right context, and we don't become guilty of sort of wishful thinking. 

And so here's a great opportunity now to say, it's for you to show thought leadership because nobody else in the organization owns this right now. Right? So it's an opportunity for you to lead and to say, what's the potential of these amazing new capabilities? But your job is not to deploy technology. Your job is to help the business win, and I think the next path to help the business win is this really granular understanding of the work and how the work best gets done with this combination of people and technology. And it's unlocking a whole new game for us. So I think we just need to go there. And nobody's got a playbook, so think about it and then lead on it. 

Das: Fantastic note to end on. So, Diane, Lucien, and Larry, like, thank you so much, for joining today.

HR When AI Joins the Org Chart

We keep hearing that AI is joining the org chart, but what exactly does that mean? In this roundtable discussion, Lucien Alziari (former CHRO, Prudential), Diane Gherson (former CHRO, IBM), and Larry Emond (Senior Partner, Modern Executive Solutions) explore how AI is becoming a new part of how work gets done, how the best leaders are onboarding AI into their organizations, and what the future of talent looks like.

Lucien Alziari

Former CHRO, Prudential

Diane Gherson

Former CHRO, IBM

Larry Emond

Senior Partner, Modern Executive Solutions

Beyond Chatbots and Search Boxes

Jeff Dalton, Head of AI and Chief Scientist at Valence, has authored more than 100 research papers and holds multiple patents in search, natural language understanding, and question answering. In this conversation, he shares his unique perspective on the pace of change in AI capabilities, lessons learned from a career spent breaking the barriers of what's possible with AI assistants, and a deep dive on the architecture that powers Valence's AI coach, Nadia.

Jeff Dalton

Head of AI and Chief Scientist at Valence

Key Points

00:00  The Excitement of AI Coaching & Virtual Assistants

Das Rush: To get started, like, what has made you so excited right now about this space of AI coaching and this potential we have to build, like, true virtual assistants and coaches? 

Jeff Dalton: What's really exciting to me, and I think to a lot of people in the field right now, is that the pace of change is just absolutely amazing. There's talk about kind of exponential growth in terms of the AI capability that we're seeing. The difference between what we saw three years ago and what we have now is like ten years of change almost overnight. And what that means is that what was impossible before, even a few months ago, is suddenly now possible. 

And what really makes that exciting is the fact that many of us have had this vision, as you mentioned, for a long time, twenty years, ten years, and some people for a lot longer, some people for fifty or even sixty years, of having this AI assistant that can actually be with us. And suddenly, many of these things are much closer to reality in a way that feels very tangible and exciting. My vision for assistants is also that assistants matter because they help us accomplish a goal that matters to us. Right? So in coaching, whether that's dealing with a coworker who's struggling, or helping you achieve your next career goal, or your next big promotion, or your five-year plan for what that looks like: what's a coach that's gonna help you along that path, right? 

01:27  Applying AI to Build a Virtual Coach

Das: Now that you've come over and are working on Valence and building Nadia, kind of what are one or two of the ideas that you worked on earlier that you're finding now are really applicable to building this AI coach? 

Jeff: Yeah. Certainly. So I've been around for a little bit. I did my PhD probably fifteen years ago now, working on intelligent search systems. How could they know more about us and know more about the world? And we did that using something called knowledge graphs, which were a structured, kind of geeky way we could encode facts for machines to be able to understand and do question answering. And I continued that when I went to Google, where I worked on health search. So we'd take something that says, "I've got gunk in my eye," and turn that into something that we can give an answer to, leveraging a knowledge graph. Right? And, hopefully, make your health information a little bit more reliable in the process. 
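At its simplest, the knowledge-graph idea Jeff describes encodes facts as subject-relation-object triples that a machine can query to answer questions. A minimal sketch in Python (all facts, names, and the query step here are invented for illustration, not Google's actual health graph):

```python
# A tiny knowledge graph: each fact is a (subject, relation, object) triple.
# All entities and facts below are made-up illustrations.
triples = {
    ("conjunctivitis", "symptom", "eye discharge"),
    ("conjunctivitis", "type", "eye condition"),
    ("stye", "symptom", "eyelid bump"),
}

def answer(symptom: str) -> list[str]:
    """Return every subject linked to this symptom via the 'symptom' relation."""
    return sorted(s for (s, r, o) in triples if r == "symptom" and o == symptom)

# A free-text query like "I've got gunk in my eye" would first be normalized
# to a canonical symptom, then answered by graph lookup:
print(answer("eye discharge"))  # ['conjunctivitis']
```

The hard parts in a real system are the normalization from free text to graph entities and the scale of the graph; the lookup itself stays this simple in spirit.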

And what we quickly realized is that what was in the search box wasn't enough. We needed what was outside of the search box. We needed an AI assistant that had a plan, that could talk to us, that could reach out to us. And so I went to work on the Google Assistant, and we tried to build some of those technologies and tools. At the time, the technology just wasn't there. But the underlying elements of having machines understand us, and leveraging domain experts to be able to really deeply understand the world, are all fundamental components of the future AI assistants that we're building. 

02:48  Breaking (and Rebuilding) Virtual Assistants

Das: You know, you mentioned there, Google, like, your medical assistant. And one of my other favorite assistants that I know you built was a virtual kitchen assistant. Could you tell that story? 

Jeff: Yeah. So not too long ago now, around 2021, my research group competed in the Amazon Alexa TaskBot Challenge. And our goal there was to do something that hadn't been done before: to have something that you could cook along with, to do something real in your kitchen. So not just talk to your assistant, but actually see what your system was doing, use a screen, use rich interfaces to be able to do something in the world, and to have a coach there with you along the way. And along the way, we realized that pretty much everything that we had for the current assistant was broken. The speech, the voice, the real-time voice. None of that existed. And so we had to build it, and we had to build a whole new open assistant toolkit. 

What was really transformative for us was that we were right on the cusp of the LLM transition. We had just hit first-generation GPT-3, where we could take in and actually transform a recipe. So you're like, "I now live in Colorado, transform this for high altitude," and it could make that possible. Really kind of wow moments that demonstrate just the potential for the assistant to be adaptable at the next level. 

Solving Key Challenges in AI Coaching

Das: You know, you mentioned breaking the assistant. Right? Like, having to kinda break every piece of technology to build the assistant. And I think that ties into something that's been kind of a refrain at Valence, which is, you know, AI has a strong central promise of being able to personalize technology to us and to make it far more natural to use. 

At the same time, it's not fully formed, and there's a lot of rough edges that we're smoothing out right now and a lot of problems that need to be solved and worked on. What are some of the most important problems that you and your team right now are working on as you build Nadia? 

Jeff: Like any new technology, we still have a lot of rough edges. We're still working out a lot of the details of what the capabilities are, and I can talk to you probably for a whole hour just about the different kind of challenges and aspects of what we're going to need to do to build the coach of the future. So I'm just gonna give you a little bit of a taste for what that looks like and then maybe talk a little bit about how Nadia works today. 

So the first thing is how can we build an AI that you can build a relationship with, that you can trust over time so that Nadia can grow with you and adapt with you in the long term? Second, how can we scale Nadia so that Nadia is not just a coach for you, but it's a coach for everyone across all organizations, across all different types of domains? And the third challenge is how can we build an adaptive and proactive coach that's going to change and become more personalized with you over time so that your Nadia experience today is very different from your Nadia experience six months from now or a year from now in ways that are fundamentally different than what we have. 

And as we're kinda working on those challenges, I wanted to just talk through a little bit about where we are today. Here's an overview of the architecture of Nadia, at a little bit more of a technical or conceptual level.

05:50  Nadia’s AI Architecture & Capabilities

Here at the base, again, we have foundation models. So those are the large language models that you probably have heard about. There's not just one large language model. It's many different types of models, small models, big models, state-of-the-art models, multimodal models, reasoning models, all these different models that, when you're using Nadia, you're using a suite of the best-in-breed, state-of-the-art language understanding building blocks that go into building the next-generation assistant. 

The next level that we have is memory. Nadia has a couple different kinds of memory: memory about you, memory about your organization, and memory about coaching. Memory about you covers the things that you expect a coach to remember. You expect them to remember your past conversations, the last time you shared your calendar with them, the documents and information that you uploaded, as well as information that you've shared about the people in your network. 

For your organization, Nadia is custom-built and bespoke to your organization. So she knows your OKRs, your company values, your training documents. The coaching that you get from Nadia is bespoke to your organization and leverages the right frameworks, so the coaching is most effective for your team and for your overall organization.

And lastly, for me, most fundamentally, coaching. Nadia is nothing without her deep, really differentiated expertise from expert coaches. So we work with a set of expert coaches to curate knowledge, to curate situational understanding that can then be used in future conversations, knowing the best frameworks to use, so that it's not just a generic off-the-shelf coach. On top of that, memory alone is just not enough. Memory is only as good as your ability to use it. So on top of memory, we have an intelligence layer, or hypothesis engine, for building a plan for this conversation and for future conversations, pulling in just the right elements of memory at the right time to make that plan and to have a successful conversation. 

On top of that, we have the interface layer. We have Nadia's core capabilities to be able to execute that plan, whether today that may be a skill-building plan and tomorrow a reflection on feedback that you have from coworkers, orchestrated as part of the conversation, of course, in a rich multimodal interface experience. And around all of that, we have the safety and guardrails, because one of the fundamental pillars of Nadia is the fact that this is an enterprise tool. We want people to feel safe and trust it, so you can talk about anything, and if there's something you can't talk about, Nadia will stop you and steer you away from topics that aren't allowed, in ways that are gentle, that are coaching-approved, and that maintain the fundamental trust and safety of the coaching experience. 
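The layered architecture described here (foundation models at the base, then memory, an intelligence layer, capabilities, and guardrails wrapping everything) can be sketched roughly in code. Every class name, blocked topic, and plan step below is a hypothetical illustration of the layering, not Valence's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """The three memory types described: you, your organization, coaching."""
    user: dict = field(default_factory=dict)          # past conversations, calendar, documents
    organization: dict = field(default_factory=dict)  # OKRs, values, training material
    coaching: dict = field(default_factory=dict)      # expert-curated frameworks

class Guardrails:
    """Safety layer wrapping everything; the policy here is a placeholder."""
    BLOCKED_TOPICS = {"medical advice", "legal advice"}

    def allows(self, message: str) -> bool:
        return not any(topic in message.lower() for topic in self.BLOCKED_TOPICS)

class HypothesisEngine:
    """Intelligence layer: pull in the right memory and form a plan."""
    def plan(self, message: str, memory: Memory) -> list[str]:
        steps = ["acknowledge", "apply_framework"]
        if memory.user.get("open_goal"):          # memory shapes the plan
            steps.append("follow_up_on_goal")
        return steps

class Coach:
    def __init__(self):
        self.memory = Memory()
        self.guardrails = Guardrails()
        self.engine = HypothesisEngine()

    def respond(self, message: str) -> str:
        # Guardrails run first: gently redirect off-limits topics.
        if not self.guardrails.allows(message):
            return "That's outside what I can coach on. Let's refocus."
        plan = self.engine.plan(message, self.memory)
        # The interface/capability layer would execute each step, typically
        # by prompting one of several underlying foundation models.
        return "plan: " + " -> ".join(plan)

coach = Coach()
coach.memory.user["open_goal"] = "delegate more"
print(coach.respond("My team meeting went badly"))
# plan: acknowledge -> apply_framework -> follow_up_on_goal
```

The design point the sketch tries to capture is the ordering: guardrails wrap the whole exchange, and the hypothesis engine decides which pieces of memory matter before any model is called.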

And with that, I think we'll probably be talking about some of the new and exciting kind of product features that we're building on top of this architecture. 

Das: Jeff, thank you so much for joining us for this, and looking forward to seeing what you and the team build in the coming months.

WPP x Nadia: Transforming Employee Support

WPP, one of the world's largest creative agencies, isn't just adopting AI, they're building it with their proprietary platform, WPP Open. But Lisette Danesi (Global Corporate People Lead, WPP) knows that it's not just the platform that matters; it's an employee experience that makes people feel safe, supported, and seen. She shares how WPP's partnership with Nadia has made that experience a reality across their entire global HQ workforce, the remarkable results they've seen, and detailed tactics for driving adoption and change at scale.

Lisette Danesi

Global Corporate People Lead, WPP

Key Points

00:00  WPP's AI Transformation Journey

Lisette Danesi: Hello, everyone. I'm Lisette Danesi, and I lead the corporate people function at WPP. For those less familiar with us, WPP is a global creative transformation company. We work across marketing, advertising, communications, and consulting. We're home to 108,000 people around the globe in more than 110 countries, all connected through our London headquarters. 

Now, it's fair to say AI has moved from buzzword to boardroom priority. At WPP, it's reshaping the way we work, the way we think, and, yes, the way we lead. We know the business case for AI is compelling. It can drive efficiency, creativity, performance, but the real challenge is human. How do we engage our people in this transformation, especially at scale? Because without employee adoption, even the smartest tech won't deliver the results. 

00:59  Empowering Employees with AI

Lisette: At WPP, we're not just using AI. We're building it. We've developed our own proprietary platform, WPP Open, which powers how we create content and ideas for our clients with greater speed and insight. We also work with the big tech firms to extend our capabilities even further. But one thing has been true from the start: AI only works if our people are on board. And people don't come on board because of the dashboard. They come on board because they feel safe, supported, and seen. That's why we focus not just on the tools, but on the employee experience surrounding them. And that's where our partnership with Nadia has really come into its own. 

01:41  3 Steps For Piloting Nadia: Listen, Learn, Iterate

We started by listening, because we knew we couldn't make assumptions. And what we heard was clear. Access to support and coaching wasn't consistent, especially in our service centers and at the junior level roles. Employees wanted help navigating ongoing change, restructures, new leadership, new tools. And they wanted clarity. How do my goals connect with the bigger picture? How do I stay focused and resilient in my day? 

We also agreed upfront we wouldn't get everything right the first time. So we built our approach around three principles: listen, learn, iterate. We began our pilot involving 400 colleagues around the world across our Global People team and our Malaysia-based delivery service center. Malaysia was an intentional choice for us. It's operationally vital and culturally diverse, so it was a really strong test case for scale and localization. 

The feedback was overwhelmingly positive: a Net Promoter Score of 95, 82% engagement, and strong uptake across all levels and all functions. People described Nadia as supportive, as relevant, and as something they wanted to keep using. They were surprised it was that good. That early trust gave us the confidence to keep moving forward. 
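For reference, a Net Promoter Score like the 95 Lisette cites is typically computed as the percentage of promoters (9-10 ratings) minus the percentage of detractors (0-6 ratings). A minimal sketch; the sample data below is invented for illustration, not WPP's actual survey results:

```python
def nps(scores):
    """Net Promoter Score on a -100..100 scale: % promoters (9-10)
    minus % detractors (0-6). Passives (7-8) only count in the total."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical responses: 9 promoters, 1 passive, 0 detractors
sample = [10, 9, 9, 10, 8, 9, 10, 9, 10, 9]
print(nps(sample))  # 100 * (9 - 0) / 10 = 90
```

The passives matter: a survey full of 8s scores zero, which is why a 95 requires near-unanimous 9s and 10s.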

From there, we expanded and rolled out Nadia to all our global HQ people. We prioritized teams going through large-scale change, like our enterprise technology function, where new managers would really benefit from the coaching and the support to lead through the complexity of what was happening in the business. We also worked closely with regional leaders to tailor Nadia for local context: not just language, but tone, mindset, and behavioral norms, so it would show up as relevant to people where they are. Colleagues across functions up to the C-suite are using it to rehearse challenging conversations, test responses to sensitive emails, and gain clarity, all in a private, judgment-free space. In just one of several onboarding sessions in India, over 800 of our 1,500 employees there joined. That kind of response only happens when something is truly resonating on the ground. 

We're seeing the same impact elsewhere. In Japan, for example, our CPO there shared with me that she was using Nadia to prepare for sensitive conversations around change, how helpful it's been in slowing her down, reflecting, crafting more considered responses, all in local language. This shows us that it's not just a tool for early career talent. It's something that's supporting even our most experienced leaders around the world. 

One of the most impactful applications of Nadia so far has been goal setting. It's an area that often feels like an administrative chore that comes around once or twice a year, but we've reframed it to make it useful, to help people feel energized about the goals they're setting and able to align what they're doing with the business. Employees tell Nadia who they are and what they do, and it suggests goals tailored just to them. 

04:48   Driving AI Adoption: Lessons Learned

Lisette: So what have we learned about driving adoption? Firstly, relevance matters. We didn't roll out one generic version. We really worked with teams on the ground to tailor the experience. Secondly, leaders matter. Our senior leaders are using Nadia, and they're sharing what they've got out of it, and they're sharing how they're using it with their direct reports. It's giving others permission to try. And thirdly, storytelling. It works every time. We're showcasing and lifting up the real voices of our employees, real examples, and not shying away from the parts that may need adjusting as we move forward. 

We've also uncovered barriers. One of our future-of-work experts highlighted a growing gender gap in AI adoption at a recent conference. Early data shows women are already less likely to be using AI tools, and that's concerning, often because they see it as cheating or they feel unsure how to engage with it. So we're now really focusing our work in our employee communities to address that, building confidence, breaking down myths, and ensuring we don't allow another digital divide to emerge. And we realized something else. A lot of people don't actually know what coaching is. So we're also focused on educating, not just enabling Nadia, because democratizing coaching means making it both accessible and understood. 

Another benefit: the insights. Because of the way Nadia works, we now have anonymized data on what our people are struggling with most and what they're actively working to improve. And what comes through loud and clear: it's not the technical skills. It's setting clear and measurable goals, communicating with clarity, and active listening. These are foundational human skills, but they're also critical leadership skills in an AI-enabled workplace. We're using this insight to fine-tune our L&D focus and support leaders where it counts. 

06:39   Democratizing Coaching for All Employees

Lisette: At WPP, Nadia has helped us make coaching accessible to thousands who never had it before, deliver global consistency with local nuance, and foster trust, where teams learn from each other and show up with curiosity. But we know adoption isn't the same for everyone. In the UK, we've piloted a generational bridge workshop, and what became clear is that different generations adopt AI differently. Older colleagues weren't resistant. They simply hadn't been given the space and the confidence to explore. Now that's a part of our focus, to show up where it matters across countries, across functions, genders, and generations. We're not waiting for perfect. We're learning in motion with feedback as our compass. 

As we look ahead, our ambition is to embed Nadia into everyday working life, not just the big events like performance management, but to help with the small, meaningful moments that define our work and our lives. But we'll try to listen, learn, and iterate every step of the way. This transformation isn't a one-time event. It's continuous relationships between people, purpose, and possibility, and Nadia is helping us make that relationship stronger and at scale.

Nadia x AGCO: A Global Coach for a Global Organization

Colleen Sugrue, Head of Global Learning and Organizational Capability at AGCO, shares how her team brought scalable, sustainable, and personalized coaching to AGCO's global workforce with Nadia, supporting every employee around the world in over 29 languages. Colleen has priceless advice on driving global adoption of AI coaching:

1. Find solutions that scale, take the pressure off HR workloads and budgets, and are tailored to the needs of your workforce.

2. Find your team within the organization to drive adoption, from IT, to executive champions, to comms partners.

3. Find out what works by customizing AI to meet your people in the moments that matter in your talent cycle.

Colleen Sugrue

Head of Global Learning and Organizational Capability, AGCO

Key Points

00:00   Talent Development at AGCO

Colleen Sugrue: My role is all about talent development. Right? So, basically, it's ensuring that AGCO has a pipeline of people that can do the job both today, but also the job of the future. Right? And as we all know, the job of the future is kind of a challenge to keep up with these days. Right? So I develop people. And whether that's leadership development or functional trainings or whatever that might be, we are invested in making sure that our employees have the knowledge, skills, abilities, and mindsets to be successful. 

00:38   Scaling Coaching with AI for Global Impact

Colleen: The biggest way that I feel AI has impacted my role is it's given me the ability to scale things in a way that, historically, just wasn't an opportunity. Right? I think about coaching. Right? That's the biggest piece for us. Coaching was this really small, niche kind of market that we used to have to heavily invest in. It was only available to a small number of people within the organization. But we've been able, through our partnership with Valence, to really scale this globally in a way that we can both sustain and offer great value to our employees. 

01:14   Why AGCO Explored AI Coaching

Colleen: It was this moment in time for me where we really needed to look holistically across our talent development strategy. Right? We had a lot of things in pieces. We had a lot of things in pockets, but we truly, truly didn't have this global strategy. So I was really looking at everything at the same time. Right? So today's story is about coaching, but just know that it was part of this broader story about how we really, truly scale and adopt talent development strategies around the globe. 

01:48   AGCO’s 3 Criteria for L&D Tools

Colleen: For me, there's kind of, like, three key categories that everything had to fit in. Whatever we were gonna do, it had to be scalable. So it had to reach a global audience, and it had to reach a large global audience. Right? So we're about 23,000 people globally. So we had to be able to hit that audience. 

It had to be sustainable. So it had to be something that, if we were gonna implement, it wasn't going to require more people resources, a whole lot of money. Like, it had to sustain itself long term for the future. 

And then the third piece of that is, it had to be about the employee. We had to put the employee first in the decisions that we were making and make sure that it was gonna be something that was applicable to that. 

Here's an AI solution that could speak all of the languages that I need, because we have a global audience, and we have 13 core languages within our organization, plus other languages that people speak. Right? So we needed to consider that. We needed to consider this large audience. And with AI, this coaching tool was able to customize to that employee. Right? And it wasn't just a class that I was gonna send people to. It was this always-on, always-available, 24/7 guidance that was going to be the support for the employee. 

03:14   Designing the Nadia AI Coaching Pilot

Colleen: It was really important for me, though, that I pulled different groups of people in from around the world, from different cultures, who could also test in different languages. So we had a whole, I think there were about 50 of us in that trial, and we set realistic goals: please try to use it once a week, test out the language capabilities, and we gave some prompts at that point. So, here are some conversation starters; test them with her and see how this goes. And then we would meet monthly, and we checked in. We said, how is it going? What are you learning? What are you hearing? At the end of the trial, we asked everybody basically that NPS question: would you recommend this to be used here at AGCO? In our case, the answer was absolutely. 

04:12   Encouraging Experimentation with AI

Colleen: We'd be talking and somebody would say, oh, you know, I really wish, or I gotta think through that, or I wanna talk through this. And I would say something like, hey, why don't you give Nadia a try? Why don't you try to have that conversation with Nadia and see if she can help? And without a doubt, they would come back to me, via Teams typically, with: I went and talked to Nadia, and it was amazing. Thank you so much for recommending that to me. 

04:39   Global AI Coaching Rollout at AGCO

Colleen: So I kinda laugh a little bit at this point, because the truth of the matter was, my team, we were in a position where we needed a win. Okay? Like, we needed a win that was gonna be global, that could have big global impact fairly soon, because, like I said, we didn't have a global approach. We didn't have a global strategy just yet, and so much of our work was piecemeal. We had some things in different regions and not in others, and licenses were managed all over the place. So we really took a moment and said, well, what's our biggest bet? And it was Nadia. 

So it was this moment for us where we felt like we could hit the biggest number of people. We had something globally. You know, we have our voices survey. We get feedback from the organization on what they want, and we knew they wanted more of these opportunities. So we tied straight into that. And we said, look, we heard you from our voices survey. We know that you want more development opportunities. We're working on it. Step one, here's something we have right now. And we went big with it for that reason. 

05:47   Integrating Nadia into Talent Initiatives

Colleen: So two of the major things that we did: one was implement Nadia as a way to help coach the managers based on the feedback we get from our employee engagement survey. That was a direct partnership with my peer in that space, where she said, well, yeah, we could do this, and I think we should. And we worked and said, well, here are the results of the survey. What would we want a coach to talk about? And then how can Valence partner with us to customize that? Right? 

So now it's really amazing. After the voices survey, when a manager gets the results, there's a link in the communication that says, would you like to talk about your results with Nadia and get some custom, personalized feedback on what you should do with your team and how you should action plan for this? And they can just click it, and there's Nadia, trained up and ready to support them on that. So just an amazing partnership there. Again, find your team, find your partners, because they matter. 

You know, my other peer, Doug, on the other side of the fence, talent management, we started partnering on performance conversations: how can Nadia help coach the managers to have better performance conversations? And so, same deal. We pulled in Valence. We said, this is what we're trying to do. How do we customize the tool? So now, anytime it's time for a performance conversation, Nadia is the support in those communications: hey, would you wanna prep for a conversation? Do you have one that's particularly difficult? You know, Nadia can help you role-play for that.

How to Successfully Deploy an AI Coach

It’s one thing to buy seats for an AI tool, but it’s another thing entirely to successfully integrate it into your organization. In this deep dive on AI adoption strategy, Jonathan Crookall (Chief People Officer, Costa Coffee), Jennifer Carpenter (Global Head of Talent, ADI), and Maree Prendergast (Global Chief People Officer, VML) share the use cases that led them to roll out AI coaching with Nadia for their frontline workforces, tactics for successful onboarding, and lessons learned on the path to global adoption.

Jonathan Crookall

Chief People Officer, Costa Coffee

Jennifer Carpenter

Global Head of Talent, Analog Devices

Maree Prendergast

Global Chief People Officer, VML

Key Points

00:00  AI Coaching for Leadership Development and Recruitment

Das Rush: What are the AI initiatives that you've each been leading in HR? And then kind of what has the journey been specifically with AI coaching over the last twelve months? And then I wanna go to kind of what's changed specifically in the last six. 

Jonathan Crookall: So the journey that we've been on in Costa Coffee on AI really started with using some of the normal tools like Copilot to help with efficiency in general activities: running meetings, supporting document generation. I mean, I think the Costa context is, you know, we employ just over 20,000 people across multiple markets, and where we've found the most application of Nadia right now is in our UK store manager population. 

So we've got around 1,500 store managers operating across stores in the UK. And their situation is, you know, they don't know in the morning when they're gonna come in and find out that their coffee machine's down, or Sally hasn't shown up for work today, or whatever. So being able to schedule coaching support for a store manager in a coffee shop is really tricky. 

So one of the significant advantages we've had of using the Nadia tool is just that ability for store managers to take the time when they wanna take the time to get the support that they need to be a better leader. So we've had a big focus on leadership, and Nadia has been the big sort of unlock for that level of leadership in Costa. 

The other, the other use case that we've got for a different platform is helping us with recruitment. Again, in the high-volume areas, we're using AI-based tools to support our barista recruitment at the store level. And, again, we've only just started that, but it's already generating some great benefit both for the candidate, but also for the store managers who are doing the recruitment. 

Maree Prendergast: For us at VML, which is part of WPP: WPP obviously has, you know, an enormous interest in AI. We have our own proprietary platform called WPP Open, which our employees work in and use as their operating model every day to deliver to clients, and, you know, that helps us deliver work in a more efficient way, both operationally and creatively. And so we wanted to find a way that we could use AI tools within HR as well to, you know, help take people on that journey. It was part of an adoption process, not just for this particular tool, but across the board, like a cultural shift. But specifically, when we got introduced to Nadia, we were super excited about the opportunity to integrate that into our professional development and, I'd say, leadership development strategy. 

So key for us, and probably the best use case we have of Nadia, is that we've embedded it in a program that we call Thrive at VML, which is all about career conversations. So it's those ongoing and annual career conversations. And we have a bespoke tool that sits within Microsoft Teams that our employees use, and we embedded Nadia into that app within Teams. So it's right in the middle of the workflow. We didn't use it as, hey, here's a link that you can go and check out and get some coaching. We, from the outset, wanted to embed it in the workflow. 

And I think that's probably, you know, the key takeaway from us. So as managers are looking at 360-review information, of course, they can use other AI tools to create summaries, etc., but Nadia really supports with: how do I have this conversation around this particular person's development aspirations and some of the challenges? Equally, our employees can figure out how to talk about their own aspirations, or some of the challenges that they have, or some of the goal setting that they have. And we found that to be a really engaging partner. 

Das: Yeah. I love that, in the flow of the work. And, Jennifer, how about you? ADI, what's been the journey of the last twelve months? I know you spoke at the summit last November; what's really changed specifically in the last six months for you? 

Jennifer Carpenter: At ADI, Analog Devices, we are a deeply technical workforce. We employ thousands of engineers. And over the last twelve months, we've been rapidly experimenting with a number of generative AI tools: Copilot 365, GitHub. We've actually customized our own agents within ADI to support things as simple as writing feedback in our annual performance cycle. What's really unique about Nadia and AI coaching specifically is we now have over 4,000 employees across 31 countries utilizing Nadia. And you know I'm all about measurement. 

So we introduced Nadia as just one more example of how employees can build up their skills and experience partnering with generative AI. And we've been asking them: why are you using AI coaching? What's in it for you? And what's interesting, two-thirds of that audience tell us they're using AI coaching for fresh perspectives on their professional or personal development, just having fresh perspectives. Another half are telling us they need a problem-solving partner. So they're looking to solve problems. 

And then a solid third are looking at Nadia, or AI coaching in general, as a leadership gym they can visit anytime to help them develop their leadership skills. 

Das: Just like a few bicep curls. 

06:07  Surging AI Adoption Over the Past 6 Months

Jennifer: Like, a few reps at the gym. But when you think about where we are over the last twelve months, and even the last six months, we're looking across all of these experiments as just that: reps in the gym to help prepare the workforce for what's coming next. In the last six months specifically, I've seen adoption do a bit of a hockey stick regardless of the tool that we're talking about. 

People are more willing to experiment and to use AI than they were even six months ago, and we're actually tracking sentiment within ADI. I think of this as their level of optimism that AI is going to help them improve the quality of their work or their own productivity, as well as their own confidence in their ability to use these tools. And in the last six months, we've seen a ten percent increase in positivity and agency, you know, that personal belief in one's ability to navigate the tools. So we've actually seen that increase in as little as four months. 

Das: Wow. Jonathan and Maree, is that similar to what you've also been seeing in the last six months? Or for you, what have you noticed is the difference? 

Jonathan: So I think in our case, in Costa, the way I can track it is just by adoption, just looking at how people are adopting it. And as you've said, you've heard a couple of case studies that we've shared from some of our store managers who have used Nadia to great effect. We've obviously shared those stories within Costa as well, and that in itself is generating adoption. 

I think you're right. There's this kind of sense of mystery, and what is it gonna do, and all the rest of it that goes around the topic of AI, because it's always talked about as kind of a cerebral, theoretical construct. And I think the more you can get it into practical usage, and people talking about those stories of, I used this tool and it's helped me to achieve this in terms of, you know, leadership problems that I'm dealing with, how do I generate better performance in my store, those are the stories that are getting people on board with trying it for themselves. 

Das: I have to say, more than probably any other story, I share the one of, I think, your store manager, Sharon. One of your great managers, she went to a store that was one of the lowest-performing within Costa, and over four weeks with Nadia as a coach, kind of daily sessions: how do I turn this around? How do I motivate people who are unmotivated? And within four weeks, she had taken it, in the peak holiday season, from lowest-performing to top-performing store. So just this idea of how, on a very local level, this can transform the impact with a single manager. It's just one of my favorite stories. 

Maree, how about, how about you? 

Maree: I think, similar to Jonathan, we've been looking at adoption, and we have about 2,500 people actively using Nadia on a daily basis. As I mentioned, you know, the ability to adapt to different geographies around the world has been a huge game changer for us, and being able to bring this at scale to countries far away has really been very beneficial for people. 

09:32  Driving AI Coaching Adoption: Strategies

Das: Yeah. That's amazing. And I think, you know, some of what I'm hearing in this is that, with this AI coach, you can suddenly, as HR, reach into parts of the organization that before used to be kind of outside of your reach. But even if you're reaching people, that's not necessarily the same as them adopting tools. So if you're all really pushing to reach frontline managers within the different programs that you have, what's been effective in driving adoption? 

Jennifer: What's interesting, when you ask someone to try something new, some people are game and others are greatly skeptical. So one of the first tips that I always recommend people follow, because we did it the other way and it didn't work out so well, is start with an invitation instead of an expectation that someone will be required to use a tool that's new and different for them. 

We found that when we invited people to participate, we had much greater engagement. But we also wanted to listen to those who weren't game, for whom this wasn't their cup of tea, and we asked them why. And we found three patterns in our feedback on why people weren't adopting it. By far, the number one reason people are not adopting this, or so they tell us, is time. They just don't have it. And they view adoption of any new tool as one more thing that they have to make room for. And when I say it was number one, it was, like, seventy percent of the people we asked who said time is the killer. 

So we address that in our marketing of these invitations: it saves you time. Find time back in your day by utilizing these tools to increase your productivity. So, addressing that right out of the gates. The second reason they're not adopting these tools is they're not sure how to use them, or why they would use them. So if you can clarify how these tools, how AI coaching, can help someone, that addresses the second reason. And the third is just trust. Would I trust a human to do this better than this tool? Is my data protected? Is this safe to use? So again: time, what's in it for you, and the trust factor are the three largest barriers to adoption that we found, and addressing them has made the difference for us in helping to increase adoption. 

Jonathan: I was just gonna add a couple of other perspectives on it. I think one of the drivers of adoption for us is the flexibility: the possibility and the option to choose when you wanna use the tool and how you're gonna use it, rather than having to schedule a third party to meet with you at a particular venue or at a particular time. And similar to Jennifer, one other thing that we've used in Costa is almost a sort of rarity factor. So: can I get signed up for this program? People kinda go, oh, I've not heard of this. What is it? How can I get on board? 

So I think that's also driven adoption in some of our populations, where people just get curious and they want in. In your terms, Jennifer, they're in the "why not" camp rather than "I'm not interested." 

Maree: So I think there are probably three things that I can add. The first one was really leadership-led credibility. One of the things that we did with Nadia early on, after we had done our own piloting of the tool, is that I gave it to our exec team and our CEO. And I said, you know, why don't you just try it out and see what you think? And it actually gave it a lot of credibility when we did introduce it, because we had our CEO introduce it as something that, again, was very optional for people, but an available tool. And he really framed it as a strategic asset rather than, as you said, something mandated or something that you have to adopt. And I think that overcame a lot of the potential skepticism. 

And in the local markets, finding people who are adopting, having them share some of their stories about how they're using it, and then feeding that back to the population, has created some really engaging moments. We recently had a global, what we call a career hack, like a career day where everybody could explore all sorts of different things. AI was a big theme, and so we had a lot of people share their experiences in local markets. That sort of word-of-mouth adoption has helped as well. 

14:17  Integrating AI into Talent & Leadership Development

Das: One of the other things that we've seen being really key to, like, an effective rollout of AI within HR is that it's really tied into what you're already doing with talent development, with leadership development, with performance management. And that's something I think all three of your companies have done really well. 

So I'd love to click into some of these initiatives. And, Jonathan, I'm gonna start with you. 

Jonathan: I think the thing I would point to is we are a performance-driven, you know, organization. We have numbers and data and facts flying around on an hourly basis, daily basis, weekly basis. So people are very honed in on: how can I perform better, how can I see the performance flow through? And I think that's where we've most connected the use of Nadia, particularly with our store manager population, to say: in order to help you with your performance, your manager is gonna be part of that story, but actually, we've also got this additional tool that's gonna help you drive performance in your store and drive a better experience for your customers. 

So I think that's the biggest single hook we've used within Costa. We're on this journey, which I think is not unfamiliar to many organizations, where we had a post-COVID hiatus of underinvestment, and we've now been rebuilding all of our leadership frameworks off the back of that. We've now got new tools that support every type of leadership development, and this is one that's very specific to our frontline teams.

Das: Jennifer, at ADI, you launched globally to all employees and then followed up with some specific integrations into talent moments. Talk a little bit about your journey there, with the performance management cycles and building into ADI's values in action model.

Jennifer: Sure. So, again, you can build it, but will they come? What we've done is run a monthly drumbeat of workshops that are timely and relevant to what employees and managers need to be doing. That might be performance discussions. That's already happening. Oh, hey, by the way, come to our workshop. We can show you how AI coaching can support you through that. We just rounded out our midyear point. Let's reflect on our goals. Let's refresh our goals. Let's talk to our AI coach about how we can ensure our goals are relevant. We just wrapped our engagement survey.

So our next workshop's gonna say, come talk to Nadia as you have conversations with your teams about your team engagement scores. We've also utilized Nadia's capability to make it as relevant to our employees as possible by training Nadia to speak about our culture and values in action. So Nadia knows the language we speak when we say you should be reflecting ADI values in action through your daily performance, as we consider you for advancement and as we coach you for performance.

So I do see Nadia as an extension of a preexisting talent strategy that just accelerates and allows it to hang nicely together.

17:42  Supporting Adoption Through Inclusive AI

And I think most importantly at ADI, we are a global workforce, and we have diversity of culture and language. Something that I believe has driven adoption significantly is Nadia's ability to speak to employees in their native language. And where we see super users, and we define super users as using Nadia three times more often than the average, they are engaging with Nadia in their local language.

So I think it's removing that friction of engagement in the conversation that we know Nadia and AI coaching support so well. But the even better "if" is that AI is being so inclusive and comfortable for people operating in their native language. When we widen the aperture of who's using it, we see about seventy percent are managers and thirty percent are individual contributors. But when we look at those super users, it's about fifty-fifty. Tools like Nadia are not just created for leaders or managers; we can now open up access to leaders at all levels. And we're gonna continue to study these super users, including by gender. We are seeing more women as super users than men, which is another optimistic statistic, because many studies show women are less inclined to engage and experiment with AI. Yet, as it relates to AI coaching, we're seeing the opposite.

Maree: And I'd echo all the things Jennifer was just saying. We are a global organization too; we're in fifty-three different markets. So the language capability has been very helpful, because in the past, as I mentioned before, we haven't really been able to penetrate those markets successfully with something at scale. The other thing, and Jennifer touched on it in speaking about women, applies not just to women but certainly to people from diverse backgrounds. When we went through our pilot, we tested in those areas first, because we wanted to make sure people felt extremely comfortable and that it was inclusive. And I think that has supported adoption significantly, because there is sometimes an intimidation in talking to a coach or a peer or someone else. And Nadia has proven time and time again that she's quite an empathetic coach as well.

One of those aspects is: let's have the conversation with your manager, let's have the feedback from your peers, let's make sure you really get that. Nadia has supported managers right in the flow of that career conversation work within Thrive, not only helping them use and position the feedback in a constructive way, but helping them avoid misunderstandings. And it's made it efficient. Before, that would take a lot of work for a manager to sit down and do.

And, obviously, Nadia has created an efficiency there, so they can get to more of their direct reports. It's the same with employees who might feel, I can't really talk to my manager about this, or I don't know how to talk to my manager about this. It's sometimes very challenging for people to have difficult conversations, and Nadia has really supported in creating a constructive environment for them.

21:18  AI Coaching for Change Management

And then another place: we had a mandated RTO of four days a week. In some markets, this was fine. In many, it was not. And we had a pretty short time frame of about three months to get people back and comfortable. I'm sure anybody who has tried to do this has met with the same level of resistance we met globally, for all very valid reasons. So we had an idea to use Nadia to help people understand, to give them a reflective ability, or perhaps a third party, if you like, to talk to about how they might introduce this back into their day to day. And, also, if they were having challenges, how they could talk to someone, either within HR or their managers, about it.

So we loaded Nadia with all of our RTO policies. We loaded her with our FAQs. We taught her about that, and, obviously, she was already very familiar with the values of the company. So she was able to coach on that and frame it. When we messaged RTO four days a week, we messaged it with significant flexibility. But that gets lost in the translation of an email, or of a leader coming out and saying to all people: you have to be back four days a week. And I think Nadia really helped give the perspective that there is flexibility: if there is a need for flexibility, it's there, and this is how you get it, in a constructive way, and this is how you approach it so that people can understand your particular individual circumstance.

So that was really helpful as well. It really helped people with sensitive transitions, which we hadn't anticipated originally, but she worked really well for that too.

Jonathan: There's a kind of fairness and consistency in the approach that the tool can be used for, which I think really helps to drive trust in the technology, rather than in a human who is naturally gonna have some biases and inconsistencies in the way they operate.

We've got the message across to people that this is actually a way of reducing bias. You can't remove it entirely, and people are worried: is this gonna create some bias or a difference of view coming through the tool? But we've demonstrated to people that this is a way of reducing the level of bias that we naturally carry with us. So that's one of the key messages we've used to help build trust.

24:00  Future Vision for AI in the Workforce

Das: We've talked a lot about your journey and what it's taken to drive adoption. I wanna look forward now. Twelve months from now, we're all back together again, either virtually or in person, having this conversation again. Where do you hope that you are within your organization with AI?

Jonathan: In terms of where Costa would like to be on AI in twelve months' time, I'd love to see some of the integration that's been talked about on this call happening, so that there's an interchangeability between different mechanisms, processes, and tools to support people in their development, whether that's in person, virtual, or AI. And I guess the biggest single thing will be that we're seeing performance improvement as a result of adopting the tools, whether that's my career performance or actual business performance, because, ultimately, that's the goal we all have.

Maree: For me, with AI tools in general, we hope they'll just become a normal part of workflow, that they're not something sitting outside that you might adopt here and there, or an on-demand tool if you like. They become more habitual, and that's across the board regardless of what AI tool it is. Specifically with Nadia, I would hope that she becomes more of a habitual growth partner for people.

Jennifer: I would just add, to the colleagues on the call today, that Nadia is gonna be joining our org charts. In the first six or twelve months, we see a lot of individual use cases, and I'm monitoring team use. I think success will really be where people see Nadia as part of the team interaction, not just an individual coach but a resource for the team, and we'll see performance and productivity and engagement increase as a result.

Das: Yeah. From experimenting to adoption, and now from adoption to impact and results in the next twelve months. Jonathan, Jennifer, and Maree, thank you so much for joining and sharing your stories. It's been just a fantastic conversation. So thank you so much.

How to Successfully Deploy an AI Coach

It’s one thing to buy seats for an AI tool, but it’s another thing entirely to successfully integrate it into your organization. In this deep dive on AI adoption strategy, Jonathan Crookall (Chief People Officer, Costa Coffee), Jennifer Carpenter (Global Head of Talent, ADI), and Maree Prendergast (Global Chief People Officer, VML) share the use cases that led them to roll out AI coaching with Nadia for their frontline workforces, tactics for successful onboarding, and lessons learned on the path to global adoption.

Jonathan Crookall

Chief People Officer, Costa Coffee

Jennifer Carpenter

Global Head of Talent, Analog Devices

Maree Prendergast

Global Chief People Officer, VML

Why AI Impact Starts with Managers

If you want to maximize AI’s impact at your company, start with managers. Hein Knaapen (former CHRO, ING), Lindsay Pattison (Chief People Officer, WPP), and Paula Landmann (Chief Talent and Development Officer, Novartis) unpack the new AI toolkit to support managers, unlock capacity, and ultimately drive organizational performance.

Hein Knaapen

Former CHRO, ING

Lindsay Pattison

Chief People Officer, WPP

Paula Landmann

Chief Talent and Development Officer, Novartis

Key Points

00:00  The AI Journey: Excitement to Delivery

Das Rush: Over the last twelve months, where have we gotten to, and where are we right now? Where are most organizations, or the organizations you've been working with?

Lindsay Pattison: I would say over the last twelve months, we've had three stages of our journey at WPP. WPP has about 108,000 colleagues working around the world, and we provide marketing services. The first stage was excitement, optimism, experimentation, all of which is good, because I do know some other creative companies where some of the content producers are still very nervous about AI and worried about IP, whereas our industry is actually super optimistic. So having that optimism is good. We then became very focused on mass adoption, "forced fun": we would shut offices for a whole day and really push the training of AI.

So I would say the first stage was excitement and experimentation, people being assisted by AI. The second stage was to move to more habitual use, because it's fine to experiment and be excited, but we need to be very specific about use cases by function. Creating a workflow for our client work across WPP that's enabled by AI was one. And then thinking from a business function perspective, so people, legal, finance: how do we think more specifically about functional roles and how they could be augmented by AI?

So I would say we then moved into enabled by AI, so assisted, then enabled, and now it's really about delivery by AI. Fundamentally, pieces of work, even org charts, being delivered by AI, and baking that into our absolute offering. Some people talk about a wave of AI, but I think it's a tsunami of AI about to hit us all. And we are moving into the delivery phase.

01:56  Mass AI Adoption & Change Management

Paula Landmann: At the start, we first decided which tools we wanted to put in the hands of everyone. We created our own internal ChatGPT, we agreed that Copilot would be one of them, and we also assessed which type of AI coach we wanted to proceed with. And then there were the more specialized tools that we agreed for specific groups, for particular needs they had to solve.

So, from GitHub to others that many of you probably use. And then our journey was understanding how we would get to real usage. We started by very intentionally giving licenses to people who made heavy use of, for example, Word documents or PowerPoint presentations. So we went by use. And that was very interesting, because those people very quickly became our change agents as well. That evolved into a network of now over a thousand change agents across the company. Then, intentionally, we also onboarded our executive committee members and top leaders of the company and started using it in critical meetings. And when we had leaders together, we would use it. We would have sessions around it, persona based.

So we started with very intentional change management, heavy investment in change management, with the support of an external partner, because we believed that at the start, that was really needed for people to realize the benefits and really start using it. What was interesting is we had a lot of gen AI immersion weeks and all those sorts of things. From the start, we could sense people were intrigued and trying to understand, and we slowly saw a huge change to real adoption in the day to day. We track usage of tools like Copilot, so we can see the number of hours people use per week, and we slowly became a company with heavy usage, one of the companies with more active users a week.

So I think that was the journey. And then, as people saw the benefits in business cases, it also became part of how we work. We see in development, for example, and in research, that this is part of how we solve day-to-day tasks and work. And as more people saw the impact and the benefits, everybody wanted access to the tools. Everybody started using it in meetings, and then it really became a bit more viral.

So I would say we started by trying to understand the appetite, leveraged key cohort populations to drive adoption, and now I would say it's really part of the way we work, and there are great use cases in every single area around the company. So it's a big shift.

04:25  North Star: AI for Business Performance

Hein Knaapen: I think Paula and Lindsay are probably a little ahead of the curve, and I can understand how that is shaped by the sort of work you do. So that's really interesting to see. With whatever solutions are available, on whatever part of a company's processes, we are often excited about new options, about new stuff. And that's great, because once we lose our curiosity, we're going downward. But that doesn't always make it easy to keep company performance as a North Star.

And so the interesting thing is, and of course, you don't think your way into new acting; you act your way into new thinking. So if you don't try things out, you don't know. I totally get that. And I like everything that Lindsay and Paula give as examples. But how are you sure, or how are you evolving to a point where you are clear: here are the parts of our processes where it works and has value, and here is where it's only nice to have? That's just what I'm curious about.

Das: Yeah. That brings up a couple of things, and I wanna come to this question for Lindsay and Paula too: what did you hold as a North Star through your initiatives?

Lindsay: I think the North Star, back to Hein's point, is performance. Both Paula and I work in very competitive categories. So, much more simplistically, it comes down to managers, with the majority of our workforce. We need a competitive advantage, and getting ahead in adopting and using AI is gonna help us win and have the business succeed. Simple as that.

06:06  Game-Changing AI Coaching at Novartis

Das: Paula, you've embedded Nadia within your Align initiative, which is explicitly an initiative for managers. I'd like to hear a little bit about why that initiative, and why an AI coach within it, before we talk about change management.

Paula: Yeah. So, actually, they're separate, Nadia and Align, but we have both. Align is a tool we use for team effectiveness, and it's a super simple diagnostic tool. What I love about it is it allows teams to rate themselves on habits shared by high-performing teams, and then it triggers the right conversations in the areas the team needs. We are using this now across the enterprise for all sorts of team conversations, sometimes together with the Perspectives tool, which is an additional tool. Both are Valence tools, and I would say very, very helpful for us.

Now, coach Nadia, which is the AI tool we originally piloted with a few hundred people and are now expanding to 5,000 people at Novartis, is really a game changer for us. We've had experience with it for a few months now, I think almost a year, across Novartis. And what we see, and that's part of our North Star, is that this is really focused on individual development. In essence, it helps any person who needs support in the moment they need it. They don't need to wait for the next coaching conversation if they have a coach. It's really at their fingertips. It's pretty democratized. It creates a safe space for people. They don't have to worry about what they're asking, or whether somebody on the other end is judging or will make a face of any sort.

So the feedback is really excellent. We've surveyed to measure the impact on those managers over time, and also on the people using it. We've now even embedded it, for example, in some of our leadership development interventions. We recently had a mentoring retreat for ECN executive leadership minus one, and we had an executive coach, Nadia, and the business leader. The three would coach individuals. And the feedback from participants was that many times Nadia was the most effective coach.

And then, of course, people who go through it see the benefit, they want their teams to have it, they really talk about it, and you see the effect it creates. It can be as much an opportunity for people to stop, reflect, learn, and get advice as a tool that simply nudges you, sending you an email to say, have you thought about that today? So that has been one of the most impactful tools, I would say, that we're currently using.

Lindsay: I just want to pick up on that, Paula, because you mentioned it before. As we think about adoption at scale, it's really FOMO, right? Fear of missing out. And I think it's so clever that you started with your EXCO using the tools, because everyone else needs to understand that AI isn't cheating. AI is enabling. It's assisting. It's helping you. And then everybody wants to, well, hopefully, in a high-performing organization, be better at their job. So whether that's coach Nadia really helping you think about how you have challenging conversations or develop your career skills, or whether that's Copilot helping you, simplistically, curate documents, it's really a way of being better at your job. And who doesn't wanna be better at their job?

Paula: Exactly. And Lindsay, just on this point, what I found fascinating was we very intentionally put out some videos of our EXCO members talking about where they use it. One of our EXCO members said he used to create his own objectives; he actually used Copilot for that and got some hints from Nadia. The usage of the tools after the video went out just went up drastically. It was quite a big deal.

Hein: Oh, beautiful. 

Paula: So that just shows how role modeling really plays a role in the day to day. Right? 

09:40  Supporting Overwhelmed Frontline Managers

Hein: And how to nudge people. Really, how to nudge people. That's very, very nice. But here's a perspective. I totally get how you guys, for your primary business, can make use of AI a lot. I was a bit, or even more than a bit, obsessed with the role of the middle manager. Eighty-five percent of our people report to our frontline managers. And there's this beautiful book that I can really advise you to read, from last year, by Bob Sutton, a professor at Stanford. It's called The Friction Project. He says we, the leadership of the company, are here to be the guardians of the time of our people, so they can spend their time being relevant for the customer. And then he says, in actual fact, we're the robbers of that time, because we burden them with all kinds of pet projects.

And then look at your frontline managers. I mean, dignified, respectable people, often hardly more advanced in anything than the people they lead, and we just leave them alone. We often leave them alone with all kinds of practical stuff they need to drive performance today, tomorrow, and the day after tomorrow. What I've also experienced over the past forty years is that the single most overlooked, most important driver of company performance is the skills of the manager. It's from that perspective that I look at Nadia and at AI. I'm an apprentice and a starter. I'm amazed by the functionality of Nadia, but I can also see how powerful it is, because it creates a sort of safe space, if I may, a psychological safety for the manager to ask any questions they may be afraid are too stupid to ask other people. That builds their skills, and, as a result, that builds their confidence to steer performance.

Lindsay: And to build on that point about why and how managers are overwhelmed: they're on the front line, as you've said. We looked at our use of Nadia, which thousands of people use across WPP (it was piloted first by VML, whose brilliant Maree is also speaking at the summit), across senior-level users, mid-level managers, and junior colleagues. Fifty-five percent of the use was by mid-level managers, thirty-two percent by senior managers, and the small minority by junior colleagues. Managers are overwhelmed with asks of them. And what was also interesting is that the range of things they used Nadia for was the most diverse, because we're throwing stuff at them. They're grappling. They're trying to get up the corporate ladder. These are the millennials, really, struggling with Gen Xers on top who've earned their place, and lots of super tech-savvy, ambitious Gen Z below them who can superficially become very good at everything.

So I think there's a big burden on managers, and it's our job to really support them and help them move through the organization.

Paula: If I can build on that, yeah. It's interesting, because when we look at the topics managers bring to Nadia, we also get a lot of insight into what we can do to support managers much better. Of course, we don't see any individualized data, but in the aggregate, we see, wow, managers are really trying to understand how to influence at scale. How are we supporting them in building this capability? So I think it gives insights in those directions as well.

13:10  Pragmatic AI Tools for Managers in the Flow of Work

And what I also heard a lot from our managers: we're doing these gen AI weeks to help people understand what to do, and the sessions that are really persona based, focused on tools and tips for managers, have the highest uptake. Thousands of managers are joining; I think in total we had 15,000 across the company. And really, it's because they are overwhelmed and they want to understand how to best leverage the tools available to them. If things are pragmatic and practical, they really appreciate it. It becomes hard if it takes a lot for them to learn, because that takes time they don't have. And that's one of the things we're really taking into account as we build the capabilities and skills of managers: how to really leverage the time they have, to ensure they get what they need and can be more effective in their roles. And, of course, they then end up being the role model for their people. So they're absolutely critical for us.

Hein: Yeah, I get that. And that's the point, I guess, Paula: those are, relatively speaking, micro interventions. You don't need to go to a three-day course. You can take ten minutes, and that is wonderful.

Paula: Yeah. Correct. In the flow of work, and the power of nudges. In addition to the role itself, the way they have to lead their people, and what we expect them to do in leading their people, has drastically evolved. And this is where we hear from managers that having tools at their fingertips that increase their productivity and help them unlock hours to be more innovative is really helpful. And tools that help them, again, with self-reflection, help them prepare, for example, for development conversations. At times, it's the simple things.

But I was talking to a manager just yesterday, and she told me: listen, I sometimes had to think, where do I find again the tool to help me have a difficult conversation or host a development conversation? You guys now have not only Nadia, who supports me, but also, in Copilot, all these prompts that I use. And I'm so ready for it, because it's all automated, and I have it in front of me when I'm having those conversations.

Lindsay: The key factors, and the reasons why we're using Nadia very specifically as a tool to help managers, are the democratization of coaching, and everyone talks about the value of the safe space, the testing. The reason why senior people often get to the top is simply experience: they've had experiences, and Nadia allows you to shortcut and role-play experiences. So it's really the democratization of something that's been a tool available only to very senior people, which is why we love it. Speed and democratization.

15:37  Measuring AI Impact on Workforce & Performance

Das: What does it take to go from "AI is going to transform your workforce," these general vague terms, to "this is how we did it, this is how performance management changed, with x impact"?

Lindsay: We paid a lot of attention to strategic workforce planning: the shape of the organization, the number of colleagues we'll have, and trying to be very specific about exactly what AI will unlock. And I think the more specific you can be, the better. So we've created our own LLM. We put in 6,000 different roles. I mean, way too many, I know. And then we broke down and looked at every single role to say, based on the AI tools available now, what would be the capacity unlock? Because otherwise, people talk in very, very vague terms. Someone will say ninety-five percent of marketing can be done by OpenAI. Obviously, I would say that's nonsense. But we did a lot of very detailed work, and we understand there's a capacity unlock from AI, not necessarily time saving, of twenty-three percent. But that varies wildly, from, say, sixty percent for somebody in payroll to below one percent for a clapboard operator at the start of a shoot, because we still need a human to do that.

So actually being really specific has helped us. And we did another study just recently which showed it was about twenty, twenty-one percent. That's great knowledge to have, but what we need to move to is directing and guiding targets against that capacity unlock, and then thinking about what we do with the time saved. What are these mysterious high-level activities we're gonna enable our amazing colleagues to do, because we've taken away some of the drudgery? So we have the specifics; we have very directional data. We now need to be slightly more prescriptive about actually gaining that unlock back, turning it into a commercial model, driving performance, and then thinking about what else we can do. Because if nothing else, we're always super entrepreneurial. So what can we now do, versus what can we do more efficiently? What do we do now? What's unique to the human? What's creative?

Paula: I think for us, what we're also trying to understand is the impact in different parts of the business. In our space, for example, the way operations or technical operations uses AI is quite different from the way somebody in sales or in research and development uses it. So we're being quite specific in measuring the impact in the different areas. And I can say, so far, what we've seen is that it varies; it's not the same. On average, what gets reported, and again, it's self-reported, is that people save at least four hours a week with the tools they're provided. It started at two hours a week and is now four, so we want to see the trend and how this increases.

And to Lindsay's point, we're also focusing a lot on helping our people understand that it helps with productivity, but it should also help them use this saved time to do other work, to innovate. We now have almost 40,000 users of Copilot, for example, and when we survey them, eighty-nine percent say they feel more productive and seventy-six percent feel more creative, which is also very important for us. It's part of our culture aspiration: we want people to feel inspired at work. So that is also something we believe is really important.

And the last thing I would call out is we're really tracking some of the use cases in the business and very intentionally investing in use cases where there's a clear business need. So for example, in our development space, what we're doing with Copilot is really using it to summarize complex clinical trial data. Or in research, taking raw research inputs and making a very nice, polished presentation out of them. So being very intentional about those use cases, and seeing them actually blooming in so many parts of the company, is the other piece I would say we're heavy on. 

So I'm happy to say we're moving from "AI saved me time" to starting to hear in the surveys that "AI helped me lead my team more effectively" as well. That's the journey we're on. 

19:50  How Will the Best Companies Be Using AI in 12 Months?

Das: What are some of the things we're observing now that hint at where we're gonna be twelve months from now? And then I'm gonna fold into that: where do you hope to be twelve months from now? 

Paula: If I were to describe the future twelve months from now, I would say that people will stop calling it AI and just call it the work, the work we do. I think the best companies won't treat AI as a separate work stream. They'll really embed it into everything the company does, from onboarding, performance, coaching, and leadership to the day-to-day of the business areas as well, no matter which part of the business it is. And I think the companies that get it right will really be the ones where we hear from managers that they don't just use the tools, but that the tools helped them have everyone around them believe they could use them too, as part of their day-to-day, in the flow of work. 

Lindsay: I think what companies, including ours, and anybody in the audience needs to really consider, not to bring a downer, is not what can be done by AI, because almost everything can be done, enhanced, or augmented by AI in some way, but what should be done by AI. So the larger macro piece is the ethics of AI: thinking about the values of the company you work for, and thinking responsibly about how you use AI in the goods and the products and the services that you offer. 

So I'm not worried about tools like Nadia, which are just amazing and super effective, and which we really hope are gonna help people be better managers, better leaders, and create a happier, more productive, efficient, and effective workforce. But there are other aspects of AI that I think we should be a little attentive to, thinking about the values we've always held as a business and how we use AI going forward: not what can it do, but what should it do for our business. 

Hein: Yeah. And I'm not sure whether twelve months is possibly a bit too short. In the slightly longer run, I think the companies who will have done better are those that have tightly linked it to business value metrics. Of course, we are now in discovery, and it's great that people get to know it and feel familiar; that is a necessary and very useful step. But in the longer run, if it's not linked to well-defined company performance metrics, it will have been another fad. That's what I'm a bit afraid of. 

Lindsay: It'll be another Metaverse, if we remember that. Do you remember that? I don't think so. I think this one's here to stay. 

Das: Yeah. I think so. And I think we've traced a nice arc here, maybe along the Gartner Hype Cycle, for where we are, and we'll see where we are twelve months from now. Wonderful. Lindsay, Paula, Hein, thank you so much for joining.

Why AI Impact Starts with Managers

If you want to maximize AI’s impact at your company, start with managers. Hein Knaapen (former CHRO, ING), Lindsay Pattison (Chief People Officer, WPP), and Paula Landmann (Chief Talent and Development Officer, Novartis) unpack the new AI toolkit to support managers, unlock capacity, and ultimately drive organizational performance.

Hein Knaapen

Former CHRO, ING

Lindsay Pattison

Chief People Officer, WPP

Paula Landmann

Chief Talent and Development Officer, Novartis

What Leaders Need to Understand About AI

What’s the most overused word when it comes to AI? According to Geoffrey Hinton, one of the inventors of the modern LLM, it’s “hype”: AI is under-, not overhyped. He explains the power of AI coaches and assistants in healthcare, education, and the workplace, and what leaders most need to understand about the technology.

Geoffrey Hinton

Nobel Laureate, "Godfather of AI"

Key Points

00:00  Personal Assistants as Proxies

Parker Mitchell: When we last chatted, I think it was November or maybe October, you were two weeks into the university giving you a personal assistant. Can you share more about what it's been like having that, and maybe we can extrapolate to everyone having the equivalent? 

Geoffrey Hinton: So fairly recently, I woke up earlier than usual. Up until that point, I'd been thinking maybe I don't need the personal assistant anymore, because when I look in my mailbox, there are only a few things to be dealt with. But the morning I woke up early, I discovered there were hundreds of things to be dealt with, because my personal assistant was just dealing with them. That was kind of essential. 

00:48  Learning Personal Preferences

Parker: And when you look at how she has learned about you, the way you might answer questions or how you would assess a situation, what has it been like as she's gotten to know you better? 

Geoffrey: It's been good. She's getting much better at knowing which questions I want to answer myself, which talks I might be interested in giving, and which talks I'm definitely not interested in giving. She can pretty much recognize my former students. 

To begin with, one of my former students would send me mail, and they'd get a very polite answer saying I was busy. And I remember students telling me, "I got this answer from you. It didn't sound like you." So now I tell my students, if you ever get a really polite answer, that's not me. 

01:38  AI as a Proxy & Specialized Assistants

Parker: You don't have time to write the full polite answer. So in some ways, she's acting as your proxy. She's learned how you see the world and is acting as a first filter. How would AI develop that ability to act as a proxy, to help people navigate how work and life might change as AI is able to automate more things? Do you see a world where people will have many different specialized assistants, or just one that knows them? Any thoughts on that? 

Geoffrey: It's a very good question. Why would you train one big neural net to do everything? Because that's more efficient in the long run; you can share what different tasks have in common. So there's always this tension between having a small neural net specialized to one thing, which doesn't see much training data, and one big net trained on everything. If you have enough training data, the specialists are a sensible thing to do, and we do have huge amounts of training data. 

It's quite sensible to have many small neural nets, each of which is only trained on a tiny fraction of the training data, and a manager who decides which neural net should answer each question. If you don't have that much training data, it's typically better to have one neural net that's learned on all the training data. And then, after you've trained on all the training data, you might fine-tune it to be a specialist in different domains, and that seems to be a good compromise. Train one neural net on everything, and then fine-tune it for particular domains. 
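To make that compromise concrete, here is a toy numpy sketch (the linear model, the gradient-descent trainer, and the two invented "domains" are purely illustrative assumptions, not anything from the conversation): train one model on the pooled data from both domains, then fine-tune a copy of it on each domain separately.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "domains": the same kind of task (y = w.x) with different true weights.
w_a, w_b = np.array([2.0, -1.0]), np.array([1.5, 0.5])
Xa, Xb = rng.normal(size=(200, 2)), rng.normal(size=(200, 2))
ya, yb = Xa @ w_a, Xb @ w_b

def train(w, X, y, steps=500, lr=0.1):
    """Plain gradient descent on mean-squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# 1) Train one model on ALL the data (the "one big net" regime).
X_all, y_all = np.vstack([Xa, Xb]), np.concatenate([ya, yb])
w_shared = train(np.zeros(2), X_all, y_all)

# 2) Fine-tune a copy of the shared model on each domain separately.
w_ft_a = train(w_shared.copy(), Xa, ya, steps=200)
w_ft_b = train(w_shared.copy(), Xb, yb, steps=200)

# Each fine-tuned specialist fits its own domain better than the shared model.
print(mse(w_shared, Xa, ya), mse(w_ft_a, Xa, ya))
```

With lots of per-domain data the specialists would win outright; with little, the shared model's pooled knowledge dominates, which is exactly the tension described above.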

Parker: I mean, it sounds like if I look through the history, people said, you know, it might do this, but it won't do a, b, c. And I think your answer sounds like it could be just a matter of time and scale, maybe data. 

Geoffrey: Go back ten years and take anything that people said it couldn't do. It's now doing it. 

03:18  How AI Helps Doctors & Patients

Parker: And so if we fast forward ten years into the future, obviously the implications for society are huge. But on the positive use cases, health care is one. Tell us a little bit about why that is so personally important to you and how it could evolve over the next, let's say, five years. 

Geoffrey: Think about what a family doctor does, the sort of first line of care. The family doctor knows quite a bit about you, maybe knows something about your family, maybe even knows a few things about your genetics. But she's only seen a few thousand patients; almost certainly, she's seen fewer than a hundred thousand patients in her life. There just isn't time. An AI doctor could have seen the data on millions of patients, hundreds of millions of patients, and could also know a lot about your genome, a lot about how to integrate information from the genome with information from tests. So you're gonna get much better family doctors with AI. And we're gonna get all sorts of things like that for CAT scans and MRI scans, where AI can see all sorts of things that current doctors don't know how to see. 

Parker: I brought up that example to a doctor who had looked at the interaction between radiologists and AI, and there were a few different scenarios. 

So one is, you know, AI is confident and the doctor is confident. Same diagnosis, obviously easy. 

Geoffrey: If not, I would trust the AI. 

Parker: So they were doing a study on this. But what was interesting is if a doctor is, you know, confident it's x and AI is confident it's y, the doctor chooses to go with their own diagnosis. 

Geoffrey: Fair enough. 

Parker: Now if the doctor is not confident and the AI is also not confident, the doctor chooses the AI's answer, with the very human thinking of, well, if I'm not sure, I can blame the AI for being wrong. I just thought the human nature of that feels so real and so dangerous at the same time. 

Geoffrey: Yeah. I think that's telling us more about human nature than about what the optimal strategy is. 

Parker: Absolutely. And ways that we might misuse AI in the human-AI interaction. 

Geoffrey: The thing I know a bit more about, from a paper that's more than a year old now, is this: you take a bunch of cases that are difficult to diagnose. This isn't scans; you're given the description of the patient and the test results. And on these difficult cases, doctors get 40% of them right, an AI system gets 50% of them right, and the combination of the doctor and the AI system gets 60% right. 

And if I remember right, the main interaction is that the doctor would often make mistakes by not thinking about a particular possibility, and the AI system will raise that possibility. It'll have a list of possibilities. And when the doctor sees those possibilities, the doctor will say, oh yeah, the AI system is right there; I didn't think about that. That's one way in which the combination works much better. The AI system doesn't fail to notice things the way a doctor often does. So it's already the case, and this was more than a year ago, that the combination of AI system and doctor is much better at diagnosis than the doctor alone. 

Parker: And what it sounds like the AI is doing is generating a scenario-specific checklist. Here's a range of different things, and it can do that very quickly, and a doctor can just look at it and go, no, no, no, oh, maybe this. It lets the doctor apply quick system-one intuition to most of the list and then pay closer attention to the possibilities that seem important, rather than doing difficult system-two thinking across every possibility. 

Geoffrey: Yeah. That's certainly one of the things that's going on. The other thing that's going on, of course, is you get the ensemble effect. If you have two experts who work very differently and you average what they say, you'll do better than either alone.
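The ensemble effect he mentions is easy to simulate. In this sketch, two hypothetical "experts" (the simulated predictions and noise levels are invented for illustration) make independent errors around the same ground truth; averaging them roughly halves the mean squared error, because independent errors partially cancel.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.zeros(10_000)  # ground truth for a batch of hypothetical cases

# Two "experts" whose errors are independent because they work very differently.
expert_1 = truth + rng.normal(0.0, 1.0, truth.shape)
expert_2 = truth + rng.normal(0.0, 1.0, truth.shape)
ensemble = (expert_1 + expert_2) / 2  # just average what they say

def mse(pred):
    return float(np.mean((pred - truth) ** 2))

# Averaging independent errors roughly halves the mean squared error.
print(mse(expert_1), mse(expert_2), mse(ensemble))
```

If the two experts made the same mistakes (correlated errors), the gain would shrink, which is why "work very differently" matters in the quote above.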

07:13  Personalizing Education & Healthcare with AI

Parker: Anything that's processing vast amounts of data, finding patterns and similarities, and then identifying promising candidates for humans, in that collaborative model you mentioned, that's gonna power things. 

Part of that leads to my next topic, which is personalization. We're in, or will be in, a world where your biology is different from mine, and from someone else's, so intervention on the medical side can be more tailored to each of us. Is there research currently going on around how that might change health outcomes? 

Geoffrey: I believe there is. I don't know as much as I should about this. But for example, in cancer, you'd like to use your own immune system to fight it, and you'd like to help your immune system recognize the cancer cells. There are many ways of doing that, and I think AI is already being used to choose which things to mess with. 

Parker: Are most likely to work for your particular area. 

Geoffrey: So that would be individual therapy based on AI. And then, obviously, in education, AI is gonna be very useful. Again, it's gonna be individual therapy for misunderstandings. An AI system that's seen thousands or millions of people learning about something, and the different ways different people misunderstand it, will be very good at recognizing, for an individual person, oh, they're misunderstanding in this way. That's what a really good teacher can do: they're misunderstanding this way, and here's an example that will make it clear to them what they're misunderstanding. 

AI is gonna be very good at that, and we're gonna get much better tutors. We're not there yet, but we're beginning to get there. And I'm now happy to predict that in the next ten years, we'll have really good AI tutors. I may be wrong by a factor of two, but it's coming. 

Parker: You mentioned on the AI tutor side of things for students. I think there was a study that you referenced about how much better the outcome is when people get individualized tutors. 

Geoffrey: Yeah. I don't have the citation for it, but the number I remember quite well, and I've seen it quoted elsewhere too, is that you learn about twice as fast with a tutor as in a classroom. And it's kind of obvious why. First of all, your attention doesn't lapse. You're interacting with somebody, so your attention stays on it. You don't just stare out the window and wait 'til the lesson ends. I spent a lot of my time at school doing that. 

Secondly, the person's attending to you, can see what you're getting wrong, and can correct it. In a classroom, you can't do that. So it's sort of obvious why a human tutor is gonna be much more efficient than a classroom. An AI tutor should be better than a human tutor eventually. Right now, it's probably worse, but getting there. My guess is it will be three or four times as efficient once we have really good AI tutors, because they will have seen so much more data. 

Parker: There's probably another element too, I would guess, which is motivation. I'm sure for you and many students, if it was an interesting topic, if it was framed in a way that captured our curiosity, we'd pay more attention. I guess AI tutoring will be able to do that at mass scale. 

Geoffrey: Yeah. For most of us, interacting with other people is the most important thing there is and the most motivating thing there is. And I think AI tutors will be pretty motivating. Even though they're not people, you'll get the same kind of effect of someone paying attention to you and telling you interesting things. It will be very motivating. 

Parker: And 30 kids in a class might have 30 different things that are quote, unquote interesting to them, and AI tutoring will be able to tailor it to them. 

Geoffrey: Yeah. 

10:52  AI & People Symbiosis

Parker: So as you know, what we're doing at Valence is building an AI leadership coach. The goal is to help personalize that learning and guidance at work. We were talking to an education company, and they said it's such a shame that everything we've learned in education about how to help people learn concepts seems to fly out the door the moment they step into the work world, and they're mostly left on their own to learn. 

So we're excited about that. Can you share how you can see that thread of learning continuing throughout someone's career and not just ending when school's over? 

Geoffrey: So I would relate this to the longer-term development of AI. AI is gonna be used everywhere, and it's gonna get to be very intelligent. If we can reach a situation where we get a symbiosis between people and AI, AI is gonna make the world much more interesting for people. Mundane things will just be done by AI, and this symbiotic relationship will allow people to learn much faster, have much more interesting lives. That's the good scenario, and I'm hoping we can get there. 

12:01  AI's Economic Impact

Parker: How should policy makers and CEOs be thinking about and paying attention to the wide range of outcomes that could emerge? 

Geoffrey: This very quickly gets you into politics, because what's gonna happen is mundane intellectual work is gonna be done by AI, and that's gonna replace a lot of jobs. In some areas, that's fine. In health care, for example, if you could make doctors and nurses more efficient, we could just all get more health care. There's a more or less endless capacity for absorbing health care. We'd all like to have a doctor on the side who you can ask about all sorts of minor things you wouldn't bother your own doctor with, but you're quite interested to know why your finger hurts today, and stuff like that. 

Health care is great because it's elastic; you can absorb huge amounts of it, so it's not gonna lead to joblessness there. But there are other things where there's just only so much of it you need, and it's gonna lead to joblessness there, I believe. Some people think it won't. Some people think it'll create new jobs. I'm not convinced. I think it's gonna be more like this: people used to dig ditches with spades, and now people who can dig big holes in the ground with spades aren't in much demand, because there are better ways of doing it. 

The worry is you'll get a big increase in productivity, which should be good, but the increase in goods and services that you can get from that big increase in productivity won't go to most people. Many people will get unemployed, and a few people will get very rich. That's not so much a problem with AI. It's a problem with AI being developed in the kind of society we have now. 

13:47  Techno-Optimism: Competing Views for the Future

Parker: So what would you say to the techno-optimists? Everyone can see a scenario in which AI takes the mundane off your plate, gives you personalized learning, personalized tutoring, and supports you as you navigate this transition. But you're saying our social and political setup is not going to lead to that outcome. So how would you square that circle? What advice would you give to people who just say it's gonna work out? 

Geoffrey: Yeah. My first piece of advice would be, do you believe that because it's convenient for you to believe that, or do you really believe it? Now, people are very good at believing whatever is convenient for them. I've seen a lot of that recently. I just think they're being very shortsighted. 

Parker: And if someone was self-aware enough to say, okay. I recognize that this might be convenient for me, and I'm willing to ask myself a question or two. What question would you want them to ponder? 

Geoffrey: One big question is, should AI be regulated? And I think regulation is gonna be essential, if we're gonna avoid some of the really bad outcomes. 

14:56  Media Coverage of AI

Parker: Thinking of the media: if you had a magic wand, what's one change you would make to how they portray or cover AI? 

Geoffrey: It's interesting. I haven't thought about that because I don't have a magic wand. But I wish they'd go into more depth so that people would understand what AI is. People have used ChatGPT and Gemini and Claude, and so they sort of have some sense of what it can do, but they understand very little about how it actually works. And so they still think that it's very different from us. And I think it's very important for people to understand it's actually very like us. 

So our best model of how we understand language is these large language models. Linguists will tell you, no, that's not how we understand language at all. But they have their own theory that never worked; they never could produce things that understood language using their theory. They basically don't have a good theory of meaning. These neural nets use large feature vectors to represent things, and that's a much better theory of meaning. So I wish the media would go into more depth to give people that understanding. 

16:11  AI & Policy

Parker: If people did understand that, how do you think it would adjust the lens through which they view AI and the policy importance of regulating it? 

Geoffrey: I think they'd be much more concerned and much more active in telling their representatives we've got to regulate this stuff and soon. And in fact, people have talked a lot about will AI be able to regulate AI? I think that's wishful thinking. I think that's about as hopeful as having the police regulate the police. 

Parker: We've talked to some scientists who've been part of trials where AI generates concepts and scientists evaluate which ones seem the most promising. And it seems like a more effective way of making progress.

Geoffrey: Right now, yes. Right now, having AI suggest things and people make the final decision seems pretty sensible. I don't think it'll stay like that. 

17:07  Superintelligence and Creativity

Parker: Then it will continue to go up the ladder and get better capabilities. And what is superintelligence? Explain that to a layperson.

Geoffrey: It's better than us at more or less everything intellectual. If you have a debate with it about something, you'll lose. 

Parker: And what about creativity? What about those things that we consider essentially human? Will it be just as good as us? A thousand Picassos? 

Geoffrey: Maybe that'll come a bit later. Many people have suggested that because it's not mortal, it has a different view of things. But the idea that it's not creative, I think, is silly. It is creative. It's already very creative. It's seeing all these analogies, and a lot of creativity comes from seeing weird analogies. 

17:48  AI & Subjective Experience: A New Model

Parker: Is the LLM or the AI that we have today conscious? 

Geoffrey: I would rather answer a different question. I know this sounds like being a politician, but there are three things people typically talk about: is it sentient, is it conscious, does it have subjective experience? They're all obviously related. There are a lot of people who say very confidently, it's not sentient. And then you say, what do you mean by sentient? And they say, I don't know, but it's not sentient. That seems to me a silly position to hold. 

I would rather talk about subjective experience, because I think it's clear that almost all of us have a wrong model of what subjective experience is. Suppose I have a lot to drink and I say, I have the subjective experience of little pink elephants floating in front of me. Most people think the words "subjective experience of" work like "photograph of." If I have a photograph of a little pink elephant floating in front of me, you can ask where the photograph is and what it's made of. 

So if you think "subjective experience of" works like "photograph of," then you can ask, well, where is this subjective experience and what's it made of? A philosopher will tell you it's in your mind, which is a kind of inner theater that only you can see. So let me give you an alternative model of what the words "subjective experience of" mean. I believe my perceptual system is lying to me. 

So I say to you: my perceptual system is lying to me, but what it's telling me would be true if there were little pink elephants floating in front of me. Okay. So I just said the same thing without using the words "subjective experience." What I'm doing is trying to tell you how my perceptual system is lying to me. We think there's this inner theater. There is no inner theater. The inner theater is as wrong a view of what the mind is as the view that the Earth was made six thousand years ago is of how the real world works. Almost everybody has this wrong view. They think there's an inner theater with funny stuff in it that only they can see. That's just rubbish. And once you see that, you see that a multimodal chatbot already has subjective experience. 

So I'll give you an example. Suppose I have a chatbot that can see, has a robot arm, and can talk, and I train it up. I put an object in front of it and say, point at the object, and it points at the object. Then I put a prism in front of its camera when it's not looking. Now I put an object in front of it and say, point at the object, and it points off to one side. And I say, no, the object's not there; it's straight in front of you, but I put a prism in front of your lens. And the chatbot says, oh, I see, the prism bent the light rays, so the object's actually straight in front of me, but I had the subjective experience that it was over there. 

Parker: Fascinating. 

Geoffrey: That is the chatbot using the words "subjective experience" in exactly the way we use them. It's saying: my perceptual system was lying to me because of the prism, but if it hadn't been lying, the object would be over there. 

21:04  The "Manhattan Project" for AI Alignment

Parker: If you had a Manhattan-style project to try to address some of the challenges of artificial intelligence, either socially or from a research or regulatory perspective, what would that Manhattan Project be? 

Geoffrey: Oh, I think there's one really essential question we need to figure out in the long run. There are lots of short-term things we need to do, but in the long run, we need to figure out: can we build things smarter than us that never have the desire to take over from us? We don't know how to do that, and we should be focusing a lot of resources on it. 

Parker: So alignment is at the core of that Manhattan Project. Is there any KPI, and I know that's gonna sound mundane, that we could track to say we're making progress on these alignment questions? 

Geoffrey: Well, my main worry about alignment is: how do you draw a line that's parallel to two lines at right angles? It's kinda tricky. And humans don't align with each other. 

22:10  AI Concepts: Probability Distributions

Parker: Is there a concept that is really important for people to grasp that is hard for you to explain in a way that a layperson can viscerally understand it? 

Geoffrey: I think often it's to do with probability distributions. People find the whole idea of a probability distribution hard to understand, hard to think of as a thing. In a large language model, you give it a context and it's trying to predict the next word, and it has a probability distribution over words. People find that hard to grasp. 

Parker: And it's crucial, because that's the science.

Geoffrey: And it's perfectly straightforward if you understand probability. But unless you understand the idea of a probability distribution, you can't see that changing the weights in the neural net, the connection strengths, changes the probabilities it will assign to all the various words or word fragments. That's a concept ordinary people find difficult to grasp. 
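As a minimal illustration of the idea (the four-word vocabulary and the logit scores here are invented for this sketch, not from any real model): a language model's output layer turns per-word scores into a next-word probability distribution via softmax, so nudging any connection strength, and hence any score, reshuffles every word's probability at once.

```python
import numpy as np

# A language model's last layer turns per-word scores (logits) into a
# probability distribution over the vocabulary for the next word.
vocab = ["cat", "dog", "sat", "the"]      # invented toy vocabulary
logits = np.array([2.0, 1.0, 0.5, 3.0])  # invented scores from the net

probs = np.exp(logits - logits.max())
probs /= probs.sum()                      # softmax: all positive, sums to 1

# Because of the shared normalization, changing one connection strength
# (and hence one logit) changes the probability of EVERY word at once.
for word, p in zip(vocab, probs):
    print(f"{word}: {p:.3f}")
```

The distribution is "a thing" in exactly the sense he describes: a single object assigning a probability to every possible next word, which training reshapes by adjusting weights.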

23:02  AI is Underhyped, Not Overhyped

Parker: What would you say is the most overused buzzword in AI right now? 

Geoffrey: Well, the most overused buzzword by critics of AI is definitely hype. So for years and years, we've been saying AI is overhyped, and my view has always been that it's underhyped. 

Parker: I think that's a very important message to get out to people. I've seen that same thing: oh, there are hallucinations, AI is never gonna catch up. We've talked about the rough edges of the technology. There are always rough edges, but you have to look at the central engine of it, and the possibilities there are so powerful. 

I really appreciate the conversation. It's been enlightening. I enjoyed it so much, and I know that our viewers and listeners will as well. So thank you. 

Geoffrey: It was a lot of fun.


HR Is Now R&D

In this powerful conversation with Ethan Mollick, Professor of Entrepreneurship at Wharton and best-selling author of Co-Intelligence, one thing became clear: AI is already reshaping work, organizations, and leadership, and HR is now the R&D lab for that change.

As companies race to adapt and adopt, Ethan argues that HR leaders have an unprecedented opportunity (and responsibility) to guide the workforce through the AI transition. That requires urgency, but it also requires a clear vision for how to bring AI into the workplace. And the only way for leaders to set that vision is real, hands-on experience with frontier models and AI work coaches like Nadia.

Ethan Mollick

Professor of Entrepreneurship, The Wharton School

Key Points

00:00  AI's Immediate Impact on Work & Organizations

Parker Mitchell: Ethan, it's such a pleasure to welcome you today. I've been an avid reader of your book, Co-Intelligence. You've got it next to you; I've got it in front of me. And I think the topics you're raising are so important for our CHRO audience. So first, welcome and thank you. 

Ethan Mollick: Thanks for having me. I'm thrilled to be here. 

Parker: The first idea I want to explore is how much change we think AI is going to bring to the work that we do and the organizational structure that has been our construct for, you know, the past generation or two. What's your take on how much change this is going to bring, both to work and to organizations? 

Ethan: So I think people tend to think of AI as a future thing. My argument would be, and I work with lots of companies and do lots of research and talk to all the labs, that everything we need for massive changes to work is already there with the current AI models. We don't need more advancement, and I think we're gonna see changes in all aspects of work. It's not gonna happen instantaneously; it'll happen over time, but much quicker than people are expecting. A lot of what you'd call analytical work is already going to change deeply. That means we're going to change organizational structures, how we approach management, how we approach coaching and helping and tutoring. There's just a wide variety of changes. We've never had an intelligence prosthesis before, and now we do. We can summon a form of intelligence on demand. That changes how organizations work. 

01:31  Overcoming Hurdles to AI Organizational Change

Parker: And so if we're at a stage where the base technology, the foundational models, and the LLMs have the capabilities now, what are the hurdles to the bigger changes? What are the steps that are gonna need to happen for this to begin to impact the organization? 

Ethan: So the issue is that, I have a sort of three-part model that no one should take too seriously about how you think about AI and organizations, which is you need leadership, lab, and crowd. And so leadership is that you actually need senior leadership to articulate a vision of what the world looks like with AI. How do they wanna transform their work? They need to be users of AI to understand enough of how to operate this. They need to get a sense of how these systems work, but they also need to think about what the reward systems for using this are. Why would people wanna show that they're using AI, you know, and how are they doing it? 

Then we talk about the crowd, which is how you're getting everybody using AI systems. So that could be whether we're talking about the kinds of things that you guys are building at Valence, right, like actual deployed, packaged systems, or whether they're using chatbots on their own, but how we incentivize them to use this, because people are hiding AI use everywhere. People are turning to AI all the time as a coach, as a help with work. They're just not telling management about it because they don't want people to know they're using AI. They're worried about the job outcomes if they use it. They're saving time, and they don't wanna give that time back to their companies. So you wanna incentivize the crowd to do stuff. 

And then you finally need a lab. You need to be experimenting about what the future looks like in building new tools and bringing in nontechnical people from the crowd who are really good at AI to do transformation inside your organization. 

03:05  Leadership's Role in AI Transformation

Parker: And I wanna double click on each of those areas. So on the leadership side of things, when I talk to CHROs, the memo from the CEO of Shopify, which I know you cited, was one of the things that got a lot of attention, about saying, you know, we just need to move from "let's explore it" to "I'm gonna mandate it." How influential do you think that memo's been? And do you think it's missing the mark in any ways? 

Ethan: So, if you look at that, and Duolingo was the other one, I think, you know, I think it helped show people a model. Everyone always wants a model for what to follow, which is tough in a revolution because you kinda wanna carve your own path out while following in some areas, revolutionizing others. I kinda don't like the memo. Right? I think what it does is establish urgency that we have to do something about AI, but it doesn't articulate a vision. There's no what to do about it. It's just use AI and everyone becomes more efficient, and then that, you know, changes how we do things. 

I expect leadership to articulate a vision of the future. I want them to be able to show you why you, as an individual worker, would want to transform your work with AI. What does my job look like? How do I get rewarded for using AI? What happens if I use it badly? Am I punished? What does my job look like a year from now when I come into work? And I think urgency is great, but I'd hope for a vision as well. 

04:28  AI & Employee Motivation: Beyond Efficiency

Parker: So on that individual motivation note, I think there are potentially two different paths. There's a positive one, and there's a side that might not be as great. We talked to one of the people deploying our coach, the CHRO of an 80,000-person company, and she said one of the challenges, now that they've experimented with it, is that people spend more time verifying output than they do creating something. It's much more efficient, but it's not as motivating. And I heard about a similar study with researchers in the drug lab world, in pharma companies. They don't wanna just review the output of a hundred different ideas that AI will generate. Their value, their identity, is coming up with their own ideas. 

Ethan: So I actually wanna push back a little, because we actually don't know that much about the motivation piece. There's a bunch of early studies on radiologists and other people using AI, but they were using old-school machine learning forms of AI. Right? It's really important: even though the name is the same, they're very different. And that created algorithmic aversion, because what would happen is the AI would just give you an answer to a question, and you'd feel, as a doctor, that it was overriding you. 

Interestingly, the other study, which was on scientists using AI, turned out to be fake. There's a big scandal about this. It's the MIT study, and that's the only one we have of high-end AI use showing it's demotivating in the kind of way that we're talking about here. That doesn't mean it isn't. Right? And I think there's reason to worry about constructing work and meaning, but I think we don't know a lot. What we actually find in some of our studies is the opposite of algorithmic aversion, which is that people who use AI are actually much happier, because they delegate out the stuff they don't wanna do rather than the high-end stuff. 

So I think that this is where it really falls on things like the CHROs to think about how do we think about motivation in the world of AI rather than view it as like an on or off switch. There's a lot of different ways to interact with these systems that might be motivating or demotivating. 

06:35  CHROs as AI R&D Leaders

Parker: I think that's such an important point. So it's really unpacking what it is that people find motivating about their work. How do we amplify that? How do we use AI to take away some of the things that are drags on their motivation? Using that framing feels like a much richer approach to articulate as a leader. 

Ethan: And it fits into a much larger issue that I think really lands on the laps of CHROs overall. I think that you are the place for somebody to stand and the lever to use to change the world. Right? Where is innovation coming from? HR is R&D now, because the people in your organization are gonna figure out how to use AI. There's the skilling problem we're all about to hit, which is that everyone's using AI to do their work, so they're not learning anymore at the intern level. That's going to land first in the laps of CHROs. The idea that we now have this new set of capabilities for education, for coaching, that's landing first with CHROs. 

So, like, I think that the leverage point for organizations is the HR function at this point. And the imagination about what to do with that is really a challenge for leaders in that space. 

07:36  Leaders Have to Use AI Regularly, Not Delegate It

Parker: I love that lens and that take. So, sort of, the company is the R&D lab and the CHRO can be helping coordinate that. When you talk to CHROs, how many of them grasp that, and how many of them have embraced it? 

Ethan: So I think what I'm finding at the C-level is that there's a very easy answer to this question about who uses it, who panics, and who feels urgency, and it's: have they used AI a lot? The number one piece of advice I give in the book turned out to be the most accurate. Right? Because you never know when you write these things, but the most important piece of advice is you just have to use these systems. If you have not tried to use it for every business decision that you legally and ethically can, you don't know what they can do. And you need to use a frontier model. You need to be using o3 or Claude 4 Opus or Gemini 2.5. Use one of those three and use them for everything for a day and get ten hours in. And half of what I find I have to do is just show people, hey, yeah, AI can do that. And then they start to be like, oh, wow, there's a real thing here. 

So what separates them is this: CHROs who delegated using AI to other people and are getting reports back, or who are just reading about it, aren't getting it. And by the way, there's nothing to be ashamed of. One of the big, baffling things that I've been thinking about a lot, and my wife, who runs the AI lab with me and does a lot on AI training, has too, is why are smart people avoiding using these systems? It's sort of a bigger kind of issue, and I think one that Valence has been thinking about in really interesting ways, which is, like, some people just gravitate toward using these things, and some people want more instructions, and they get scared, and they get nervous, and they just walk away from them. 

So I think, you know, the big issue is how do we get people to actually use these things? Because once the CHRO uses it a bunch, they start to get the idea pretty quickly. 

Parker: One of the things that we talk about is that personal epiphany moment. And I have found the people who are most likely to be vocal advocates for this basically have a personal epiphany moment once a month, if not more. And that's from personal usage with new models, and the epiphany moment from twelve months ago might not be the same one as now, but it's so important to have that. 

Ethan: Yeah. I mean, I talked in the book about three sleepless nights. Like, unless you've had an existential crisis, and I really mean this, you haven't gotten AI. Because you have to have this moment of, like, oh no, why is it doing this? This is so good. Right? If you haven't had that, if you haven't started worrying, like, what does this mean for work? What does this mean for my job? What about my kids' jobs? The people working under me, the state or nature of the company? If you haven't had that concern, then you haven't really used these systems at all. 

And, you know, one of the things I like about the conversations that we've had before this is that I think you've got a deep sense of this, and people need to have this kind of crisis. And I think a lot of software vendors out there and consultants try and insulate you from this moment of epiphany. Right? They're like, just use the system, it'll do this stuff for you. But I think what makes AI so interesting is that, to work with it, you have to have this sense of, like, oh, it is actually pretty smart. I do trust it to do stuff. 

Parker: It is. I have had probably three sleepless nights every two weeks for the past eighteen months. That feels about right. And I couldn't agree more that it is not about saying, hey, don't worry, we've got this. It's: we're in this together, and there are not clear answers, and we have to have more of these clear conversations. That's one of the reasons why we're bringing the CHROs together. 

Ethan: Like, use voice mode in the ChatGPT app, and just have a conversation with the AI about a problem that you're having. I talked to a Harvard quantum physicist who told me all his best ideas come from talking to AI, and I was like, is it really good at quantum physics? He said, no, no, but it's really good at asking questions. So have a conversation with it. Give it a bunch of documents you're having issues with. Have it, you know, create the documentation for this. Here's some documentation, read this as a naive reader, give me feedback on it. You know, create a PowerPoint for me on this. Sometimes it can just do things you wouldn't expect. Do this analysis. Generate 400 ideas for how to solve this problem. Have that conversation. Push it to do work with you. Push back, like, I think you can do more than this. Make it better. Like, I don't think you're taking into account the fact that the industry is changing so rapidly, or you're not taking into account tariff risks. Just push it and see what happens. 

I think too many people use it like Google: ask a query, get a result back. The more context the AI has, and the deeper the conversation, the better the answers are gonna be. 

Parker: One of the things we've always said, which you mentioned there, is using voice mode, and we say go from question to situation. Describe your situation in stream of consciousness in voice mode, and then give it feedback. Say, exactly as you said, hey, this is not how I would think about it, I would think about it that way. And the results are often just extraordinary. 

Ethan: You'll be very surprised. I mean, every measure we have of the quality of these models is much better than the average human, whether you look at emotional intelligence, where people vastly prefer talking to the AI versus talking to doctors, right, or the reading-the-mind-in-the-eyes, theory-of-mind stuff, very good, or creativity, very good. I mean, these are really good systems. 

12:29  The Urgency of AI Adoption & Literacy

Parker: So we're gonna dig into adoption at scale in a moment, but let's talk for a moment about urgency. You've been advocating a position that we very much agree with, which is the time is now, even though everything will be changing and six or twelve months from now, it'll be different. Why is it urgent to get AI tools and AI awareness and AI literacy widespread in workforces today? 

Ethan: I mean, there are a lot of reasons for this. First of all, again, the change is baked in. Your job is not going to be the same in five years. Right? I mean, AI people sometimes think the world changes overnight. And even if we have superintelligent machines, and people have argued that o3 can already do almost all human tasks if we just used it properly, and that's a model that's already out, it doesn't matter, because of exactly the reasons that everybody I'm talking to knows: organizations are complicated and contingent, and all kinds of things happen slowly over time. 

But I would argue the change is already baked in. Right? You talk to anyone in a profession who's doing detailed work today, and they're like, this already does a lot of the work. We just have to figure out how to make it work inside our companies. So I would argue that you have to be aware of this change, the shockwave that's already happening inside the system. 

The second is there are huge advantages to getting this right. Like, there are unsolved problems, especially in HR, that we can now solve. I mean, you're addressing coaching. I'm thinking a lot about tutoring and mentoring and teaching. These are things that were impossible to do at scale before. Now we can do them at scale. What does that mean? How does that change what we can do? And there are crises already happening that you have to deal with. The training crisis is already going on. Right? 

This summer, you'll notice that all of your interns are using AI to do all their work, because they're not dumb. They wanna show people that they're smart, so they're gonna use AI, because AI is better than an intern. And all your middle managers are gonna stop turning to interns for help, and stop having the interns learn their jobs, because AI does a better job and never complains. 

So, like, you have crises that are already built into the system that have to be addressed. There are opportunities. There are crises. There's change already in the system. I cannot emphasize enough how much things are already changing. And I think the worst thing you can do is assume somebody knows the answers to all these problems. I think we're trying to solve some of these issues in education. You're trying to solve some of them in coaching. But I don't think we can argue that there's any one person you go to to solve AI for you. Because I talk to the AI labs all the time, and they tell me they use my Twitter feed to figure out what AI can do. Like, nobody has answers. There's no instruction manual. 

So if you're waiting for someone to hand you fully built-out instructions on how to use AI, you're gonna be waiting until everybody else has already figured this out, and that's too late. 

14:51  Habits of Early AI Adopters

Parker: And the people that are doing this, you know, the outliers that have adopted this mentality, what are some of their habits or rituals, or maybe how they allocate their time? Are there any patterns where you'd say, these are the people who are paying close attention to this, and this is what they're doing? 

Ethan: So that's that lab portion that we talked about. You don't need to be the person doing this all the time, but you absolutely need people in your organization doing this all the time. And people from an HR perspective doing it, by the way. One of the mistakes companies make is they hand this to IT, or even to general counsel. No offense, general counsels in the room. But it becomes a thing that gets wrapped around a technology or legal background. This is inherently about working with people. 

The best users of AI I know are good managers, are good teachers. Like, the skills that make you good at AI are not prompting skills, they're people skills. And so you need to take that approach of having people experiment with it on a people-skill basis: give me feedback, or, oh, you could do better than this. It honestly responds really well to the kind of feedback that you'd give a human. 

So part of this is about experimenting personally, and I think you need to be experimenting more personally than you think you are. And by the way, it'll quickly go from time sink to, oh, I'm gaining time back. But then, even aside from that personal skill, you need people who you think are very smart, who are very good, who are inside your domain, who are also experimenting with this and showing you what they learned, excitedly, on a regular basis. 

To me, the easiest way to find those people is in the crowd: the people who are already using AI. And by the way, your organization is riddled with AI use. All that's happening is unauthorized AI use, because we know the number of people reporting they're using AI at work in America, in a representative survey, went from thirty percent to forty percent over the course of two months. Like, everyone's already using it in your organization. So your job is to destigmatize use, and then, of those people, there are probably some who are already trying to evangelize. You've probably already had meetings with these people who are like, oh my god, AI does all this stuff. And I don't know whether you've pushed them off somewhere else or whether they're working with you, but those are the people who become part of your lab. 

Parker: And I'm trying to recall a stat that I believe I read in one of your posts, which is that regular use of internal GPTs sort of plateaus at twenty percent. Is that the number that you see?

Ethan: Yeah. Anecdotally, twenty to thirty percent is the maximum I find for internal system use. 

Parker: And so if we're at, call it, twenty percent internally but forty percent in a survey, that means at least twice as many people are privately using it as publicly. That's an interesting concept for a CHRO to think about: for every person I hear using it, there's actually a second one who's not sharing it. 

Ethan: Yeah. That's a good way to phrase it. Right? It's probably at least double. And then, of the rest of the people, some of them are just waiting on training, and some of them just need clear instructions. Like, not everybody is gonna be an innovator. They need to get ideas, either from the kinds of well-built products that we're talking about here or else from things that come out of the lab. 

17:50  Overcoming Resistance to AI Adoption

Parker: And I wanna spend a moment on people's motivations. I've joked with someone that once you become senior enough in a company, your private job is almost to become a monopoly. You kind of wanna have a unique perspective, a unique network, a unique something that makes you valuable. People don't do that consciously, but I think unconsciously, they don't wanna give up everything that they are good at. If AI is a superpower for them, is there a real conflict of interest between people finding ways to make themselves way better and not necessarily wanting to share that with five colleagues down the hall? 

Ethan: Absolutely. I mean, it's worse than that. Right? That's one of, like, six reasons you don't wanna share AI use. Because not only do you not wanna give away your competitive advantage, but also, people think you're brilliant right now, and you don't wanna show them that it's because you're using AI. And then also, look, people in companies aren't dumb. They know efficiency gains translate into headcount reduction. So they don't wanna show you higher efficiency, because there might be a reduction. Even in companies where they don't worry about that, efficiency gains mean I'm expected to do more work than I did before. Why would I ever wanna do that? So I don't wanna show you my efficiency gains. Right? 

So there's layer upon layer of reason why people don't wanna show you why they're using AI. 

Parker: And what are some of the techniques that either CHROs or leaders use to overcome those natural hesitancies? 

Ethan: So there are a few things. On the most basic level, it is that clear vision. Right? Having an executive-level vision of, what does work look like with AI? Why should I feel safe using it? Having executives role model their use. Another really important reason for CHROs to use the system is that if you use it all the time for everything, other people will start to see it being used all the time, and that will let you do things differently. But then there is actual change in reward systems. I've seen some pretty crazy things. One company I spoke to, and I don't recommend this, but the CEO really realized how big this was, and he did this six or eight months after GPT-4 came out: he gave ChatGPT to everybody in the company and then fired everybody who didn't use it by the end of the month, after arguing many times that they should use it. And then he gave a $10,000 prize at the end of every week to whoever came up with the best AI idea. 

I've seen CHROs who say, before you hire anyone in the company, the team has to spend two hours trying to automate that job with AI and then rewrite the job proposal, or do the same thing when you're asking for money. I have seen people whose whole model is disclosed AI use: if you turn in a report and you haven't told me how you used AI to do it, I'm gonna reject the report. Everyone has to go through not just training, but training that's about hands-on use, and you have to certify at the end of every class that you built something with AI. Like, if you think it's as big a deal as you and I think it's going to be, it's hard to overdo your push for adoption. 

20:43  Co-Intelligence: AI Augmenting Human Work

Parker: I think that could be the headline: it's hard to overdo the push for adoption. You and I have talked about the importance of co-intelligence versus AI as pure automation. And obviously, AI will automate a number of things. How did you come up with the concept of co-intelligence, and why is it such a central piece of the work that you promote? 

Ethan: So right now, and I really do have to put a caveat here, right, I wrote Co-Intelligence a year and a half ago or so. I still think it describes today's world really well. But models are getting better. There are tasks the AI does better than humans. But for the vast majority of work, the AI is a supplement to human work. Right? We've never had a way of making people smarter on a general-purpose basis, and now we have the ability to do this. 

So, for example, take a deep research report. Deep research reports are very powerful. They're really good. When I talk to lawyers about them, for example, they say it saves forty hours of work producing one, and then it maybe takes an hour to have someone junior check the results. So now everyone has analysis on tap. How does that change your job? We know AI fills in gaps that you have. 

So we just completed a study at Procter & Gamble, with my colleagues at MIT, Harvard, and the University of Warwick, where we had 776 workers. We had them work individually or in cross-functional teams of two on real work tasks. Individuals working with AI performed as well as teams. Right? Absolute co-intelligence stuff. And they were, by the way, as happy as teams, to come back to the happiness point earlier. These performance improvements are very real, but they came from working with AI, where humans use AI to do work for them, but they also have enough expertise to know when the AI is weak at something or making something up. 

22:29  AI's Impact on Entry-Level Roles & Apprenticeship

Parker: That level of expertise brings to mind the question that I think you were touching on earlier, the idea of some of the threats. The bottom rungs of the ladder are obviously, from a career perspective, the most easily replaceable with AI: interns, entry-level roles. And many of those entry-level roles are where people learn that taste, that judgment, that ability to assess quality. How do you see some of that playing out if AI can do the things that early-career people used to be focused on? And what recommendations would you give to CHROs on that issue? 

Ethan: Think about how the education system works. I teach at a great business school. I teach people to be generalists. Right? I don't teach them to work for Procter & Gamble or J.P. Morgan. I teach them to do product development or to think about business analyst cases. And then they learn to be specialists by working, the same way we've taught specialists for four thousand years: apprenticeship. Right? They work under people and do junior work, and in return, mid-level people get help with the junior work, and the younger people get correction over and over again 'til they learn how to do something. They learn expertise, which is very hard to teach. That all is breaking right now as we talk. Right? Because every junior person is dumb if they're not using AI to do the work. Why would they not use this to do all the work that they're doing? And meanwhile, more senior people are realizing that working with people is often really annoying, especially junior folks. Most people are not trained to be good mentors. They're not trained to be good coaches, so they have to figure this out on their own. Some people are good at it, some people are bad at it, and they're stuck doing it either way. 

So that training pipeline that was always implicit has broken. Right? The coaching pipeline broke this summer, if not last summer. And it has to be reconstructed at the CHRO level. Like, we have to take this on deliberately. That means taking L&D seriously as a thing that we have to do. That means thinking about what skills people need to learn in a deeper and more serious way. 

Parker: We sponsored a report with Charter on this apprenticeship model, asking whether an AI coach like Nadia could perform some of the functions of apprenticeship: providing you a framework, giving you a challenge that meets your level, having you get feedback on how well you did, escalating that challenge, and in some cases role modeling what good looks like. And it was a fascinating idea. Nadia does a bit of that, but there are opportunities for AI to solve some of the issues that AI might cause as well. 

Ethan: I mean, I think part of this is that so much about AI is teaching us what we don't know about humans very well. The old way was good enough. Right? Like, if you were actually designing a way to build experts, you would never design the method we have right now: we hope you get paired with a good manager who has time to do it and builds a relationship with you and is a good coach. You wouldn't have teaching done in lecture halls the way we do. Like, there are a lot of broken systems already that were good enough because there was no alternative. 

I think AI is both poison and cure. Right? In a lot of ways, what it is, is a very bright light on organizations. And I think we're gonna see this everywhere. Let's take one very narrow thing that is the first thing everyone does with AI in organizations, which is they do their performance reviews with AI. Why do they do performance reviews with AI? Because doing performance reviews is really annoying. And so the AI will write a performance review, and then they brag about this. Right? But then what was the whole point of the performance review? If the AI is doing it, how can you not go and change the performance review? So it shines a light on this issue, and in doing so, forces you to think about the process, what you should be doing better, how good you are at the job. So I think this is both poison and cure. 

Parker: We talk a lot about what we call the second bounce of the ball. I don't know if that's quite the right term, but the first bounce of the ball is, let's automate the systems that we put in place. But we put in place systems that are inherent compromises to the ultimate goal, because we were limited by the technology, and performance reviews are an example that we talk about. 

So now, for example, if you're being coached by Nadia as a manager, you can have the performance review framework built in, and you can be nudged: hey, you haven't talked about how Ethan's done on this particular topic this past month, let me just ask you a couple of questions and I'll keep that in mind. And so it becomes an ongoing process that is woven in, versus something you do on, you know, November 20, trying to get it done as quickly as possible because you need to check it off. 

And so this idea of how we will reinvent the talent function from first principles, given the power of AI, is an important topic. Do you have recommendations for chief talent officers on how they might begin to rethink the talent function? 

Ethan: I think you're gonna see the same crisis across function after function, which is we need to sit back and say, why are we doing this? The point of the performance review is not to do a performance review. It is to provide an opportunity for management to reflect on how someone's doing, to sort people, to give people feedback to improve at their job, all of those things, and maybe five other things inside your organization that it does. Right? Figure out who's a good manager or not. Maybe it serves some secret point-scoring purpose that lets you win political games. We need to start exposing what these things actually do in order to rebuild them. 

So as a chief talent officer, part of what you wanna do is think about: what are my goals here? Right? Process becomes goal in a lot of cases inside organizations, and that can't be the case anymore. And this is, by the way, why you can't turn entirely to external consultants. You can bring in the smartest consultants in the world, and they cannot solve this problem for you, because it's an internal problem about how your organization operates at a deep level. It requires deep expertise in the subject. And I think this can be uncomfortable. Right? In some ways, more uncomfortable than realizing AI is really good is realizing, oh, why am I doing this? How do I think about this again? Or: is the way I'm doing talent development only an artifact of how it's been allocated, you know, learning as a reward rather than a tool? Well, do we have to change that, or do we wanna keep it the same? So I think there's a really profound set of questions to ask.

28:43  Customizing AI Models to Organizational Nuances

Parker: One of the things you mentioned is that experts can't come in from the outside, because you know your business best; there are unique elements of your business. Could you make that same argument about AI models? The expression is that AI models are almost a zip file for the world: they have incredible IQ, but they don't know your organization. How would you help AI models understand the nuances of an organization, so they'd be even more powerful as a partner to a chief talent officer in answering those types of questions?

Ethan: Okay. So let's do this in layers, not talking about the deeper applications like the kind of thing you're building. The easiest way to do this is just context. The AI thrives on context. And the other thing that's really interesting, that we haven't even talked about, that chief talent officers and CHROs should be thinking about, is that the best instructions for AI are the manuals you use for onboarding employees. The best thing you can give an AI is an example of the best piece of work you do. You don't want the AI to read every piece of work you've done. You wanna give it: here's an example of the perfect report you should do, and why it's great. If you can train people really well, the AI thrives on exactly that training material.

So your starting point is really just thinking about what you already have in hand, the things you've spent all this time building for humans, and you literally just paste that in. Context, instruction manuals, rubrics, examples: those are the things the AI thrives on, that change it from generic to specific. Because it's not that it's a zip file, but, I mean, it is in some ways. It's a zip file of the entire web, a representation of everything. But in that latent space the AI contains, some of it is trained on your content. It's certainly trained on your industry. It knows a lot about this. You can often tell it, write this in the style of, you know, pick your favorite HR executive, and it knows how to do that.

So I think you have to realize it has not just genericness in it, but specificity you have to invoke.

Parker: One feature we began rolling out for a couple of pilot companies, which they're very excited about, is taking the best practices of your top performers. Nadia will actually go interview top performers: how would you handle this situation, how would you give this kind of advice? And she builds a bank of, exactly as you described, best practices that everyone's coach can access. That is very, very powerful, because it's not just the knowledge in the rubrics and the reports, but the knowledge in top performers' heads about how they approach situations specific to a company. And exactly as you said, the more context an LLM can access, the richer the answer is gonna be.

So I think there are elements of that: how do you pick your best practices, curate them, and then make them available to the AI instances people have access to?

Ethan: Yeah. I mean, I think curation ends up being a really important point in all of this. The idea is that you don't push the responsibility off to others; you wanna think through these issues yourself. Right? Curation is one of the most important things. I think people don't understand enough about how AI provides abundance. You don't need 10 ideas. Ask for a thousand ideas. Ask for 500 ideas. Seventy-five slogans for this. I like slogan 12, give me 17 variations on it. I don't like the first paragraph. You know? So curation, taste, all of that matters a lot, and at the executive level there's a lot of that available.

So my wife is probably one of the best prompt engineers on the planet. Google uses her prompts as the gold standard to compare their new models against. But, like, she's never coded a day in her life. She has a doctorate in education. She worked in HR and training. Right? That's her background. And it turns out that if you're really good at theory of mind, if you're good at understanding what someone might be confused about, what they're good or bad at, if you're good at breaking down tasks into steps, if you're good at troubleshooting when something goes wrong, if you're good at thinking through the kinds of problems someone might mentally have and why they might be confused about a topic, you're going to be good at AI.

I think one thing I like about the approach that you and a few other organizations are taking is thinking deeply about one area you know, and realizing that the models are very smart but need scaffolding and help to accomplish something, because we don't have broad-based expertise in this yet.

33:04  A 3-Part Model for the Changing Nature of Work

Parker: I know you've talked to a lot of C-suites. What are some of the things that resonate most with them when you have a conversation with, with these leaders? 

Ethan: So I think people are excited by the idea that this is a way to change the nature of work, to change the game. Leaders are seeing this and realizing, wow, there are advantages beyond ROI. The thing that worries me most is when people start from a pure ROI perspective, which inevitably leads to the same problem people have always had: technology used to cut costs. Right? And I think people are starting to realize, wow, this changes what we can do. It changes how we serve customers. It changes what we can provide to people. It's not a one-to-one replacement for a person. It is an expansion. It lets people do the impossible.

I have sort of a three-part test I ask people about their internal AI use. Right? I say: What thing that was important to you have you realized AI now does for everybody on the planet, so you no longer need to do it? If you don't have anything in that category, you've made a mistake; there is something you thought was valuable that's no longer valuable. What do you jettison? Then: what impossible thing are you doing now that you couldn't do before? And: what are you building that doesn't work yet, but might work as AI gets better? If you don't have answers to those three questions, I don't think you're far enough along yet.

Parker: I mean, it's a fascinating future. I imagine every six months, the prognosis would change. Ethan, I just wanna say thank you. This was just such an energizing and invigorating conversation. I know that many of the CHROs in the audience are gonna find it really thought-provoking, and thank you for taking the time. 

Ethan: Great. This was a lot of fun. Thank you.

HR Is Now R&D

In this powerful conversation with Ethan Mollick — Professor of Entrepreneurship at Wharton and best-selling author of Co-Intelligence — one thing became clear: AI is already reshaping work, organizations, and leadership—and HR is now the R&D lab for that change.

As companies race to adapt and adopt, Ethan argues that HR leaders have an unprecedented opportunity (and responsibility) to guide the workforce through the AI transition. That requires urgency, but it also requires a clear vision for how to bring AI into the workplace. And the only way for leaders to set that vision is real, hands-on experience with frontier models and AI work coaches like Nadia.

Ethan Mollick

Professor of Entrepreneurship, The Wharton School

HR Is Now R&D

AI & the Workforce Series

November 14, 2024, New York City

How your workforce adopts AI to learn, perform, and grow is this year’s defining challenge.

Valence's community of CHROs and talent leaders meets regularly to explore this evolving frontier.

Our 2024 AI Summit convened 200 executives from 100 organizations, collectively representing over 7.5 million employees. And the June 2025 virtual summit helped a thousand leaders explore how to close the gap for real AI impact at work.

Recent Events

2025 Virtual AI Summit: The Adoption Gap

Fortune 500 CHROs and leading technologists providing the insight edge to help build your roadmap to AI results.

AI is joining the org chart and adoption comes before productivity. Senior leaders shared what does and doesn’t work to drive adoption, based on real initiatives.

Learn More

An Evening with AI: Perspectives for Talent Leadership

This invitation-only event in London brought together an intimate group of global CHROs and senior HR leaders.

Attendees heard from Chief HR, Talent, and Learning Officers from WPP, LEGO, Novartis, and ING, as well as FT award-winning author and technology journalist, Parmy Olson.

Learn More

Highlights from our 2024 Summit

The Era of Personalized Knowledge

Parker Mitchell, CEO, Valence

AI and the Future of Talent

Lucien Alziari, CHRO, Prudential

Lindsay Pattison, Chief People Officer, WPP

Change Management for AI

Cameron Hedrick, Head of Learning & Culture at Citigroup

Diane Gherson, former CHRO at IBM

Lisa Naylor, Global Head of Leadership Development at Novartis

AI Myths

Brent Hecht, Microsoft

AI Coaching Early Use Cases

Tim Gregory, Managing Director, HR Innovation & Tech, Delta

Lesley Wilkinson, Chief Talent Officer, Experian

Jennifer Carpenter, Global Head of Talent at Analog Devices

Upskilling an AI Workforce

Rachel Kay, CHRO, Hearst

Chris Louie, Head of Talent, Thomson Reuters

Tina Mylon, Chief Talent and Diversity Officer, Schneider Electric

Game Changing Coaching

Dr. Anna Tavis, Chair of the Human Capital Management Department, NYU

Join our mailing list for early access to the 2025 AI & the Workforce Summit

Subscribe to our newsletter


AI & the Workforce: The Human + AI Era

Join us in NYC, November 6th
The AI-augmented workforce is here. Our tools are evolving fast. Leaders have to evolve faster.

We're gathering the AI experts and talent innovators who are reimagining how work gets done. Together, we'll write the playbook for a new era of work: Human + AI.

Save Your Spot

Highlights from Past Summits

600+ Leaders

15M+ Employees represented

450+ Companies

HR is R&D now. Everyone's using AI to do their work, so they're not learning at an intern level: that's landing first in the laps of CHROs. We have this new set of capabilities for education, for coaching: that's landing first with CHROs. The leverage point for organizations is the HR function.

– Ethan Mollick, Professor of Entrepreneurship, The Wharton School

EXPLORE THE SERIES

Valence’s AI & the Workforce Series

Technology and talent leaders at the frontier of work discuss how AI is changing our organizations, how we lead them, and how we work with them, with detailed use cases and learnings from real-world deployments.

2025 Virtual Summit: The Adoption Gap

AI isn't the hard part. Adoption is. But this isn't disillusionment. It's a challenge to leaders. Valence's 2025 virtual summit gathered Fortune 500 CHROs and leading technologists to explore how to close the gap for real AI impact at work.

explore sessions

FEATURED SPEAKERS

Ethan Mollick

Professor of Entrepreneurship,
The Wharton School

Reid Hoffman

Co-Founder at LinkedIn, Manas AI, & Inflection AI

Paula Landmann

Chief Talent and Development Officer, Novartis

Diane Gherson

Former CHRO, IBM

2024 Summit

At our inaugural summit in November 2024, hundreds of talent leaders representing millions of employees came together in New York to lay the foundations for the future of work in the AI age.

explore sessions

FEATURED SPEAKERS

Lindsay Pattison

Former Chief People Officer, WPP

Prasad Setty

Former VP of People Operations, Google

Lucien Alziari

Former CHRO, Prudential

Bill McNabb

Former CEO, Vanguard

AI Unpacked

AI is the fastest-evolving technology in human history. Anyone who's certain of what the future holds probably isn't paying enough attention. Cut through the myths and explore what's possible with some of the best minds in AI.

explore sessions

FEATURED SPEAKERS

Geoffrey Hinton

Nobel Laureate in Physics, "Godfather of AI"

Ethan Mollick

Professor of Entrepreneurship,
The Wharton School

Brent Hecht

Microsoft

Jeff Dalton

Head of AI and Chief Scientist at Valence

HR & The Future of Work

The question isn't if we will adopt AI. The question is how. Will we use this technology to build stronger, more capable organizations, or just to cut costs? Learn how talent innovators are leaning into this moment to unlock capacity, empower their people, and redesign how work happens.

explore sessions

FEATURED SPEAKERS

Jennifer Carpenter

Global Head of Talent, Analog Devices

Lisette Danesi

Global Corporate People Lead, WPP

Hein Knaapen

Managing Partner, CEO.Works, former CHRO, ING

Lesley Wilkinson

Chief Talent Officer, Experian

Feedback from past summits speaks for itself

“The fast-paced development of AI in the workforce will change the way we think about personalization, upskilling, and work excellence, as well as what is possible for the talent function of tomorrow.”

“An energizing opportunity to exchange on the possibilities of AI to shape work and the workforce of tomorrow.”

“Thank you for ushering in this new era so magnificently. Nadia may be the most prolific coach in the world right now ...”

"WOW!!! So many great insights, nuggets and perspectives that are helping me look at and approach AI in a new way."

"You thought of everything; the most relevant topics and all covered with the right amount of detail."