
AI Coaching Early Use Cases with Delta, Experian, and ADI

AI will change every workplace, but where does that change start? In this panel from Valence's AI & the Workforce Summit, HR leaders Tim Gregory (Managing Director, HR Innovation & Tech at Delta), Lesley Wilkinson (Chief Talent Officer at Experian), and Jennifer Carpenter (Global Head of Talent at Analog Devices) reveal how leading organizations are testing and experimenting with AI, the most valuable starting use cases, and how to expand from initial use cases to effective AI at scale.


Video Transcript

Parker: Terrific. I think Tim is somewhere and will be joining. We're going to be talking about AI coaching. I think, you know, we'll start with the coaching bit and the investment in managers, and why it is so important to make investments in managers. We'll just do a quick go-around to understand how you and your companies are thinking about that use case.

Lesley, I will start with you because, as far as I know, you have the most exciting news that's fresh off the press. Do you want to share with the room what that news is? I think it fits in nicely with this coaching thing.

Lesley: Thank you for giving me the opportunity to say it out loud. I've been saying it out loud all morning. We just found out this morning that we're in the top 25 great places to work in the world. So, the top 25, just found that out.

It feels good because our philosophy is about people first. It feels like the right reward for being a people-first organization. So if your question is about why would we invest in something like coaching, I guess it's just because we know the potential that coaching has to unlock performance. It unlocks performance and it unlocks human potential. Why not give that to everybody? Our job is to unlock performance and potential and yet, we've given it, historically, to such small groups of people, it's been cost-restrictive, it's been time-restrictive, and it's been based on that really messy human relationship thing. So if you can find another way of doing that, then why not? 

Parker: Jennifer? 

Jennifer: Yes, hi. Congratulations. 

Lesley: Thank you.

Jennifer: I think just to build off what you just said, when we built the business case at Analog, I immediately went to equipping managers. Then I thought to myself, am I doing an actual disservice by thinking in that narrow of a business case? Because when you look at the representation of our management, they're largely heavily based in North America, largely a male audience, et cetera. So we widened our business case to actually have 40% managers, 60% individual contributors. I'm very happy to say the invitee list had 50-50 gender representation in it. So I think it's really important. We're going to study not only the adoption patterns–and I have a little bit more data to share about the adoption patterns–but also longitudinally understand how performance is being impacted, and so forth. Something I would say to all the talent leaders in the room: when you are thinking about your early adoption use cases, make sure you're not taking too narrow a view. Democratize access to these types of products, because we do believe very wholeheartedly that it will ultimately help improve individuals' performance and the performance of the company. I think we all have a really deep responsibility to think very broadly about how we are inviting people to be early adopters in these things.

Parker: Tim, how about you at Delta? What's the business case for investing in leaders and managers? 

Tim: For us, the front line, where employees engage with our customers–that's where it's all won. We put a great deal of effort into making sure that those experiences are phenomenal. We want to make certain that the employee experience ultimately drives the customer experience. So for us, it's really self-evident. We've got to really focus on making sure that those managers and our frontline leaders are delivering those moments that delight our customers.

Parker: Wonderful. Jennifer, you're briefly mentioning, sort of, your pilot. I know that you're just in the design phases, but you've done a really thoughtful job–as you said–about who are the right people and you want to set that up so you can measure ROI, you can measure uptake, usage, feedback. Can you share a little bit more about how you design that with the room today?

Jennifer: Sure. And I'll tell you exactly how I was thinking about it. Did your mother ever tell you when you make an assumption, it makes an ass out of you and me? So I thought, we can't make assumptions. So instead of being an ass, why don't we ask? We really were trying to be thoughtful, and I did make a mistake–I did make an ass out of myself. In some of our early pilots, we just gave the generative AI tools to people. And crazy, some people didn't want it. There was no parade being thrown for the generative AI product that we just kind of shoved down their throats.

With Nadia, what we're doing instead–and I really recommend anybody do this–is you invite with a very delightful proposition and then you allow people to opt in and opt out. What we found is we had 1,600 people opt in; it's about half of who we invited. Of that, about half responded: about 81% said they'd love to, about 19% said they did not. But I now know why they did not.

They also said the reason they opted out is they didn't have time. Now, I don't know if I believe them on that, because I also kind of embedded another research question in the mix. Only 40% of them agreed that they're confident in their ability to use AI tools. So, do they not have time? Or are they more reticent because they're not sure? That tells me a lot about adoption. There are going to be people who are more a passenger than a pilot on the plane. Who are those passengers, and how do we give them both the optimism that it can improve productivity and quality and the agency that they can do this, that they can be effective?

Parker: Was there a difference in the percent who were confident in their use of AI for those who accepted? 

Jennifer: Yeah, it was crazy. Get a load of this. Of the people who opted in, 75% of them agreed with the statement that I have confidence that I can use this thing. The people who opted out only agreed that they have confidence in their ability 40% of the time.

Parker: That's quite the jump. So it's probably a pretty good explanatory variable, with time being the excuse. I think this is what we're going to see, story after story: not everyone is going to adopt it with the same level of enthusiasm, and the thoughtfulness with which we design these rollout programs is crucial.

Lesley, you've had a chance now–I think you're in month six or month seven of the rollout of Nadia at Experian. Can you share a little bit more about how you designed–I think–a very thoughtful, curated experience for people who are using her as a coach?

Lesley: Actually, it's a bit longer than that, Parker. We're almost coming up to a year of our first experiment. We roll everything out through a series of experiments. That's the way we design products, and that's the way we design HR products as well. So it's really natural to just go for a series of experiments. Our first experiment was–I'm going to call it “Vanilla Nadia”–putting out there the coaching solution. We had 100% take-up from the first experiment group within the first three hours of extending the invitation. It's the dream of us learning and development people; it's never happened before. But less repeat take-up.

So we were wondering why, and one of the problem statements was about: does it feel relevant to me and my organization, my job? So, experiment two, we trained Nadia on our own characteristics of leadership, on our own philosophies and leadership content. Then we started to see the repeat use as well as the immediate take-up.

So, experiment three, we have our engagement surveys, what happens when our engagement surveys come out, how do our managers get to use that? A shout out for my brilliant colleague, Brad, who's in the audience, who runs our engagement surveys and said, you know, we need Nadia to sit with a manager and explain, and talk, and coach on: what are the next steps?

Then we did the same with performance management. So, can you imagine, you're about to do a performance conversation. You can role play it, you can role play it verbally. That was the next experiment. That's when we started to get really strong ROI. One of the main measures of this is: do we see an increase in leader effectiveness on Great Place to Work? And yes, the answer is there's a 5% uptick in leadership effectiveness between the people who use Nadia and those who don't. Our last experiment is a program offering for all middle managers, the kind of leadership stuff you need to do on a day-to-day basis. The whole program will be based on Nadia.

Parker: I want to double click on that in a moment. But, Tim, I know we're earlier on in our journey, but one of the things that has come up in the conversations that you and I have had is you're looking at a range of technology options. You're looking at internal, external, current vendors, new potential partners. So you scanned the landscape. What was it that caught your eye about Nadia as Delta began their initial trial and now deployment?

Tim: Sure. I think as Bill had mentioned very early on, there's only so much capital that goes around and you need to be able to create a competitive position and explain the value of AI. So I can share with your audience sort of how we got our leaders to understand this and then where, sort of, Nadia and the Valence tool kind of fit into our model.

It started with having to explain this ML/AI thing that you did at the very beginning there. It was like, Hey, Tim, you know, it didn't seem like that long ago we were describing everything as ML and AI. Now, it's just AI. What happened to the whole ML thing and how is this different than that? At Delta, we've been using machine learning for quite a long time for fuel management, even for weather prediction. In that case, if you can imagine an X, Y graph with a slope on it, and you've got a couple points on it, and you can change the value of those points, and change the slope to find a better prediction, and you can explain why it is that that particular point changed. You moved it this percentage and the prediction quality got better. You can update that information real time with the Internet of Things.
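
Tim's X/Y picture is the classic supervised-learning setup: a handful of points, a slope you can nudge, and a prediction error you can explain at every step. As a rough illustration only (not Delta's system, and with made-up data), a least-squares line fit by gradient descent looks like this:

```python
# Minimal sketch of the "slope on an X/Y graph" kind of ML Tim describes.
# Illustrative only; the data and learning rate are arbitrary.

def fit_line(xs, ys, lr=0.01, steps=2000):
    """Fit y = m*x + b by gradient descent on mean squared error."""
    m, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of MSE with respect to slope m and intercept b.
        grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (m * x + b - y) for x, y in zip(xs, ys)) / n
        m -= lr * grad_m
        b -= lr * grad_b
    return m, b

xs = [1, 2, 3, 4]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x
m, b = fit_line(xs, ys)
```

Because each parameter update comes from an explicit error gradient, you can say exactly why the slope moved and how much the prediction improved, which is the explainability contrast Tim draws with generative AI's neural networks.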

So we've been doing that sort of thing for a very long time. This is fundamentally different with generative AI and the use of these neural networks that can learn, in ways that make explainability a little bit difficult sometimes. The first thing was just kind of explaining that, just like you did for your audience here. Then what we did is, we took this idea of learning and its ability to learn. You know, Lesley, as you'd mentioned, grounding it in your own information, we went down a similar sort of path. But we explained the process: we created four categories of these generative AI learning, sort of, use case scenarios. The first one was really focused on things you could do very quickly and add value. You could say, Hey, look, we're doing some stuff in generative AI, but it doesn't know you, it doesn't really understand the organization.

Then in the fourth category, you have stuff that knows you very, very, very well–we'll talk about that in a second. In the first category, we found there were lots of our existing vendors where you could flip a switch, configure it, and turn it on in your existing environment. You could check the box and say you've done something in AI. The value of it is questionable; it doesn't really know your enterprise, it doesn't understand the context of your business. Really, what's happening in a lot of those applications is they're just making calls out to these big frontier models that have been trained on Reddit, and so forth. You can do them very quickly and very easily oftentimes, but the value is minimal.

The next category is where you can actually ground it in the information of your organization. You can do it relatively quickly. The work that we've done with Valence, I think it was a couple months, maybe, from the very first time we had a conversation to the point we had employees in there using it and evaluating it. We put them through Qualtrics so they could evaluate the experience, and so forth. But this category is where it starts to get grounded in the information of your organization. 

The next category is, really, more sophisticated. We're really doing some of those things internally right now, where the model is actually being trained on the data. So we're using Delta IT and we're building some things and there's some opportunities for us to work together down the road. I know you guys got a great direction in your head in that space as well. Then the final category is stuff that we don't really have anything in anywhere in Delta right now, but the most cutting-edge technology that you're seeing with these reasoning models and compound AI, where you have smaller models working together to solve big problems at the enterprise level with an agentic, agent-based technology and all those sorts of things.

That is an extraordinary realm, particularly as the cost of silicon goes down, the compute value gets better and better. So that's kind of how we set it up. We said, AI/ML, we're going over here with generative AI. Here are the categories, and Valence sits right in there. It's like great value, quick to implement, grounded in your enterprise context. It made it real simple for us.

Parker: One of the themes that I think I'm hearing from the three of you and from other conversations is that there will be a multiplicity of solutions. There is no one solution. It's important, I think as Bill said, to have a portfolio of options. Jennifer and Lesley, I know both of you have really thought about how Nadia as a solution will coexist with Microsoft Copilot.

Jennifer, you've got some data that you've already sort of found in the early days. Can you share more about how you shared Copilot, the uptake there, the use case, and then how you see the link with AI coaching?

Jennifer: Sure. Parker and I have also talked about, Oh, Jennifer, you've been using generative AI for so long. Well, "so long"–when you think about it, we're here in November. Just to remind everybody, November 30th, 2022, was when ChatGPT was released. A few days after that, I got in there and it helped me write, while I was at IBM, my team's performance reviews. Then, on November 6th of 2023, OpenAI released the ability for people like us to create GPTs, our own little agents.

So a year after that, I created a GPT to write performance reviews and I shared it with my executive leadership team. I said, hey, just for us, try it out, what did you think? The feedback they gave me was you're too nice. Your feedback assistant is too glowing in their language, funny enough.

Now, this past October–so that's three cycles–in three weeks, I worked with our teams to create a Copilot studio GPT that we put into Workday. Now, when you think about that hockey stick that Brent was talking about earlier today: a use case of one just helping me, a use case of maybe six people just helping six people, and then, in that period of time, we had 22,000 engagements with employees writing their self input and setting goals. That was in the last month. Interestingly enough, because–remember, I never make an ass of myself, I always ask–I asked those people who were using the writing assistant: think about how long it would normally take you to sit down and do the dreaded task of reflecting on your annual contributions and setting your goals. How much time did that thing we just produced in three weeks save you? I was floored that 45% of the people said it saved them about half the time. They got it done in half the time. I was even more surprised that 3% said it saved them 75% of the time.

But, you asked me to say, how did this relate back to Nadia? I can track that, because those people sampled a writing assistant and began to use it, they're some of the highest adoption and opt-in of Nadia, just a few weeks later. So my hypothesis is the more we introduce these experiments and we get people more optimistic and more confident in their ability to use these things and to see value, the more willing they're going to be to try and reap the benefits that we know Nadia can provide to them. But we have to get over this mindset issue that we have. You know, I love the comments that come through, because I don't just ask them statistical questions. I ask them, what else do you want to know? One person who doesn't want to use a writing assistant or a coach said that they don't like the BS bingo generator of AI. Another person doesn't like Nadia because it has her ex-husband's girlfriend's name, so I get all kinds of insights from the comments. But you have to listen and really understand what is getting in people's way. I'm just thrilled to say that as we do these little micro experiments, we're, one, helping people save time and improving quality, and, two, getting them more ready for what's next. Because I think that's what all of us, as talent leaders, are responsible for doing: preparing people, and, in some ways, protecting them as well. We're building up this muscle and all these little experiments are helping us get them into shape, whether they want to go to the gym or not. We're helping them do the reps.

Parker: And it's so crucial, with the speed, the exponential speed that you talked about, to get them started now, because at some point it might just become too much, too overwhelming to get them started.

Lesley, I know that, you know, when Nadia was rolled out, it was pre-Copilot, but now Copilot coexists with Nadia. What are some of the lessons that you've drawn from that experience? 

Lesley: Well, actually, I'm just going to–Jennifer, such an interesting example. So maybe phase four of that example is actually Nadia role playing performance conversations, which is what we're experimenting with now. Big fan of Microsoft Copilot–it works fantastically for the kind of example that you've just given, but it can't right now role play and give feedback. Unleashing that power–all we've done at the moment is give that to leaders–has really changed the quality of a performance conversation. The next step is to give it to all employees. I think using Jennifer's is just a great example, actually.

Parker: Tim, you've got a huge frontline workforce, and this is a frontline workforce that is probably not as familiar with technology as the HQ workers. How does Delta think about ensuring that both parts of the workforce have access to this technology that could be transformative?

Tim: All of the technology that we develop, we really start there and engage where employees can connect back to the organization. So, obviously, with the frontline, it's mobile technology. We're very focused on making certain that their voices are heard, particularly in the early stages when we're designing these solutions.

I mentioned earlier when we did the pilot with y'all, we had seen–I said “y'all,” I'm actually from New York. I moved down there six years ago and now I've got the “you all” thing. But they are included in the early design phases. We implemented Qualtrics for all the measuring pieces, so we got to hear all the verbatims and then fine-tune it and improve it more. So, really, everything: 90% of our entire workforce is out there engaging with customers. So it really is at the heart of all that we do.

Parker: And Jennifer, you've been using AI sort of personally for a while; you've experienced some of the change in capabilities. If you try to picture yourself and Analog Devices maybe 12 months in the future, what are some of the hopeful use cases that you could imagine emerging?

Jennifer: When I think about the outcomes of the goals, like, let's set the intention. One outcome I hope is that we see a direct correlation to increased business performance and revenue growth as a result of this. Because, you know, when you think about this passenger versus pilot mindset, one study that was done by Jeff Hancock from Stanford's Social Media Lab–and BetterUp Labs as well–looked across a population of 10,000 workers, 18 industries, and they found those people with a pilot mindset–you know, that high optimism, high agency–were 3.6 times more productive. Now, I have got to trust Stanford, they know what they're doing–I wonder how they calculated that productivity measurement–but let's say it's more right than wrong. If we can introduce this 12 months from now and help people get into that upper right hand quadrant of pilot mindset, productivity increases, innovation is unlocked, and collaboration with AI will also foster creativity with their colleagues. And I have to believe that that's going to improve revenue growth, because it's going to improve, you know, performance and hopefully improve engagement and make it a best place to work. So I think if we really harness this right, we're going to get people prepared, we're going to make more pilots, and we're going to be more profitable.

Parker: Lesley, the last example that you shared was around the integration into a leadership program, and I think that's a new use case where we've worked really closely with you and Brad and Sophie and many people on this. Can you share more about the idea behind it and the early stages of the genesis so far?

Lesley: Yeah, actually–and I've only just realized this in that last question you asked and Jennifer's response–it's on the problem statement. I think one of the opportunities is that the problem statement is about the problem of humanness. Rather than worry about how AI can cause issues and take away our humanness, how instead does it help us solve the problems that come with humanness?

What has been making me think is that, you know, the biases that we all carry around with us in the people processes, in our people decision making–it all comes together when we have conversations with managers. How many times as HR professionals have we said, or have I said, Oh, if only the leaders would, you know–this is all about the leaders and their relationships. Well, does AI have a space in there to intervene in that relationship? Or to actually take away some of the bias in that human-to-human relationship? We're running a big leadership program next week, which is still face-to-face, so that ability to have retreats is still important. And on the plane over, I was going back and rereading Thinking, Fast and Slow, so it's really got me thinking about this first-order thinking and how we get the hell out of that.

And that, I think, is what we're trying to do in the leadership program, with Nadia. We're trying to replace the problematic bit of humanness with a machine and allow space for human-to-human conversation between people and with their managers. So Nadia will take over some of the basic tooling and some of the space for reflection and then leaders will do the rest.

Parker: I mean, this idea of AI being closer to being a human–I mean, Diane mentioned it. Thinking, Fast and Slow–interesting that you brought that up. It was one of the pieces that I was able to talk to Geoffrey Hinton about: how can we take that analogy and apply it to LLMs, and how do we help them think slow? Because it's hard for us to, but they're able to do so with, you know, the size and the scale.

Tim, you've talked about four different dimensions and, you know, I saw your eyes light up at that fourth dimension, the sort of fourth pillar of what's possible. What are some of the steps that Delta is taking to do the kind of experiments that we've been talking about today, in something that also feels pretty high-stakes?

Tim: Sure. We're really preparing for the future in two ways. One, as we all know in the digital age, data is the oil of the digital age. We've heard that for quite a long time now. I think there's a new dimension to that, with the age of AI, and that's a reward signal. That's another really, really important thing. The machines as they're initially trained on the data will present a prediction. They need to be aligned with the culture of your company, and so on. And that's really where these signals come in. So to the degree that in our case, a hiring manager is engaging with a prediction on what a particular skill may need to be for a particular job, they can evaluate that. They can give it four stars, thumbs up, thumbs down, or maybe the initial prediction had a hundred words in it and they changed 50 of them. All of that is a signal that is really, really important to improving the quality of the prediction down the road. 

So what we're doing now to prepare for the future is really focusing on that data. So we're building some applications that will capture the initial prediction the machine made, store it, take the next value, the preferred value that the human provided, put that into a relational database as well, and generate transactions. So that, ultimately, those input/output pairs, those comparatives, can be used to train the machines. 
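
The capture pipeline Tim describes–store the machine's initial prediction next to the human-preferred value so the input/output pairs can later train the model–can be sketched in a few lines. This is a hypothetical illustration, not Delta's actual system; the table and column names are assumptions:

```python
import sqlite3

# Hypothetical sketch of "keeping the signal": log each model prediction
# alongside the human-corrected value and any rating (stars, thumbs, edits)
# in a relational database, so the comparatives can later be used for training.

def init_db(conn):
    conn.execute("""
        CREATE TABLE IF NOT EXISTS feedback_signal (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            prompt TEXT NOT NULL,          -- what the machine was asked
            model_prediction TEXT NOT NULL, -- its initial output
            human_preferred TEXT,           -- the edited/corrected value
            rating INTEGER                  -- e.g. 1-5 stars, or NULL
        )
    """)

def record_signal(conn, prompt, prediction, preferred=None, rating=None):
    conn.execute(
        "INSERT INTO feedback_signal (prompt, model_prediction, human_preferred, rating) "
        "VALUES (?, ?, ?, ?)",
        (prompt, prediction, preferred, rating),
    )

conn = sqlite3.connect(":memory:")
init_db(conn)
record_signal(conn, "skills for role X", "100-word draft", "50 words changed", rating=4)
pairs = conn.execute(
    "SELECT model_prediction, human_preferred FROM feedback_signal"
).fetchall()
```

The design choice is the one Tim names: the stored prediction/correction pairs are the durable asset, independent of whichever model produced them.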

Everything we're building, we're really thinking about how we can keep that signal. We think it's going to be a strong competitive advantage over time. This technology is unlike anything else we've seen before. Traditionally, you purchase it, and then five to six years later, it's depreciated and sort of its value does this. I think it was Brent who showed that sort of hockey stick–it nearly goes vertical with some of these things. As long as you're capturing signal and you've got good quality data, the value of these assets is going to be extraordinary. I think I wouldn't be going too far out on a limb to think that at some point you'll see, even from a Wall Street perspective, evaluating the quality of these capabilities that you have within your organization–how do you value these assets? Like, in our case, we're building things that know all the skills that we need to run our business. What is the value of that as an asset, as opposed to what you traditionally have with a lot of technology? So, brave new world coming up. But to answer your question, it's really focusing on the data, making sure that we're capturing signal, making sure people understand the value of that, to prepare for the future.

Parker: Wonderful. Jennifer, you talked two years ago–I mean, you were an early adopter: 1, 6, 22,000. What are the experiments that you're running right now that are sort of in the 1, 6, 12 category, that might explode a year from now?

Jennifer: We're focusing on things that are fairly turnkey, I would argue. So, you talked about Copilot. We have a mini experiment, about 300 people, that we've been studying very closely on Copilot 365. That's going to ramp to mostly 14,000 employees, I would think, within the next year. That's really just focusing in on core productivity and just early adoption. I think getting people accustomed and climbing that, you know, that optimism and agency ladder, so to speak.

We also have a growing software and digital capability, AI capability, within ADI. So we're introducing, obviously, GitHub Copilot and other coding instances and things like that. That's really going to, I think, drive enormous value. So, in the next 12 months, it's around upskilling and readiness and listening, because we always have to adapt. And I think Nadia is going to come in at kind of the softer edge: it's not just about productivity, it's about your performance. It's a little more personable, perhaps, than driving the productivity dimension so hard.

So I'm really excited that, in, I think, the first five days, we got 1,600 people to opt in. We do have plans to continue to scale out Nadia even beyond the 1,600. But we want to study and learn so that we can make it even better for people for the next wave.

Parker: I mean, it's funny we hear Diane Gherson talking about, you know, the delight, and we want to make Nadia delightful, and we want to make it a great experience, but we also want to make it a Trojan horse for productivity. We want to help people be better and faster at their job.

Jennifer: Well, people want to be better and faster. 

Parker: They want to be better. They want that. That's what they're excited about. Lesley, what about for you? Are there any experiments? You and I have a lot of chats about sort of the agile mindset in HR and it's obviously working with the results that you're getting and the recognition. What are some of the experiments that are most exciting for you these days? 

Lesley: As an organization, we've been using AI for the last couple of years. So, for new product development, it's super exciting for driving financial inclusion and equal access to products–that gets me really excited. In the people space: for productivity, for the front end of our systems, and in our leadership offerings, we built something called the Leader Exchange, which is our own program, which has AI built into it for search and dashboarding. But it's this space, really–I think it's the space where Nadia can do what other forms of AI can't do right now, which is to replicate the human dynamic but remove some of those problematic areas of the human dynamic.

I've been thinking a lot about the hockey stick. This is going to be, I think, a much shorter period of time before we get up into the turn. And then the thing we're really working hard at is how we create the conditions in the organization, and the skills in our people in HR and in the wider organization, to constantly see what might be coming around the corner and to understand how we quickly respond–paying as much attention to that skill set as to the actual experiment on the technology that comes at us.

Parker: And I think that mindset is just so crucial. It's sort of: try to look around the corner, try to pay attention at, you know, every level of the organization. And then I just really love the, you know, the reinforcement of the idea of experiments. I mean, that's how we are going to learn in a world that's changing so quickly. Preferably measurable experiments, as we've all talked about. And it's been wonderful to have you share your experiences with the group today.

So, thank you all. Thank you for flying in–you know, none of these are New Yorkers today. They might have New York roots at some point, but they've all traveled to get here. So thanks, Tim. Thanks, Jennifer. Thanks, Lesley.