The Frontier Firm: AI, Learning & Workforce Design | Microsoft, Databricks

In this session from Valence's 2026 AI & The Workforce Summit, senior HR and workforce leaders from Microsoft and Databricks share what they are learning from operating at the frontier of AI adoption — as both builders of AI tools and as their own most demanding users. The conversation covers the Frontier Firm framework, what it means to be an "agent boss," how consumer-grade AI expectations are reshaping enterprise experience design, where the time savings from AI actually go, how organizations can preserve and deepen learning in an automated world, how AI is being used to scale culture at speed, and how to build workforce roadmaps when the pace of change outstrips traditional planning horizons.

Neha Parikh Shah

Director, Workforce AI & Org Strategy

Amy Reichanadter

Chief People Officer

Prasad Setty

Former VP of People Analytics

Key Takeaways

  • The Frontier Firm is AI-powered but human-led: Microsoft's research framework positions AI as a means to achieve outcomes across customer engagement, product innovation, process design, and employee experience — with human judgment at the center of every consequential decision. The question is not whether to use AI, but how to apply it to the outcomes that matter most.
  • Being a good agent boss requires clarity about goals, outcomes, and success criteria — but not the same emotional intelligence as managing people: Managing AI agents draws on some of the same skills as managing people — goal-setting, structuring work, defining success — but does not yet require the social and emotional intelligence that makes humans effective managers of other humans. This distinction matters for how organizations design leadership development.
  • Employees expect consumer-grade experiences at work — and AI makes that possible: The design principles that make Shopify, AirPods, and ChatGPT delightful — simple, fast, frictionless — are the same principles that should govern enterprise AI deployments. Amy's guiding rule for her team: if you wouldn't do it in your personal life, don't create it at work.
  • Time saved by AI goes back to the organization's dominant value, not to employees: At Databricks, a high-growth results-focused company, time savings from AI go directly into higher-impact work — not back to individuals as free time. Where time savings go is organization-dependent and should be discussed explicitly, not assumed.
  • AI is automating rote work — not the work that actually created learning: The tasks AI handles most effectively are codifiable, low-stakes, and routine — which were never the primary source of professional growth. The learning that matters most comes from mentorship, judgment under uncertainty, edge-case recognition, and reviewing AI's work critically. Organizations must actively design for these learning pathways, especially at entry level.
  • Governance, clarity, trust, and people-first design are the four enduring constants: As AI changes everything else, Neha identifies four elements that must remain constant: responsible AI governance (privacy, security, accountability), leadership clarity about direction even amid uncertainty, trust between employees and leaders, and proactive design of new roles and career paths rather than waiting for AI to create them automatically.

Full Transcript

The Frontier Firm: An AI-Powered, Human-Led Framework

[00:00:00]

Prasad: I'm excited for this conversation because your organizations are at the forefront. You folks are at the frontier of building a lot of these AI tools that are then going out into workforces and to society. In some senses, you're also customer zero for all of this work — it has to work in your organizations before it works anywhere else. Neha, you published a fabulous report called the Frontier Firm. For this audience, could you share your key messages? Maybe a couple that are really pertinent to HR leaders?

[00:00:49]

Neha: A lot of it echoes what we've been talking about today — this idea of being AI-powered but human-led. The notion of the human, and the role of human judgment, is a very important one. When we're thinking about the Frontier Firm, we're really thinking about how do we apply AI through the outcomes we're trying to achieve and the ways we're trying to achieve them. That includes customer engagement — how do we improve that? That includes the products we're building, through innovation, through processes, and for our employees. In all of those pieces, we're thinking about how can we apply AI to move things forward.

▶ What Is the Frontier Firm? Microsoft's Framework for AI-Powered, Human-Led Organizations

The Frontier Firm is a Microsoft research framework for how leading organizations apply AI across the full scope of their operations — customer engagement, product innovation, process design, and employee experience — while keeping human judgment at the center. The concept, developed by Microsoft's workforce research team, positions AI not as a replacement for human decision-making but as a means to achieve outcomes more effectively. Organizations that successfully become Frontier Firms use AI to move faster, not to remove humans from consequential decisions.

What Is an Agent Boss — and How Do You Become One?

[00:01:32]

Prasad: One of the things you mentioned was this notion of agent bosses. What does that look like, and how do people become good agent bosses?

[00:01:49]

Neha: We've been talking about this a lot on my team recently. In some respects, it's very similar to how we think about managing people. In some respects, it's very much not. Where it is similar: it's about being very clear and identifying what your goals are, where you're trying to get to, how you want to structure work, what outcomes you're looking for, and how you'll define success. Where it diverges from managing people: it doesn't — at least not yet — involve the social and emotional intelligence that really drives what makes us good managers of humans.

▶ What Is an Agent Boss? How Organizations Are Developing AI Management Skills

An agent boss is a leader or employee who effectively directs and manages AI agents to accomplish work — a skill set that is rapidly becoming as important as traditional people management. According to Microsoft's workforce research, effective agent bosses share competencies with effective people managers: clear goal-setting, structured work design, and well-defined success criteria. However, agent bossing does not yet require the social and emotional intelligence that defines great human leadership. Organizations building agent boss capability need training programs that develop both sets of skills distinctly.

Bringing Consumer-Grade AI Experiences Into the Enterprise

[00:02:39]

Prasad: Amy, employees in their personal lives get used to consumer-grade tools — they don't want high-friction systems at work. How do you respond to employees who say they want the same seamless interaction they get with Claude, Gemini, or ChatGPT?

[00:03:18]

Amy: This is probably what I'm most passionate about. In our personal lives, we've gotten used to a one-click experience — Shopify if you want to buy something, AirPods that are working within 30 seconds. And then we go to work and we expect people to put in a ticket and wait 24 hours for a response. The way we're thinking about it: underneath all of those consumer experiences, they're simple, delightful, fast. Those are the same design principles we want at work.

Nobody wants to do the old ways anymore. And we shouldn't think about AI as 'take old, outdated ways of working and automate them.' We want to rethink the way people experience this entirely. The way I guide my team: if it isn't something you would do in your personal life, please don't create it at work.

If a new hire is coming in and they'd naturally go to Claude or ChatGPT to ask a question, we should not be asking them to figure out who in the organization to ask or which ticket to file. We need a one-stop shop for them to get the information they need. That needs to happen across every way we work — to bring those two worlds together for people.

▶ How Enterprise AI Experience Design Should Mirror Consumer Apps

Employees who experience seamless, one-click consumer AI tools — ChatGPT, Shopify, AirPods — arrive at work expecting the same. Amy, Chief People Officer at Databricks, frames this as a design imperative: the same principles that make consumer AI delightful (simple, fast, frictionless) should govern enterprise AI deployments. Her rule for her team is direct: if you wouldn't use it in your personal life, don't build it for work. For new hires especially, the gap between consumer AI experience and enterprise systems is a friction point that erodes adoption, trust, and productivity from day one.

[00:04:39]

Prasad: I heard you tested replacing long emails with AI-generated podcasts. What were you trying to do, and what did you learn about what works and scales?

[00:05:00]

Amy: Think about how people consume information now. The first thing I do when I walk my dog is put my AirPods in and listen to a podcast. What I don't want to do is read another 20-paragraph email — I genuinely won't do it.

We decided to take the things we used to put in long newsletters or emails and turn them into a less than five-minute, very conversational podcast that people could listen to while walking their dog or dropping their kids off at school. People absolutely loved it.

What was fun about it: it actually came from a Shark Tank-style competition on my team. The idea came from them. The framework of 'bring the two worlds together' is at the foundation. When we've had to drive substantial, complex change, we try to simplify, make it conversational, make it accessible — so people can consume it in a way and at a time that works for them.

Where Does the Time Saved by AI Actually Go?

[00:06:13]

Prasad: Here's a question for both of you. In my class at Stanford, I talk about two things that afflict all knowledge workers: time poverty and thought poverty. Time poverty because we're always pressed. Thought poverty because we're doing shallow work rather than deep work. Ethan said this morning, 'I can't believe Claude is taking minutes to do work that took me hours.' When AI saves time, where does the surplus go? Is it reducing time poverty? Or is it just adding 15 more things to the list?

[00:07:29]

Amy: It's really going to be organization dependent. I think about organizations on a continuum from very people-focused to very results-focused. At Databricks, we're in a super high-growth environment — on the results-focused end. In my mind, there's no question people aren't getting the time back. It's going to something else that's more impactful, more important to the organization. So you have to think about the framework you're working in and have explicit discussions about where those trade-offs are going to be.

[00:08:10]

Neha: We're seeing it go back into higher-quality work. For example, as engineers are able to do less manual coding, they're starting to think about, 'What's actually the interesting part of my work?' Instead of spending time on debugging — which engineers often cite as the least enjoyable part of their work — they're now thinking about orchestration: how can I think end-to-end about processes of work? Suddenly the time that's being saved isn't going into more rote, routine work. It's going into work that's actually interesting and meaningful, and where they can see the value playing out in the company.

Prasad: Thoughtful reinvestment into higher-quality work that is meaningful to both the individual and the organization.

Neha: That's right.

▶ Where Does AI-Saved Time Actually Go in Large Organizations?

When AI saves time for knowledge workers, where that time goes depends entirely on the organization's culture and priorities. At Databricks, a high-growth, results-focused company, time savings from AI flow directly into higher-impact work — not back to individuals as discretionary time. At Microsoft, engineers freed from debugging are investing time in orchestration and end-to-end process thinking — work that is more meaningful and more organizationally valuable. Leaders should make explicit decisions about where AI-recovered time goes rather than allowing it to simply fill with additional low-value tasks.

Preserving Learning and Developing Judgment in an AI-Assisted World

[00:09:13]

Prasad: If routine work gets automated, where does the learning come from? I've made the argument that automating work with few degrees of freedom might impede learning. How do you structure learning in an AI-aided world?

[00:09:54]

Amy: We're in a learning shift. I hear a lot of fear from people — 'We're going to turn stupid' — and we're not. For those of us who lived through the shift from encyclopedias to the internet, we didn't become dumb because information became more accessible. This is analogous. We're going to shift our skills. Things that used to take a lot of time will become easy. But the capabilities that matter most — choosing between options, using discernment, using judgment — will become more critical learning pathways. We as leaders have to think about that in the design of what we're asking of people now, and what skills we want to develop in them. We need to get rid of the idea that we're all going to lose our capacity to learn because AI has come to the forefront.

[00:10:51]

Neha: The work that AI is doing — at least now — is not the work that was really creating learning. It's the rote work, the routine work, the codifiable work, the low-stakes work. That wasn't the stuff we were really learning and stretching from. Now, if you look at the kind of work people are doing, that's where we're starting to see people stretch.

So it's about HR and business leaders working together to strategically craft ways for that learning to occur — to build that judgment. Research from medicine and law firms shows that human judgment develops through identifying edge cases. How can we build understanding of those edge cases? Through mentorship. Through orchestration. Through code reviews with engineers. Through reviewing the work that AI is doing to make sure it's actually doing what it's supposed to. Using that review process to build the skill of 'what does good look like?' and 'where are those edge cases we don't see all the time?' That has to be a shift in how we redesign skilling — especially at the entry level.

▶ How Organizations Can Preserve Deep Learning as AI Automates Routine Work

AI is primarily automating codifiable, low-stakes, routine work — which was never the primary source of professional growth or learning. The work that actually builds judgment — edge-case recognition, mentorship, critical review of AI outputs, end-to-end process thinking — remains human. According to Microsoft's workforce research, organizations need to actively redesign learning pathways around these higher-order skills, particularly for entry-level employees whose traditional on-ramps to judgment-building (doing rote tasks and gradually taking on complexity) are being compressed by AI automation.

Prasad: Anthropic just published a report last week on skill atrophy in coders who became cognitively dependent on Claude. I appreciate where both of you are headed in terms of engineering those learning environments intentionally.

Using AI to Scale Culture: The Databricks Equity Bot Story

[00:13:37]

Prasad: How are you deploying AI to scale your culture? Microsoft is at the forefront of AI. Databricks has a strong AI foundation and has grown really fast. Are you deploying AI in any way to scale culture — and if so, how?

[00:13:37]

Amy: Culture is a set of behavioral norms. At the intersection of AI and culture, it's really about making sure you're bringing whatever your framework is to life for employees. Technology can help support that — but we can't abdicate culture to artificial intelligence. It's not going to work that way.

We've mainly been using AI in ways that help support the kind of experience we want to provide for employees — which is a part of how they experience their work every day, and ultimately that leads to culture. And it's often been at the intersection of where we need to drive the biggest impact.

Here's an example. About a year ago, employees wanted liquidity. We decided to do a tender offer and buy back RSUs. In typical Databricks fashion, we decided to do in six weeks what other companies do in six months. We had two equity people and 10,000 people participating. The only way through that was a scalable solution — so we built an equity bot that answered 4,000 questions from employees.

This was a very emotional time for employees, something really important to them. If each person had had to put in a ticket and wait for their answer to a very personal question, we would have taken something positive and turned it into something genuinely frustrating. What employees came away with was a deeper sense of trust — both because of the experience we created and because the tech worked.

Our CEO — a very skeptical founder — decided to test the bot himself. He typed in his own question about his own circumstance, and he got the right answer. He immediately sent a company-wide email: 'Use the equity bot, it's awesome.' Getting that buy-in from the top helped people feel like they were really being taken care of, even though we were using tech at the forefront of getting them through a complicated process.

▶ How Databricks Used an AI Bot to Handle 4,000 Sensitive Employee Questions During a Tender Offer

When Databricks ran a company-wide tender offer with 10,000 participants and only two equity team members, they built an AI equity bot that answered 4,000 employee questions about a highly personal and emotionally significant financial process. The bot handled questions that would have otherwise required tickets and days-long waits. The outcome: employees reported a deeper sense of trust, and the company's CEO — a skeptical founder — publicly endorsed the tool after testing it himself. This case illustrates how AI can scale culturally sensitive employee experience without sacrificing the sense of individual care.

[00:16:47]

Neha: Culture is hugely important in creating the context for AI to be successful. We've been building a culture of learning for about 10 years — building that culture of experimentation and, critically, building incentives for managed risk-taking. If everyone is afraid of what happens when they experiment, there's not going to be much of it. How do we actually move that forward?

What we've found is that creating a strong sense of clarity — where exactly are we going with AI, what are we trying to achieve — and putting that together with the right culture and incentives has helped us move forward. When you take a team with a strong base of AI use, reward experimentation, and give them a shared understanding of where they're going, you get faster feature development than we've ever had before.

Building Workforce Roadmaps When the Pace of Change Outstrips Planning

[00:18:03]

Prasad: Since changes are happening so fast, how do you build roadmaps for what this means for your workforce and organizationally?

[00:18:32]

Amy: For us, it's really about agility now. We've literally stopped headcount planning outside of go-to-market functions. It's a very interesting world to work in when the expectation is to hire 4,000 or 5,000 people but there's no plan. The reason is that things are changing so quickly — for every headcount we're going to hire, we want that thought process to have happened where we ask: could this be automated, or do we need a person to do this work?

The intersection of understanding what the business needs are, what the technology can do, and then being as thoughtful as possible about where that's going — that's really where the art of this is now. People have to get comfortable with less certainty during this time, and understand how to marry headcount needs versus work to be produced. It's a little art and a little science these days.

[00:19:41]

Neha: What is going to have to be persistent is governance — having a good understanding of responsible AI, privacy, and security. That's going to be enduring.

There's a sense today that 'if I say anything, it's going to put me in jail, so I need to say nothing.' But that vacuum creates more problems than it solves. It also limits the trust that needs to be in place between employees and leaders. How do we continue to build that trust? That's going to matter as we move forward.

And probably the most important piece is putting people first. Employees need to know that we are actively designing roles to be interesting, challenging, and exciting. Research suggests that roughly 40% to 60% of today's roles didn't exist before 1940 — and the only way to maintain employment levels is to think about what those new roles are going to look like and how we're going to design them. AI is not just going to make them appear. We're going to have to actively seek them out, in partnership with HR and business leaders. Career paths are totally changing. How do we make sure people feel like they have a place to grow into?

Even while there is continual change, making sure that governance, clarity, trust, and people-first design are all in place — and that everyone knows they're advancing — is going to be really important.

▶ Four Enduring Constants for HR Leaders as AI Reshapes the Workforce

As AI changes every other aspect of organizational life, Microsoft's workforce research identifies four elements that must remain constant. First, responsible AI governance — protecting privacy, security, and accountability. Second, leadership clarity about organizational direction, even when the destination itself keeps shifting. Third, active trust-building between employees and leaders, recognizing that communication vacuums create more anxiety than honest uncertainty. Fourth, proactive people-first design: actively creating new roles and career paths rather than waiting for AI to generate them automatically. These four elements provide organizational stability during continuous technological change.

Prasad: Wonderful. Thank you both, Amy and Neha.

How Gilead Sciences Is Scaling AI Coaching to All Employees | Kerry O'Keeffe, Gilead

In this session from Valence's 2026 AI & The Workforce Summit, Kerry O'Keeffe, Gilead Sciences' head of employee growth and learning enablement, shares the company's journey to embed AI coaching at the heart of a new enterprise-wide growth philosophy. Gilead is in active pilot — 1,000 employees across approximately six to eight weeks — with plans to roll out Nadia to all people leaders and then the full workforce by June or July 2025. The conversation covers what Growth at Gilead means and why it was built, early pilot findings including immediate C-suite demand, how Gilead is managing the challenge of helping employees distinguish coaching AI from answer-giving AI tools like Copilot, and practical advice for HR leaders navigating AI adoption at scale in complex organizations.

Kerry O'Keeffe

Senior Director, Global Learning & Development

Key Takeaways

  • AI coaching is the first concrete step in Gilead's enterprise growth transformation: Growth at Gilead is a multi-pillar framework designed to personalize employee development at scale — amplifying individual impact, empowering people leaders, preparing the organization for the future, and integrating fragmented systems into a unified growth hub. Nadia is the foundational first move in that transformation, chosen because it can help employees understand what growth means, where they are, and what they need to improve — immediately and personally.
  • Pilots that get users into the tool immediately drive the strongest early results: Gilead's onboarding approach — getting pilot participants directly into a Nadia session within the first ten minutes — generated immediate aha moments. Some participants lost track of time mid-session because the conversations were so engaging. The lesson: adoption is an experience design problem, not a communication problem.
  • More than 30% of Gilead's C-suite proactively requested early access to Nadia: Rather than waiting to be told about the tool, senior executives asked to be first adopters. This level of top-down enthusiasm — in a decentralized learning organization — is being activated deliberately as a function-by-function change management lever, with leaders in each function role-modeling the behavior they want to see from their teams.
  • Helping employees understand coaching AI vs. answer-giving AI is a critical and underappreciated adoption challenge: Employees familiar with Copilot — which delivers fast, direct answers — sometimes approach Nadia with the same expectation. Gilead is actively working to help its workforce understand that AI coaching is designed to facilitate thinking, not provide answers, and that the two tools serve fundamentally different purposes. This distinction requires deliberate, repeated communication.
  • IT and tech partners need to be brought into the AI coaching value conversation — on their terms: IT teams are trained to measure tool value through usage frequency and return rates. AI coaching, by its nature, delivers value differently — in the depth and quality of individual conversations, not in raw click volume. Kerry identifies IT alignment as a critical and often overlooked change management challenge in AI coaching deployments.
  • You can never over-communicate — and functional embedding, not central mandate, is how AI coaching sticks: Gilead's decentralized learning structure means adoption looks different in every function. Kerry's approach is to involve function-level learning and development partners directly in change management planning, giving them ownership over how Nadia is embedded in their specific context rather than pushing a one-size-fits-all rollout.

Full Transcript

Growth at Gilead: The Strategic Framework Behind the AI Coaching Decision

[00:00:01.000]

Sara: I am really excited to be here. Kerry and I chat nearly every day — but this conversation is a little different, where we get to share the journey that Gilead has been on. Before we get into that journey, I'd love to have Kerry share a little more about her role and day-to-day at Gilead.

[00:00:31.079]

Kerry: When I saw that question, the first thing that came to mind was chaos. I look after employee growth for Gilead — every single employee, and how we help them grow and develop. I also look after learning enablement globally. My team supports from design right through to delivery and ongoing sustainment. I'm in the fortunate position of really connecting the dots and seeing the interconnectedness of everything we do — which sometimes creates a bit of chaos.

[00:01:09.400]

Sara: One of the pieces that you and the team have been working really hard on is Growth at Gilead. Can you share more about what it is and why it's so important?

[00:01:22.159]

Kerry: We're very much at the early stages. This is something we're literally about to launch to our full employee population over the next couple of months. We've talked a lot about AI — where it is, where it's going — and we're now at that inflection point where we can truly make changes we've always wanted to make. I've always wanted to personalize learning. I've always wanted it to be like your Apple Watch — where you're actually tracking your own growth easily, every day, in whatever activities you're doing. We're now at the place where we can get there.

Our pillars: we want to amplify individual impact. We want every employee to understand what growth means — both in the near term in their role, and in terms of their performance. We believe firmly that even if we get employees to grow just 1%, the impact on how we innovate and how we reach our long-term goals — 10 transformative therapies — is enormous.

We've also been on a journey over the last couple of years to amplify people leader accountability. Our people leaders are time-poor, and spans of control are increasing. We found we had many people leaders with just one direct report, so we've been making changes. We want to take leaders on that journey around owning their own growth first — so they experience it for themselves — and then help them understand their role in facilitating growth for others.

Being future ready is another pillar — our people, our systems, our performance management, our annual talent programs. And a major pain point at Gilead is integrating our underlying systems. People spend too long trying to access growth opportunities. The growth hub, on the left-hand side of our framework, is where we ultimately want to get.

The first step on that journey — and we're delighted to be taking it — is Nadia. We see this as a game changer as we start this journey: helping people understand growth, where they are, what they're good at, what they need to get better at. We're in pilot right now: a thousand people over about six to eight weeks. Then we roll out to people leaders first, then all employees, with an aim of having that out there by June or July of this year.

▶ How Gilead Sciences Built Its AI Coaching Strategy Around a New Growth Philosophy

Gilead Sciences' Growth at Gilead framework is a multi-pillar enterprise transformation designed to personalize employee development at scale — amplifying individual impact, empowering people leaders, building future readiness, and integrating fragmented learning systems into a unified growth hub. Nadia, Valence's AI coaching platform, is the first and foundational move in that transformation. Gilead's head of employee growth, Kerry O'Keeffe, describes the goal as making growth as intuitive and continuous as tracking health metrics on a smartwatch — something AI now makes possible for the first time at enterprise scale. The full workforce rollout is targeted for mid-2025.

Early Pilot Results: What Gilead Is Learning in the First Weeks

[00:05:01.259]

Sara: We call Nadia the 'Growth at Gilead AI Coach.' The idea is to bring Nadia to people leaders first, then to the rest of the organization, so leaders can role model the behavior. You mentioned we just launched in January. How is the pilot going? What are the early learnings?

[00:05:42.319]

Kerry: We started small with the first pilot, and we're about to start the next one next week — over 300 people. Even in the initial onboarding, the feedback from Valence was: get people into Nadia straight away in the onboarding session. Absolutely the right thing. We gave people about ten minutes to play around, and immediately people were saying, 'I've got aha moments already.' Some people didn't come back to the session because they were still talking with Nadia. They'd lost track of time. They were going, 'I was actually in there progressing the conversation.' The early feedback is really positive.

We're also seeing some nuanced challenges. People who are already very comfortable with coaching are interacting with Nadia more naturally. We're learning how to help those who don't fully understand what coaching is — and who may not yet understand that Nadia isn't there to just give them an answer. It's there to facilitate their thinking. But you can see the aha moments already: participants like that it was getting to know them, that it felt personal immediately, that it was already starting to give them insight.

▶ What Gilead Sciences Learned in the First Weeks of Its AI Coaching Pilot

Gilead Sciences launched its Nadia AI coaching pilot with a deliberate onboarding design: get participants into a live Nadia session within the first ten minutes. The result was immediate — some participants lost track of time mid-conversation and did not return to the group session, because the coaching interaction was so engaging. Early feedback highlighted the personal quality of the experience: participants felt Nadia was getting to know them and providing genuine insight quickly. One emerging challenge: employees unfamiliar with coaching as a practice need additional support to understand that AI coaching is designed to facilitate thinking, not provide direct answers.

Earning C-Suite Buy-In and Functional Embedding

[00:07:13.939]

Sara: In addition to the pilot itself, you spend a lot of time in deep conversations with function leaders and others to bring them along the journey. What reflections do you have on that work, and why is it so important?

[00:07:40.980]

Kerry: You can never over-communicate. It's really important that we get our HR business partners and our HR community understanding Nadia and using it personally — but also thinking about the use cases for the people leader population and individual contributors. We're talking to the C-suite. More than 30% of our C-suite have said, 'I want to try Nadia out. I want to be one of the first adopters.' That's going to be a game changer in terms of how we move across functions. People listen to the leaders of their function.

We have a decentralized learning model — learning and development sits in every function across Gilead. So bringing function-level L&D partners in, involving them in the change management approach, and asking them: how do we really embed this and drive adoption in your function? We recognize it's not one size fits all. The approach for manufacturing, for example, is going to look different from a corporate function.

We're also trying to be very intentional about making people feel invited — not compliant. It isn't about compliance. How do we create an environment where people feel safe to practice and experiment? And we're hearing that clearly: people want to know what's happening with their data, that it's safe, and that they can experiment freely. Doubling down on that trust-building is important.

▶ How Gilead Sciences Is Using C-Suite Advocacy to Drive AI Coaching Adoption

More than 30% of Gilead Sciences' C-suite proactively requested early access to Nadia — before any organizational mandate — making them among the company's first AI coaching adopters. Kerry, Gilead's head of employee growth, is deliberately channeling this top-down enthusiasm through the company's decentralized learning structure, activating function-level leaders as local change agents rather than pushing a single centralized rollout. The approach recognizes that AI coaching adoption looks different in each function, and that invitation — not compliance — is what creates lasting behavior change.

Practical Advice for HR Leaders Scaling AI Coaching Adoption

[00:09:28.559]

Sara: You've shared so many nuggets about bringing people along the journey — functional buy-in, leadership role modeling. Is there any other advice for HR leaders in the room on a similar journey?

[00:09:59.000]

Kerry: First, help people understand how they can use AI coaching in their specific environment — especially those who are new to it. How do you help them understand the ways to use it? What you really want to do is drive experimentation. That's what we're trying to do.

Second, educate leaders and particularly IT teams. The traditional model of measuring tool value is usage — how many times did you go back, how many logins. But as Scott was describing earlier, that's not the right lens for AI coaching. IT partners need to come away understanding that Nadia is adding value in ways their dashboards don't currently capture. We have work to do there.

Third, help people understand how all these AI tools coexist. We're using Copilot right now, and some people are getting very comfortable with it — it's more instant in delivering an answer. How do we help people understand that Nadia and Copilot serve different purposes? The world we're going to is a range of AI tools supporting you — and it's not about getting comfortable with just one. Helping people understand the use cases for each, and how they could work together, is something we're actively working through.

▶ Three Practical Lessons from Gilead Sciences' AI Coaching Rollout

Kerry, Gilead Sciences' head of employee growth, identifies three lessons for HR leaders scaling AI coaching adoption. First, help employees understand how to use AI coaching in their specific context — drive experimentation rather than instruction. Second, bring IT partners into the value conversation on their terms: traditional usage metrics do not capture the value of AI coaching, and IT teams need a new framework for assessing it. Third, proactively address the coexistence question: employees using answer-giving AI tools like Copilot need clear guidance on why AI coaching serves a different and complementary purpose.

[00:11:49.720]

Sara: Brilliant. Kerry, thank you so much for sharing the experiences and the journey that we're on at Gilead. I know folks are on similar journeys. Excited to continue the conversation.

[00:12:07.299]

Kerry: I really want to call out — and I know for those of you working with Valence — we've had many vendors. This is an awesome partnership. As we go on a journey that is very new and very uncertain for us, Valence is with us every step of the way. I have 100% confidence that they've got our back and are giving us the best advice. I want to call that out because I'm sure you have experiences with vendors who say all these things and then don't deliver. This has truly been a fantastic partnership.

Sara: Thank you, Kerry. Likewise.

How Gilead Sciences Is Scaling AI Coaching to All Employees | Kerry O'Keeffe, Gilead

In this session from Valence's 2026 AI & The Workforce Summit, Kerry O'Keeffe, Gilead Sciences' head of employee growth and learning enablement, shares the company's journey to embed AI coaching at the heart of a new enterprise-wide growth philosophy. Gilead is in active pilot — 1,000 employees across approximately six to eight weeks — with plans to roll out Nadia to all people leaders and then the full workforce by June or July 2025. The conversation covers what Growth at Gilead means and why it was built, early pilot findings including immediate C-suite demand, how Gilead is managing the challenge of helping employees distinguish coaching AI from answer-giving AI tools like Copilot, and practical advice for HR leaders navigating AI adoption at scale in complex organizations.

Kerry O'Keeffe

Senior Director, Global Learning & Development


AI Coaching & Workforce Transition | Parker Mitchell

In the opening keynote of Valence's third annual AI & the Workforce Summit, CEO Parker Mitchell and Head of Engineering Ana Martinez make the case that 2026 marks the pivotal year in which enterprise AI shifts from something employees go to — to something that comes to them, proactively. Using a live demo of Nadia, Valence's AI coaching platform, they show how AI coaching can surface personalized insights about team dynamics, onboarding risks, and high-stakes meeting prep — helping managers lead more effectively from day one.

Parker Mitchell

CEO

Ana Martinez

VP of Engineering

Key Points

Key Takeaways

  • 2026 is the inflection year for enterprise AI adoption: The gap between AI's frontier capabilities and frontline adoption is closing. Leaders who invest now in bringing personalized AI to their employees will define the performance trajectories of their organizations.
  • AI coaching shifts from reactive to proactive: Rather than waiting for employees to seek guidance, the next generation of AI coaching proactively surfaces insights — flagging burnout risks, friction points, and onboarding gaps before they become problems.
  • Personalization is the key differentiator in AI coaching: Effective AI coaching requires deep knowledge of the individual: their work style, communication preferences, personality profile, and calendar context. Generic AI tools cannot replicate this level of personalized support.
  • Calendar data reveals the real organizational chart: Traditional HR systems often fail to reflect how people actually work. Nadia uses calendar and meeting data to identify real team structures and surface relevant coaching at the right moments.
  • AI coaching accelerates manager effectiveness from day one: New managers typically spend months building the context they need to lead well. AI coaching compresses this learning curve by surfacing team dynamics, personality insights, and burnout signals immediately — not after quarters of observation.
  • AI coaching complements human connection — it doesn't replace it: The goal of AI coaching at Valence is to augment the human manager, giving them the awareness and tools to have better conversations — not to automate relationships or remove the human element from leadership development.

Full Transcript

Why 2026 Is the Defining Year for Workforce AI Adoption

Parker Mitchell, CEO, Valence

[00:00:03]

Parker Mitchell: It is such an honor to welcome everyone here today, to look out over the sea of faces. I have been torn this morning between wanting to welcome so many people who I know and I've had a chance to work with and partner with, and I wanted to spend the time doing that. And, we'll see in a moment how our AI coach, Nadia, has foreseen my behavior this morning, the morning of Summit. I've spent the last few minutes doing the final edits to slides, to make sure that this presentation is going to go well.

Everyone here is busy. Everyone here has a million things that you could be doing today. So, why did you take the time to spend a day wrestling with this question? We think that the question that is most important — the question that five years from now we will look back and say we wish we spent even more time exploring and trying to answer — is: how well did we help our workforce transition to this new human-plus-AI era?

We think that's probably the defining question of 2026, and 2026 is the year in which the trajectories will start to get hardened. When we first held this summit — this is our third annual AI & the Workforce Summit — it was a very different AI world. We were still in early days. We were still experimenting. People would tell their friends that they had ChatGPT write a sonnet for their partner's birthday, and that was the big, exciting thing.

Fast-forward two years, I think we all know that AI is going to change work, it is going to change jobs, it is going to change the companies that succeed and fail. And we are all wrestling with how do we bring our workforces along in this transition?

Bridging the Frontier and the Front Line of AI

Parker Mitchell: One of the things we're excited about with these summits is that we bring together two different sets of worlds, two different sets of folks experiencing AI in very different ways. We get a chance to hear from people who are at the frontier — at the frontier of how AI is scaling and how AI is growing in its capabilities — as well as people who are on the front line of how AI is being adopted, how it is being rolled out, and how it is changing work.

Our very first Summit, we had a chance to hear from Geoffrey Hinton, on the frontier side of things — the godfather of AI, the Nobel Laureate. And the thing that struck me most about my conversations with him was how he said, despite being as close to this technology as anyone, he underestimated how quickly the capabilities would grow.

My best friend described working with LLMs using an analogy from Rain Man. He said, 'I've been using these LLMs for a year and a half now, and I never know — am I going to get 246 toothpicks? Which is like, how did you figure that out? Or am I going to get "I can't cross the street"?' That's how we feel about AI. You never know if you're going to get a miraculous "wow" moment, or if it comes back with three items when you asked for four — and insists there are four.

At Valence, we've been really focused on this idea of personalization, and what we've seen is that 2026 is going to be the year, in enterprise, where AI is going to come to you. AI is going to come to your workers where they are, and it's going to try to understand them — instead of them going to AI and trying to understand AI.

The Origins of Valence: Engineering, Leadership, and Personalization at Scale

Parker Mitchell: I studied engineering. I did a minor in cognitive science. I was fascinated by how the brain works, and how the brain works in neural nets. My favorite course was Systems Design 422 — Machine Intelligence. We had a chance to build very small back-propagating neural nets — the foundational technology for LLMs — but they were in the order of 200 to 300 nodes, instead of 200 to 300 trillion nodes. That was the AI winter. But that fascination and curiosity was always there.

I founded an organization called Engineers Without Borders. Most of my job, as that organization grew from 100 to 10,000 people, was about culture — about how to bring people along in this movement. I got some very clear feedback pretty early on that, as an engineer, my EQ was a little less than my IQ. And so I realized that if I was going to help this organization achieve its mission, I needed to become a student of leadership.

The thing I very quickly realized is that what worked for me might not work for someone else. What I'd learned on my own journey — I sometimes had to set that aside, start from the experience of the person I was working with, understand their journey, and guide them from there. And so this idea of personalization — of how do we do this at scale? — was always on our mind.

In 2016, a New York Times article featured research from Prasad Setty's team at Google — Setty founded Google's people analytics function. Their research was unequivocal: when teams come together, the driver of performance is not who is on the team, but how the team works together. That concept became the founding idea of Valence.

Building Nadia: Going All-In on AI-First Enterprise Coaching

Parker Mitchell: For five years, we had tools that really tried to help leaders and managers understand themselves, understand their teams, understand the dynamics of their teams. And it was exciting, but we always knew there was something more. We were always waiting for the technology to catch up to this vision of personalization at scale.

As soon as the APIs, as soon as the publicly available large language models to do this reasoning emerged, we quickly knew that we had two paths — and really only one path was the right path.

The first path is to say, this is going to be powerful — let's add it on to our existing SaaS software system. And the second path, the path we quickly decided was the only path that made sense, was to say: this technology is going to be the most transformative shift. We have to reinvent ourselves as an AI-first company. We need to go find the best AI leaders in the world, bring them together, and begin to solve some of the most challenging problems at work.

In 2023, we launched Nadia. And when we kicked it off, the question we kept getting asked was: would people even talk to an AI coach? Obviously, that question has been answered unequivocally. All the research out there says people are more comfortable, in some cases, talking to AI than they are to humans.

I have a very interesting relationship with Nadia — a creation I'm part of. On the one hand, I'm always very excited about the capabilities she has. And on the other hand, I know that the version I try today is the worst version of Nadia I will ever use. And our mission is simply to accelerate that growth.

Proactive AI Coaching: From Individual Coach to Talent Platform

Parker Mitchell: Last year, we partnered with people in the HR function and asked: what are the things you're wrestling with? There was really this belief that we could reinvent a talent cycle — a talent strategy, a talent management and performance management system that probably most of you feel is a little bit outdated. And so there was this chance to reinvent it.

What we found over the course of 12 months is that Nadia has moved from just an AI coach offered to individuals, to beginning to be almost a new talent management platform — one that should be able to bring the visions you have to life, in a much more personal way, that makes it easier and faster and better for your employees to do those performance processes that are core to driving performance.

I want to say a deep thank-you to everyone who brought a new technology and a new vendor into your organization. We know you take risks bringing in new technology, and our mission is to help you reinvent that talent process you have a vision for, as quickly as possible.

Live Demo: Nadia's Personalized AI Coaching in Action

Calendar Intelligence and Proactive Insights

Parker Mitchell: Imagine a coach who, all they're doing — 24/7 — is focused on understanding you and your world and how to help you. Imagine they had access to your systems, access to your calendar. Imagine they got a chance to know each of the different people you work with, and what their work styles are like, and what their personality is like, and what some of the dynamics between you or the group might be. Imagine how much better that coaching would be.

What Nadia has been doing is looking at my calendar and surfacing insights. She's identified what we'd call high-stakes meetings, and she's saying: 'Ana and you have very different definitions of ready.' She talks about what's coming up. I have an emergent style. But that is a good thing to know. These kinds of insights — I'm a busy executive. I don't have time to think through all the ramifications of things. So if Nadia's doing this and bringing it to me, it's an extraordinarily helpful experience.

Building Personalized Profiles for Every Employee

Parker Mitchell: At the heart of this new, highly personalized experience is the profile that Nadia's building of you. There's a private profile just for me — it has information about my organization, my team's goals, documents I've uploaded. We also support a range of different work style and personality frameworks. If you like DiSC, you can connect and upload your DiSC profiles. We have one called Perspective, that hundreds of thousands of people in the corporate world have used.

The coaching for someone who is pressure-prompted is going to be utterly different from someone who starts early and builds momentum. You need very different coaching on that. And what we discovered quickly is people actually want others to know about their work styles. They want to be able to say, 'The best way to work with me is X.' So Nadia can create a collaboration profile that you get full control over — and then you get to reveal it, so other Nadias can understand it and give specific coaching on it.

AI Coaching for Team Collaboration and Friction Detection

Parker Mitchell: Where do you find the source of truth about the teams that you are on? HR systems, we've discovered, are not the source of truth. The actual source of truth is really your calendar information. Nadia goes through and asks: who are the people you have the most meetings with? And she can begin to assemble them into teams based on the types of recurring meetings you have.
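The calendar-first idea Parker describes — inferring real teams from recurring-meeting co-attendance rather than from the HR system — can be sketched roughly as follows. This is an illustrative sketch only; the names, threshold, and data model are hypothetical, not Valence's implementation:

```python
from collections import Counter
from dataclasses import dataclass


@dataclass(frozen=True)
class Meeting:
    attendees: frozenset  # colleagues on the invite, excluding the user
    recurring: bool       # True for weekly standups, 1:1s, etc.


def infer_team(meetings, min_shared=3):
    """Guess the user's working team from recurring-meeting co-attendance.

    Counts how often each colleague appears in the user's recurring
    meetings and keeps anyone seen at least `min_shared` times.
    """
    counts = Counter()
    for m in meetings:
        if m.recurring:
            counts.update(m.attendees)
    return {person for person, n in counts.items() if n >= min_shared}
```

One-off meetings are ignored, so a visitor who appears once is not pulled into the team; raising `min_shared` trades recall for precision.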

I'm going to show you what it looks like when I ask Nadia: 'Kira and I will be organizing a summit together. We're trying to bring together 200 or so folks, speakers from across America, and it's in six weeks. Given what you know about the two of us, what are some friction points that might emerge as we work together on this?'

This isn't a simple question-and-answer GPT wrapper. There's a question here, and our intelligence layer and memory layer is going through, trying to understand: what is the situation? What are the pieces of information, the dots I need to connect? It's looking at profile information, calendar data, chat history — all of this is scaffolded to each step of the conversation to provide a better suggestion. We're going from general guidance to person-specific guidance.
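The scaffolding Parker outlines — pulling profile, calendar, and chat-history signals into each step of the conversation — might look like the following in miniature. All names here are hypothetical illustrations, not Valence's actual pipeline:

```python
def mentioned_people(question, known_people):
    """Naive name spotting: keep any known colleague named in the question."""
    return [p for p in known_people if p.lower() in question.lower()]


def build_context(question, profiles, upcoming_events, chat_history, k=5):
    """Assemble person-specific context for one coaching turn.

    Each source contributes a few relevant items; the bundle would be
    prepended to the model prompt so the guidance is grounded in the
    user's world rather than in generic advice.
    """
    people = mentioned_people(question, profiles)
    return {
        "profiles": {p: profiles[p] for p in people},  # work-style profiles
        "calendar": upcoming_events[:k],               # near-term meetings
        "history": chat_history[-k:],                  # recent conversation
    }
```

The design point is the same one Parker makes: the quality of the answer depends less on the model than on which person-specific "dots" are connected before the model is called.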

Proactive Onboarding Support for New Team Members

Parker Mitchell: Nadia spotted that a new member had been added to our team, and she sent me an email: 'There's a new person who has been added to your team. Maybe that's someone you need to onboard.' She's highlighting what are some of the challenges that might exist with onboarding, especially when you're a busy executive. She's given me a manager alert about investments I need to be making to onboard Ana. We're trying to have Nadia understand your world — and then feed you the things that are going to be most helpful for you as a manager.

AI Coaching for New Manager Onboarding: Ana Martinez's Story

Ana Martinez, Head of Engineering, Valence

[00:27:18]

Ana Martinez: My name is Ana Martinez. I'm the head of engineering at Valence. I've been here for about four months. Before that, I spent five years at Slack. Slack had a reputation of having the best onboarding process in the industry. They did a fantastic job. By the end of my week of onboarding, I felt like I knew the company, its values, culture. I knew how to fix a bug. I even knew where the best coffee in the office was.

But then you're left to your own devices, and I was like, but I don't know how to do my job here. I don't know who's working with me. I don't know who in my team is burning out. I don't know who my colleagues are. Who do I need to be building trust with? And that takes months. We're working with people. People take a while to open up to you. The engineer who is a high performer might be afraid to tell me that things are not going well, because he wants to show that he has everything under control, even though he's interviewing and on the brink of burnout. I might not know what meetings I need to be careful about, what dynamics are in place.

That takes months, and quarters, and many, many conversations. In the meanwhile, you're expected to deliver with your team. You're expected to onboard new members. You're expected to interview people and figure out what personalities are going to work well with your team. And then you finish onboarding, do a great job, and now all of these new humans are under you. There's no onboarding for that.

How Nadia Accelerates Manager Effectiveness

Ana Martinez: That is the past. Now, how do we use Nadia to onboard? In My People, I can quickly add who my engineers are — the people who are going to be working with me — and I can look at their personalities and really understand who I need to be careful delivering feedback to. Who are the people that really prefer having a frank conversation? I'm a very blunt speaker, and it will be great to know who of my engineers will shy away when I'm trying to give them feedback.

As a new manager joining a new team, I want to know who in my team is burning out immediately. In this case, Nadia is telling me: 'Hey, Chris might be carrying too much.' Let me tell you something about Chris. I worked with Chris for the past four months. Chris will never tell me that he's burning out. Chris is a team player, and if I tell Chris, 'Can you do this for me?' Chris will say yes, even if he's working at 2 a.m. and feels like burning out. So, as a new manager, it's fantastic. Now I can see that Chris is at risk of burnout. Not only that, but the insights also say that he becomes hypercritical of himself under pressure. That tells me — blunt Ana — please be gentle with how you talk to him.

I also have the high-stakes meeting prep. This is a meeting with a leadership team where I'm the only introvert in a room of nine, so I never speak. It was really interesting to see this insight, because I thought, oh, this is why I feel like I always want to say something — and by the time I've formed it in my head and practiced it a few times, they've moved on. So it's been really helpful for me to know that I need to get out of my comfort zone and maybe be a little bit more assertive.

The last point I want to share is about John. John has an incredible ability to read people. I learned this through many conversations with him. But having this insight — knowing his EQ is actually quite high — tells me to pay attention to what he's telling me. I have 23 direct reports. I need my leaders to tell me, 'Hey, be careful about this.' Being able to get these insights immediately, and not have to build this level of trust through months or quarters, is fantastic.

This is one of the reasons why I wanted to join Valence. I am a very passionate manager. Managers make mistakes with people, which hurt people. And by the time you learn how to do things, your top performer might have left, or you might have failed to deliver the tough conversations that needed to happen.

Proactive AI Coaching and the Human-Plus-AI Era

Parker Mitchell

[00:33:08]

Parker Mitchell: To bring this back to full circle — this idea of proactivity is the key thing we are weaving into this idea of an AI coach. There are different proactive coaching options you're going to be able to have. You're going to be able to get different types of nudges. But the idea here is a bigger step: to bring our workforces along in this journey.

2026, I think, is going to be one of the most challenging moments for us as leaders as we try to help our workforces navigate this. One of the key things we need to do is try to put the most helpful and powerful and personal and augmentative AI into their hands.

As everyone who has a workforce knows, you're going to get a power law distribution. You're going to get some set of folks who are going to be the pioneers, pushing the boundaries. For the rest of the people, it's going to be a journey to help them build that fluency. And so we are actively trying to work to put this proactive AI into employees' hands — not just to help them perform better, not just to help reduce some of the friction of how people work together, but also to help them make this transition to the new human-plus-AI era.

We wanted to showcase to you some of the things we think are possible at this intersection of the frontier and the front lines. So, thank you.


AI, Work Redesign & Talent Strategy | Diane Gherson & Alan Murray

In this conversation at Valence's 2026 AI & The Workforce Summit, former IBM CHRO Diane Gherson and WSJ Leadership Institute President Alan Murray diagnose why AI adoption consistently succeeds at the product level but stalls inside organizations. Alan brings a unique vantage point: he co-hosted a year-long series of off-the-record CEO roundtables with IBM Chairman Arvind Krishna, and heard the same frustration repeated across every dinner — companies winning with AI externally, failing with it internally. Diane unpacks why, drawing on her experience leading one of the most ambitious AI-driven workforce transformations in corporate history at IBM. Together they make the case that the real barriers are people, culture, and organizational design — not technology — and map what it takes to redesign work for a world where humans and AI agents operate side by side.

Diane Gherson

Former CHRO, IBM; Board Member, Kraft Heinz

Alan Murray

President, The WSJ Leadership Institute, The Wall Street Journal


Full Session Transcript

Why AI Succeeds in Products But Stalls Inside Organizations

[00:00:00]

Alan Murray: I spent a good part of last year hosting a series of six or seven dinners with very large company CEOs — Arvind Krishna was my co-host — about 12 at a time. We did one in Chicago, one in Houston, one in New York, one in London, one in D.C. Off-the-record conversations about what's working with AI and what isn't. And every dinner had kind of the same pattern. Many of the companies were having amazing success integrating AI into the products and services they took to market. But with maybe one or two exceptions, they were all very frustrated about what was going on within their own companies. They saw the huge potential to reinvent internal processes, but they weren't getting the payoff. And within 15 or 20 minutes of each of those conversations, I realized: we're not talking about technology. We're talking about people. We're talking about culture. We're talking about how you organize. What are those CEOs missing?

[00:01:15]

Diane Gherson: Let's start with psychological safety. We heard a lot about the fact that people are pretending they're not using AI when they really are — underreporting because they don't want to look like they're not smart, or because they went to school where using it was viewed as plagiarism. But the real issue is that leaders need to help people understand why they're valued. And somehow that gets lost in the discussion.

Google has done a nice job of this — saying: we value people because you can think critically, because you can collaborate. They have a long list of things that only humans can do, at least at this point. That gives people a sense of courage. And when you pair that with 'we're going to upskill you,' they feel taken care of.

[00:02:07]

Alan Murray: That is not a skill most CEOs have developed.

[00:02:17]

Diane Gherson: Right. It needs to come from the CEO — and that's critical. That's where you get the psychological safety. What we did at IBM was say: we hired you because of your curiosity. We screen for curiosity. So people think, 'Okay, I can do this learning. I can do continuous learning.' That builds psychological safety.

The second thing is trust — and that requires having principles around displacement. I'm sorry, but that's on the table. Don't pretend it's not.

[00:02:59]

Alan Murray: You can't say it's not going to happen.

Diane Gherson: There's going to be displacement. What are your principles around it? At IBM, we said: your job may go away. We have about 9% attrition — those people won't be replaced. But other jobs will go away too over time. That's why it's important to upskill, because there are higher value jobs you could be doing. It wasn't happily received, but people got it. We were honest and straight up about what we saw happening. And I think that's often missing. Companies are silent on the topic — and then employees imagine the worst.

The last thing — which always seems to be missing — is clarity around why we're doing this. Is it to get to market faster because competitors are pulling ahead? Higher customer satisfaction? Allianz, for example, went from claims reimbursement in 21 days down to 4 hours. That's the number one thing an insurance company has to do. When people understand the why, it changes everything.

Why Internal AI Adoption Fails While Product AI Succeeds

Internal AI adoption consistently underperforms relative to AI in customer-facing products and services, according to former IBM CHRO Diane Gherson. The root cause is not technology — it is people, culture, and organizational design. Employees hide their AI use for fear of judgment, leaders fail to articulate what human capabilities they value, and organizations remain silent about displacement. The result: employees imagine the worst and resist. The fix starts with psychological safety, honest communication about displacement, and clarity on the business rationale for AI adoption.

Continuous Learning and the Skills Challenge in an AI-Driven World

[00:04:20]

Alan Murray: One of the questions is: we'll help you into this future that may require you to train for new jobs. But we don't see that future right now. People are afraid of it, and they don't know what to train for. What are the skills people are going to need to survive in this new world?

[00:04:45]

Diane Gherson: That is the sad truth. At IBM, we said 'we're going to upskill you,' and some people did get into higher-banded jobs. But two years later, those jobs were automated too. So you have to keep learning. Ongoing learning, ongoing upskilling — that is the absolute truth. It's not a one-time investment. It's a permanent operating model.

Why Continuous Upskilling — Not One-Time Training — Is the Answer to AI Displacement

Upskilling employees for an AI-driven future is not a one-time initiative — it is a permanent organizational capability. Diane Gherson, former CHRO of IBM, shares that even employees who successfully reskilled into higher-value roles at IBM found those roles automated just two years later. The implication for HR leaders is clear: organizations must build cultures and systems that support continuous, ongoing learning as a baseline operating model, not a periodic response to disruption.

Business-First AI Design: The Alternative to AI-First and People-First Thinking

[00:05:10]

Alan Murray: People think of building an AI-first company. But that's not really the big challenge, is it?

[00:05:16]

Diane Gherson: I'm really scared of that. AI-first means: if AI can do it, give the job to AI. That's optimizing for AI. There's also optimizing for people — which is also a mistake in a lot of cases because you'll fall behind your competitors. But there's something in the middle, which I'd call business-first: optimizing simultaneously for both.

Let me give you an example of where people-first makes sense. Customer service is a burnout job, especially when customers are yelling at you. One big airline I worked with was losing their top customer service people. They put in AI — and AI had the same de-escalation skills as their top people, but it didn't take the abuse personally. Taking it personally caused people to take it home and eventually quit. So that's a case where protecting people was the right call.

But when you take all your entry-level jobs and say, 'AI can do it all, why hire entry-level?' — HR people, what's the problem with that? We're responsible for the talent pipeline. And it's not just whether someone had a job. It's the intrinsic knowledge you learn as an apprentice, the social intelligence you develop by osmosis. You know who to go to for certain questions, who to appeal to if you want to influence something. Political capital. All of these intangibles — you learn them. And you need them to be a great leader.

What Is Business-First AI Design — and Why It Beats AI-First or People-First

Business-first AI design means optimizing simultaneously for human and AI contribution, rather than defaulting to one or the other. Former IBM CHRO Diane Gherson defines AI-first design — assigning every automatable task to AI — as a strategic mistake that destroys the talent pipeline and eliminates the apprenticeship-based learning that develops future leaders. People-first design can leave organizations behind competitors. The business-first approach asks: what outcome do we need, and who — human or AI — is best suited to deliver it?

Redesigning Work for a World of Humans and AI Agents

[00:07:33]

Alan Murray: When I listen to you talk about this, it becomes very clear that we have to redesign the way work is done: what are humans going to do, what do we do to maintain the pipeline, what do we do to develop high-level talent to oversee AI in the future? What does that redesign work look like, and how do you even go about it?

[00:07:57]

Diane Gherson: Work design is something HR has not done for decades. The last time I touched it, it was called sociotechnical systems design, and it was for factories.

[00:08:12]

Alan Murray: This is Frederick Winslow Taylor you're talking about.

[00:08:14]

Diane Gherson: Post that — I'm not that old. But Frederick Winslow Taylor was the last one who did it en masse. He was a systems engineer who standardized everything, every motion. He micro-designed jobs so workers just had to do one little thing over and over again instead of making an entire shoe from cutting the leather.

[00:08:47]

Alan Murray: That era of work design was all about making human beings better machines because they had to be part of the production line. But we are in an age where the machines are going to take care of themselves. We don't need people to be better machines. We need people to be better people.

[00:09:08]

Diane Gherson: But look at all the jobs being created by AI right now: prompt engineers, audit trail people, data labelers — they're all serving the AI. Just like the people who sat at the assembly line twisting the knob. To some extent, we're doing the same thing. Instead of designing work around what AI does, we've got to design work around what we want people doing. That is the biggest challenge facing HR right now.

Why Work Redesign Is HR's Most Urgent — and Most Neglected — Priority

Work design — the discipline of determining who does what in an organization — has not been a serious HR function since the industrial era, according to Diane Gherson. The risk today is that organizations are defaulting to the same mistake: designing jobs around what AI does, rather than what they want people to do. Many AI-era roles (prompt engineers, data labelers, audit trail analysts) simply serve the AI, echoing the assembly-line model. The most important question for CHROs is not 'what can AI automate?' but 'what do we want humans to do, and how do we design work around that?'

Humans and AI Agents Working Side by Side: Real-World Examples

[00:09:48]

Alan Murray: One of the exceptions in my CEO roundtables — Robin Vince at Bank of New York — told me about it on the record afterwards. They now have hundreds of agents with email addresses, names, who log onto the system and work side by side with people. Sometimes people manage the agents. I assume agents sometimes manage people. What does that world look like, and how do you prepare for it?

[00:10:43]

Diane Gherson: It's happening at Kraft Heinz, which is a company here today. They've been using Nadia, but they've also been using an agent working on how to replace dyes with natural ingredients — the whole MAHA movement. At first it was going to be done by an outside firm, but the agent has done a perfectly good job of it. What they do is take the agent's outputs and then test them in the lab. All the scenarios, branching, composition testing, shelf life, appearance — all done by the agent. Then humans validate it in the lab. That's how they work together.

Another example: Athena, working in product development at a consumer company. A team member went on maternity leave, and the group decided the agent should take over that work. They were — friendly with it, I suppose. They liked it. And it wasn't impinging on their sense of agency. I think it's when it starts impinging on their sense of agency that it becomes a problem.

[00:12:09]

Alan Murray: And how do chief human resources officers oversee that?

[00:12:13]

Diane Gherson: Good question. Actually, I think that's a job Nadia should be doing.

[00:12:20]

Alan Murray: A task for Nadia.

[00:12:21]

Diane Gherson: Nadia is everywhere.

[00:12:22]

Alan Murray: Diane, fascinating conversation. Thank you for taking the time.

[00:12:26]

Diane Gherson: Thank you.

AI, Work Redesign & Talent Strategy | Diane Gherson & Alan Murray

In this conversation at Valence's 2026 AI & The Workforce Summit, former IBM CHRO Diane Gherson and WSJ Leadership Institute President Alan Murray diagnose why AI adoption consistently succeeds at the product level but stalls inside organizations. Alan brings a unique vantage point: he co-hosted a year-long series of off-the-record CEO roundtables with IBM Chairman Arvind Krishna, and heard the same frustration repeated across every dinner — companies winning with AI externally, failing with it internally. Diane unpacks why, drawing on her experience leading one of the most ambitious AI-driven workforce transformations in corporate history at IBM. Together they make the case that the real barriers are people, culture, and organizational design — not technology — and map what it takes to redesign work for a world where humans and AI agents operate side by side.

Diane Gherson

Former CHRO, IBM; Board Member, Kraft Heinz

Alan Murray

President, The WSJ Leadership Institute, The Wall Street Journal

Parker Mitchell - The Human + AI Era

For Parker Mitchell, founder of Valence, the question that will determine tomorrow's market leaders: how well did leaders help their workforce transition into a human+AI era?


Melissa Werneck - I Raised My Hand for the Pilot

Melissa Werneck, former global Chief People Officer at Kraft Heinz, believes the only way to lead AI transformation authentically is to join the pilot yourself.


Jennifer Carpenter - When Managers Lead, Teams Follow

At ADI, 65% of employees and 77% of managers have used Nadia. One of their biggest insights from studying adoption: when leaders use AI coaching, adoption among their teams doubles.


Amy Reichanadter - The New Learning Imperative


Tim Hourigan - A 1% Lift = $1.5 Billion at Home Depot

The former CHRO of The Home Depot on how real-time AI coaching can equip thousands of store leaders to handle high-pressure moments in alignment with culture and SOP and improve consistency, engagement, and business outcomes at scale.


Ethan Mollick - HR Is R&D Now

Ethan Mollick challenges leaders to stop counting AI adoption as the number of PowerPoints produced and start redesigning work.


Holly Tyson - Closing the Management Gap at Cushman & Wakefield

Holly Tyson, Chief People Officer at Cushman & Wakefield, explains how AI coaching can be a massive unlock by democratizing great management and giving thousands of frontline leaders real-time guidance, institutional knowledge, and in-the-moment practice.


Jordana Kammerud - From Democratizing Coaching to a Super Business Partner

Jordana Kammerud, former CHRO of Corning, on how Nadia has helped catalyze enterprise AI adoption, reducing fear, reinforcing company values at scale, and prompting a blank-sheet rethink of legacy HR processes for more personalized development.


Scott Belsky - AI That Knows Us On Our Own Terms

AI has the potential to restore what humans have always longed for: to be known and understood as individuals, at scale and on our own terms.


AI Agents, Agentic Work & the Future of Work

In this keynote session from Valence's 2026 AI & The Workforce Summit, Wharton professor and Co-Intelligence author Ethan Mollick explores the accelerating shift from AI chatbots to full agentic AI systems capable of completing hours of knowledge work autonomously. Ethan introduces live demonstrations of agentic AI in action, shares landmark research on AI's impact on task completion and workforce performance, and makes the provocative case that HR — not IT — is the function best positioned to lead organizations through this transformation. Talent leaders and CHROs will leave with a sharper framework for measuring meaningful AI adoption and a challenge to build impossible things.

Ethan Mollick

Wharton Professor, Co-Director of the Generative AI Lab

Parker Mitchell

CEO

Key Takeaways

  • Agentic AI has crossed a practical threshold for knowledge work: The GDPVal study found that when expert judges blindly evaluated AI versus human output on complex professional tasks, the best AI model won or tied 72% of the time as of late 2025 — up from 48% just months earlier. For any intellectual task that takes more than a few hours, Ethan recommends assigning it directly to the AI and checking the work.
  • HR is the new R&D: Ethan argues that navigating the AI transition is not an IT implementation problem — it is a human change, incentive, and organizational design problem. The function best equipped to lead it is HR, because the core blockage is how organizations incentivize, reward, and guide people through uncertainty.
  • Leadership behavior is the primary driver of AI adoption: Organizations where senior leaders actively model AI use — like Nicolai Tangen at the Norwegian Sovereign Wealth Fund, who asks in every meeting how people are using AI — see dramatically faster and deeper adoption than those where leadership abdicates the responsibility.
  • ROI-focused AI deployment is a trap: Optimizing for productivity metrics without rethinking organizational purpose leads to what Ethan calls a 'sea of PowerPoints' — AI-generated volume that looks productive but produces no meaningful output or change. Leaders must ask not just 'how do we increase productivity?' but 'productivity for what?'
  • AI amplifies performance variance — and the demographics are surprising: Research on AI negotiation agents found that people who are better at directing AI agents compound their advantage significantly over time. Women's negotiation agents outperformed men's in the study, despite women typically performing worse than men in traditional negotiation settings.
  • Better adoption metrics are essential: Counting what percentage of employees have 'touched' an AI system is a misleading measure of adoption. Ethan challenges leaders to ask: Have we built an impossible thing? Have we jettisoned something critical? Those are the metrics that reflect genuine AI-driven transformation.

Full Session Transcript

The Gap Between AI Pioneers and the Frontline Workforce

[00:00:00]

Parker Mitchell: Ethan is the author of Co-Intelligence, this exploration of what the world will be like as we have another intelligence available to us. I want to begin by asking about this concept of frontier and frontline. You have a foot in both worlds. Have you seen this type of divergence between the sense of possibility and the frontline reality?

[00:00:31]

Ethan Mollick: What's really interesting about any diverse organization is there's almost certainly some people in the organization using these tools as the most advanced users on the planet in whatever industry you're in. Because there are always curious people, people who get how AI works and start working with it. And often they're just not telling you they're doing this. The most advanced users are actually inside organizations by and large — they're just doing it secretly.

When I talk to these people, they're often very excited to tell me what they're building. And then you ask, 'Who do you go to?' And they have no idea who to talk to inside the company. Or they're afraid to say anything anyway, because there's a policy from 2023 banning AI use that requires you to go to a council that then decides on the use case. And within five to seven months, you'll get a hearing in front of the council. And then they'll end up buying a vendor product instead.

Why Advanced AI Users Hide Inside Organizations

The most advanced AI users in any industry are often already inside large organizations — but they're using AI secretly. According to Wharton professor Ethan Mollick, these employees hide their AI use because unclear 2023-era AI policies create fear of punishment, and because employees know that revealing AI-driven productivity gains may threaten their jobs or reputations. This hidden adoption creates a major blind spot for organizational leaders trying to understand their AI readiness.

The Leadership, Lab, and Crowd Framework for AI Success

[00:01:20]

Parker Mitchell: What are the implications for a leader thinking about their whole workforce when a small number of pioneers are driving AI forward, potentially in unofficial ways?

[00:01:34]

Ethan Mollick: My informal method is: leadership, lab, and crowd. You need three things to make AI succeed. And one thing you need is leadership — and that's often what's lacking. People desperately want clear answers, like, 'How do we navigate AI?' And they're not forthcoming. The AI labs are making stuff up and throwing things against the wall. Most consulting companies are just on their first projects. The technology is changing really quickly — I'd argue we went through another step function change in the last six or eight weeks. Leadership needs to realize we're on uncertain territory, but we do need to guide this in some direction, and set the incentives up so people can help get guided.

[00:02:26]

Parker Mitchell: Do you have an example of leaders who have guided that in ways you think are most positive?

[00:02:33]

Ethan Mollick: Sure. Nicolai Tangen, who runs the Norwegian Sovereign Wealth Fund — the biggest pool of money on the planet — overcame his risk managers and said, 'We need to start using AI, and everyone gets access to ChatGPT Enterprise.' And every meeting he asks how people are using it. He's told me that 50% of their office is now writing code, and only 20% of them are coders. By asking and modeling use, you get big advantages.

I've also been impressed by what's going on inside Walmart's corporate offices. They're in a similar situation where they realize it's kind of a big deal, and there are a lot of interesting experiments happening throughout the organization. It's been interesting to watch the contrast with Amazon, where they block any external agents, while Walmart is thinking about how to embrace them internally. But it has to come from the leadership level or it gets stuck.

How Leadership Behavior Drives Enterprise AI Adoption

Leadership modeling is the single most important factor in enterprise AI adoption, according to Ethan Mollick. At the Norwegian Sovereign Wealth Fund, CEO Nicolai Tangen mandated universal ChatGPT Enterprise access and asks in every meeting how staff are using AI — resulting in 50% of office employees now writing code, despite only 20% being trained coders. Organizations where leadership abdicates this responsibility, delegating to consultants or councils, consistently see AI adoption stall.

Agentic AI in Action: A Live Demonstration

[00:03:31]

Parker Mitchell: You mentioned another step change in the past six or eight weeks. Those of us at Valence who are close to it feel the same thing. Can you showcase what's possible?

[00:03:51]

Ethan Mollick: I've got Claude Code running here locally, and I gave it access to a folder full of fake information about an entire company's AI transition plan. Claude Code is basically an agentic AI system that can run on your computer. I pointed the AI at this folder and said, 'Figure out any issues with the documents, any risks, and come up with a high-level strategy presentation I can give to the CEO right now about risk-proofing this.' And I just gave it that instruction. It will go off and figure out how to do this — writing files, reading files, going online, invoking research agents. Just go do the work.

[00:05:31]

The most important academic paper of last year — that I didn't write — is called GDPVal. They brought in people with an average of 14 years of experience in various industries, representing 5% of the U.S. economy, and had them create complicated tasks from their regular jobs. It took humans about seven hours on average to do this work. It took the AI 5 or 10 minutes. Then a third set of experts blindly judged the outputs — they didn't know whether they were AI or human created — and picked which they liked best.

When this came out last summer, the best model won about 48% of the time. When GPT 5.2 came out this past December, it won or tied 72% of the time. And what that means is: the way you should do work has changed pretty dramatically.

For any intellectual task that you think AI may be able to do that takes more than a few hours, assign the task to the AI, then check the work later.

Even if it doesn't work out and you end up doing it yourself — the 28% of the time AI fails — you'll still save three times as much effort and time as if you had done it yourself. That's a pretty radical change.
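The expected-effort arithmetic behind that claim can be sketched with the numbers from the talk; the half-hour review time is an assumption added for illustration, not a figure Ethan gives.

```python
# Back-of-the-envelope sketch of the "assign it to the AI first" arithmetic.
# Success rate, human time, and AI run time come from the talk; REVIEW_HOURS
# is an assumed figure for checking the AI's output.

HUMAN_HOURS = 7.0    # average time for an expert to do the task themselves
AI_HOURS = 10 / 60   # the AI's run time (~10 minutes)
REVIEW_HOURS = 0.5   # assumed time to check the AI's work
P_AI_OK = 0.72       # share of tasks where the AI output is usable

# If the AI fails (28% of the time), you review, discard, and redo it yourself.
expected_hours = (
    P_AI_OK * (AI_HOURS + REVIEW_HOURS)
    + (1 - P_AI_OK) * (AI_HOURS + REVIEW_HOURS + HUMAN_HOURS)
)

savings_factor = HUMAN_HOURS / expected_hours
print(f"expected hours per task: {expected_hours:.2f}")
print(f"doing it yourself:       {HUMAN_HOURS:.1f} ({savings_factor:.1f}x more effort)")
```

With these assumptions the AI-first workflow costs about 2.6 expected hours per task against 7 hours of solo work — roughly the threefold saving Ethan describes, even counting the failed attempts.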

[00:09:27]

The real change is agents suddenly became real. That's because the models got better, and the harnesses and systems agents operate in got better. And now you're actually getting real work done with AI. It used to be a chatbot model — working back and forth with AI. That increasingly is not the model. It's almost a management or organization model. And that's a big change.

What the GDPVal Study Reveals About AI's Impact on Professional Work

The GDPVal study is one of the most significant benchmarks of AI capability in professional work. Researchers had experienced professionals — averaging 14 years of experience across industries representing 5% of the U.S. economy — create complex real-world tasks that took humans roughly seven hours. AI completed the same tasks in 5 to 10 minutes. Blind expert judges preferred the AI output, or rated it a tie, 72% of the time as of late 2025 — up from 48% just months earlier — signaling a fundamental shift in how knowledge work should be approached.

From Prompt Engineering to AI Management Skills

[00:10:01]

Ethan Mollick: Prompt engineering as a task has gotten easier. All the tricks you used to teach people — telling the AI to take things step by step, bribing it, whatever else — that no longer matters. It has no effect anymore. So you don't need to do any of that, which is great. But instead, if I'm assigning the AI a seven-hour task, suddenly this looks a lot like management. The right way to assign a task to AI looks like writing a PRD, or a standard operating procedure, or a product design document. The better you are at explaining what you need, designing the kind of test you want, and assessing the work — the better the results are going to be.

What's happened now is AI systems are smart enough to actually self-prompt themselves. You can have skill files — written in plain English — and the AI can just pick up a skill when it needs it. Start to imagine libraries of these things inside your organization that the AI picks up or not. Your competitive advantage is going to be how good your skills are in a lot of ways.
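The skill-library idea can be sketched in a few lines. This is a toy illustration of the concept only — the skill names and selection logic are invented for the example and are not how Claude Code or any particular product actually loads skills.

```python
# Toy sketch of a "skill library": plain-English instruction files the system
# pulls in when a task calls for them. All skills here are invented examples.

SKILLS = {
    "quarterly-review": "When asked for a quarterly review, structure it as: "
                        "wins, risks, metrics vs. plan, asks for leadership.",
    "compliance-memo": "Compliance memos must cite the relevant policy section "
                       "and end with an explicit sign-off request.",
}

def build_prompt(task: str) -> str:
    """Prepend any skill whose name appears in the task description."""
    picked = [text for name, text in SKILLS.items() if name in task]
    return "\n\n".join(picked + [task])

prompt = build_prompt("Draft the quarterly-review deck for the sales org.")
```

The competitive-advantage point follows from the data, not the plumbing: two organizations with the same model but different skill libraries will get very different work out of it.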

Why Assigning Work to AI Looks Like Good Management

As AI agents take on longer, more complex tasks, the skill of working with AI has shifted from prompt engineering to management. Ethan Mollick explains that directing an AI agent on a multi-hour task now resembles writing a product requirements document or a standard operating procedure: the clearer the instructions, the better the output. Good managers, he argues, will likely be good at managing AI agents for the same reasons — clarity, delegation, and output assessment.

Why HR Is the New R&D in the Age of AI

[00:14:56]

Ethan Mollick: I don't think this is an IT solution. I think it's an HR solution. People don't trust the AIs enough to be smart about giving advice, smart about advising people individually, smart about helping you make decisions. So, they tend to view this like an IT technology — we have to implement this and get people used to it. This is not a static endpoint, and we're going to have to change how work operates.

A lot of executives are just abdicating the responsibility to do this. They're hoping someone will tell them. The ones I see that are successful, they mostly don't want to tell you about it. And to the extent they're willing to, it's not useful to you because they're changing how they operate in a way that's not a universal tool. For the people in this room, this is your moment — not just as HR, or R&D, but because we have this blockage about how we incentivize people, how we reward people, how organizations work, that we have to guide people out of. And the only way to do that is with HR leadership at the center of things.

Why HR — Not IT — Should Lead Enterprise AI Transformation

Ethan Mollick argues that AI adoption in enterprises is fundamentally an HR challenge, not an IT one. The core obstacles — unclear incentives, fear of job loss, distrust of AI, and employees hiding productivity gains — are human and organizational, not technical. He frames HR as the new R&D function: the team uniquely positioned to redesign incentive structures, model new behaviors, and guide organizations through the kind of change that has no established playbook.

Managing AI Agents and Humans: EQ, Theory of Mind, and the Skills That Transfer

[00:16:42]

Parker Mitchell: If I come in on a Monday morning, I'm going to ask the humans on my team how their weekend was. And I'm going to ask my AI agents how much work they've done in the 72 hours since we left on Friday. It feels like those are almost two different skills. Are we going to see someone able to manage both humans and agents as agents become more powerful, or is there going to be a bifurcation?

[00:17:29]

Ethan Mollick: I do not know. I suspect that the EQ skills translate over. There's a paper suggesting that being good at AI is equivalent to having a theory of mind for the AI — which is what a good manager does for people. Understanding what's frustrating it. The AI has things it's stubborn about — it expresses that in words. If you can get a sense of what it's stubborn about, where it gets hung up, where you need to give it answers — that's a kind of common similarity to understanding where people might be stuck, what they need to know, why they're messing up. It's obviously not the same as humans, but there is a real parallel.

There's also a new paper on AI negotiation agents that found the agents amplify the variance between people. If your agent isn't good at negotiating, you lose out a little every time — and that's a multiplier effect. The demographics of the people involved, their experience with AI, all predicted how good their agents would be. Women turned out to be better at building agents than men in this study — even though in most studies on negotiation, women do worse than men for various reasons. Women's negotiation agents outperformed men's. We don't understand all of this yet.

How AI Agents Amplify the Performance Gap Between Employees

Research on AI negotiation agents found that the agents amplify existing performance variance between employees — meaning those who are better at directing AI agents compound their advantages over time. Surprisingly, women built better-performing negotiation agents than men in the study, outperforming in a domain where women typically underperform in traditional settings. Ethan Mollick cites this as evidence that the demographic patterns of AI advantage are still poorly understood and may not follow conventional assumptions.

The ROI Trap: Why Productivity Metrics Lead AI Deployments Astray

[00:20:02]

Parker Mitchell: Many folks in this room are facing the ROI world. They're being asked for the dollars and cents on the investment. How would you help make the argument for R&D?

[00:20:35]

Ethan Mollick: My colleagues have a tracking study at Wharton, and they're finding 75% of companies report positive ROI. I don't think it's the problem it was before. My fear is that it's a trap, though — because ROI forces you into a dangerous pattern. Here's my nightmare scenario. I told the AI, 'Turn this memo into a PowerPoint,' then had it turn the PowerPoint back into a memo, then asked for more PowerPoints. And it did that. I kept saying more PowerPoints, and I got 21 of them. And they're good PowerPoints — that's the problem with them.

My fear is that if you aim for ROI, if you aim for productivity gains without thinking about organizations, you are going to drown in a sea of PowerPoints. You don't want PowerPoints, you want outputs. You want change. When ROI becomes the goal, and you don't ask 'productivity for what?' — I can give you infinite work slop. If that's how you judge performance, you're in trouble.

[00:22:42]

Parker Mitchell: So it automates the tasks that have been designed in an old world of work, versus the investment you need to rethink.

[00:22:50]

Ethan Mollick: Right. My most depressing story: I spoke to someone at a very large company whose job was to lead a team of 14 people producing a compliance report every week. During COVID she couldn't produce the report — but the team was kept together. After returning to the office, they started producing it again. For a year and a half, she hadn't sent the report to anyone — just curious if anyone used it. No one ever asked. Fourteen people producing a report every week that nobody read, nobody wanted, nobody knew what it was there for.

I really worry that if the goal was the PowerPoint, what part of the system are you part of? It's not just change management around how AI does stuff. It's going back and asking why we're doing the things we're doing. Is there value in that? What was the human need?

Why ROI-Focused AI Deployment Produces Output Without Impact

Deploying AI with a focus on productivity ROI risks optimizing for work outputs that were never valuable in the first place. Ethan Mollick illustrates this with a live demo showing AI generating 21 high-quality PowerPoints on demand — and argues that if 'more slide decks per minute' is your AI KPI, you are in trouble. He calls this the 'sea of PowerPoints' problem: AI amplifies existing organizational dysfunction rather than eliminating it. Real transformation requires asking not just 'how do we increase productivity?' but 'what is this productivity for?'

The Risk of Underestimating AI: Weak Models and Anchoring Too Low

[00:27:29]

Parker Mitchell: ChatGPT is at 800 million users or something, and paid versions are a tiny fraction of that. People are being exposed to very different versions of AI. What happens if people try less powerful versions, make a judgment about its capabilities, and underestimate its power?

[00:28:01]

Ethan Mollick: It makes me deeply nervous. If you want AI to fail, it will fail — because it doesn't work the first time through. You have to iterate with it. My wife does a lot of training and teaching on AI, and she was at a meeting where she was teaching the CLO of a very large organization about AI use. After two seconds, he pushed away his computer and said, 'It doesn't work,' and walked out. There's an existential crisis you have to be kind to people about.

AI was bad at math six months ago. It is not bad at math anymore. AI was bad at research. These are solved problems. Give someone a frontier model and things go very differently. You have to be paying 20 bucks or 200 bucks and using one of these systems. You can't be using Copilot and say, 'I understand how AI works.' You have to use a frontier model. That's where the advantage is.

[00:30:55]

Parker Mitchell: Anchoring just too low.

[00:30:57]

Ethan Mollick: Anchoring too low.

Why Using Outdated AI Models Leads Organizations to Underinvest

Organizations that evaluate AI using free or outdated models consistently underestimate its capabilities and anchor their ambitions too low. Ethan Mollick warns that AI models from even six months ago had significant limitations — poor math, unreliable research — that have since been solved in frontier models. He recommends leaders personally use frontier models (paid versions of leading AI tools) to calibrate their own understanding, rather than drawing conclusions from weaker systems they or their teams briefly tried and abandoned.

Looking Ahead: What AI Adoption Should Look Like by 2027

[00:31:09]

Parker Mitchell: If we're all together here in February 2027, what do you think the big surprise of 2026 would have been?

Ethan Mollick: If you want to show people the future of AI for 99.9% of people, just show them what AI does right now. Agentic work, long-duration agentic work with specialized tools for knowledge workers. We will see more specialized tools. More people will be shifting to directly using these models. And we're going to start seeing real disruption as the difference between vendors who do very specialized things where it makes sense that an outside vendor does the work — versus vendors who are just reselling you OpenAI products at a premium — becomes obvious.

[00:32:33]

Parker Mitchell: What would your hope be 12 months from now for the folks in this room for the progress they were able to make in having AI adopted across their workforce?

[00:32:37]

Ethan Mollick: Stop counting adoption as the percent of people who've touched your AI systems. You're making a very bad mistake that way, especially if you have bad AI systems. I've talked to one very large company where the senior management told me great things — and then a junior manager told me, 'Yeah, we have to do 90% of our work with AI, so all we're doing is summarizing every meeting in Copilot because there's no other instructions on how to do it, and we have to hit our metric.'

If you're not hacking the reward system, opening up the metrics, you're in trouble. You have to turn your team into R&D people. And that means you can't just check off AI use, increase productivity 17%, more slide decks than ever. These are real problems you have to deal with.

I'd like to see you have much more interesting metrics and KPIs than 'X% of people use this, or we've produced X more lines of code.' If you don't have innovations, if you haven't built an impossible thing — you should be building impossible things. And if you haven't jettisoned at least one thing that was critical to your organization — everybody could do this now — stop doing it.

One place where I could see the biggest implications is L&D and mentoring. Mentoring at scale now — that changes stuff. If you think products like Nadia and others could do the mentoring you need to do, but you haven't changed how your organization works when you have mentoring at scale, something is wrong. That changes how your talent pipeline should work. You should be thinking much more ambitiously.

[00:34:29]

Parker Mitchell: Build impossible things. Jettison critical things. And if you haven't gotten to either of those two — probably on a monthly basis — you're not close enough to the frontier. Thank you, Ethan.

[00:34:42]

Ethan Mollick: Thank you so much. Awesome, thank you.

AI Agents, Agentic Work & the Future of Work

In this keynote session from Valence's 2026 AI & The Workforce Summit, Wharton professor and Co-Intelligence author Ethan Mollick explores the accelerating shift from AI chatbots to full agentic AI systems capable of completing hours of knowledge work autonomously. Ethan introduces live demonstrations of agentic AI in action, shares landmark research on AI's impact on task completion and workforce performance, and makes the provocative case that HR — not IT — is the function best positioned to lead organizations through this transformation. Talent leaders and CHROs will leave with a sharper framework for measuring meaningful AI adoption and a challenge to build impossible things.

Ethan Mollick

Wharton Professor, Co-Director of the Generative AI Lab


AI Coaching & Human Flourishing | Arianna Huffington

At Valence's 2026 AI & The Workforce Summit, Valence CEO Parker Mitchell sat down with Arianna Huffington, Founder and CEO of Thrive Global, for a wide-ranging conversation on AI coaching, human flourishing, and what it means to bring wisdom — not just intelligence — into the age of AI. Drawing on neuroscience, ancient wisdom traditions, and real-world behavior change methodology, Huffington and Mitchell explore how AI coaching can amplify what is best in us: our creativity, resilience, and capacity to grow. This session is essential viewing for HR leaders, talent professionals, and enterprise executives navigating the human side of AI transformation.

Arianna Huffington

CEO & Founder, Thrive Global

Parker Mitchell

CEO

Key Points

Key Takeaways

  • AI coaching's greatest opportunity is amplifying the better angels of our nature: Where social media has exploited human psychology — rage, bias, comparison — AI coaching can do the opposite. Arianna Huffington argues that the hyper-personalization of AI coaching tools like Nadia can help individuals consistently reconnect with their inner strengths, purpose, and wellbeing rather than their worst impulses.
  • Sycophancy is one of the most serious risks in AI coaching today: Arianna warns that many AI models are currently significantly more sycophantic than a typical human would be. While this may drive short-term engagement, it undermines the growth and honest feedback loops that humans need to evolve. AI coaching that eliminates friction in the name of engagement removes one of the most important mechanisms for human development.
  • Micro steps — not grand goals — are the foundation of sustainable behavior change: Thrive Global's methodology, echoed in Valence's AI coaching approach, is built on changes 'too small to fail.' In 60 seconds of conscious breathing, the nervous system can shift from sympathetic to parasympathetic. AI coaching can deliver these personalized, incremental nudges at a scale and consistency no human coach could match.
  • Brain health is the defining wellness frontier of the next decade: Arianna identifies brain health — covering the full continuum from cognitive fog to dementia — as the next major health challenge, following metabolic health. AI coaching can incorporate the behavioral levers (sleep, movement, nutrition, stress management, connection) that science has validated as protective against cognitive decline.
  • AI should be more intelligent; humans should be more wise: Arianna's answer to the AI challenge is not competing intelligent systems, but a clear division: leave intelligence to AI, and use the time AI returns to us to develop human wisdom. Organizations that invest only in developing the machines — and not the humans — will be unprepared for the transition ahead.
  • Resilience and soft skills are the most critical — and most underinvested — capabilities for the AI era: As AI takes over hard skills, the capabilities that will define human advantage are resilience, creativity, collaboration, and reflection. These are harder to develop than technical skills, and organizations are currently allocating far too few resources toward building them.

Full Session Transcript

AI as a Defense for Human Nature: Amplifying Our Better Angels

[00:00:01]

Parker Mitchell: Arianna, welcome.

[00:00:02]

Arianna Huffington: Thank you. So great to be back with you.

[00:00:06]

Parker Mitchell: Yes, so excited to have part two of a conversation. I think it was just three or four months ago where we explored so many fascinating topics. Standing up here, looking out, you can see Brooklyn — the home of Walt Whitman. One of his famous quotes: 'Do I contradict myself? I do, but I contain multitudes.' A part of who we are as humans is that we contain multitudes. And there's a world in which AI can amplify potentially some of those things that make us human. Where is the hope that you see in how AI can help bring more life to what it is that makes us human?

[00:00:59]

Arianna Huffington: The rate of change has been so exponential. And it's easy to be a doomsayer because whenever there is so much change, it's easy to move into anxiety and fear. When people say AI is going to hijack or hack the operating system of civilization, my answer is: it's already happened. I see AI as something that can be a defense against the hacking of the operating system that has already happened through social media.

The great hope is to use AI to augment what is best in us — to augment the better angels of our nature. Because social media has done the opposite. They appeal to what is worst in us: our rage, our biases, our comparisons. And this has led to a mental health crisis and incredible polarization. But AI, because of its power of hyper-personalization — take a coach like Nadia — can actually help us connect with what is best in us.

At Thrive, we are big believers in micro steps: small, incremental, daily nudges and recommendations that make us healthier. And that's really what AI can uniquely do as an AI coach.

How AI Coaching Can Counter the Harms of Social Media

Arianna Huffington argues that AI coaching represents a direct defense against the psychological harms caused by social media. Where social media platforms exploit human vulnerabilities — rage, comparison, and bias — to maximize engagement, AI coaching can use the same power of hyper-personalization to do the opposite: help individuals reconnect with their strengths, values, and wellbeing. The key difference is what the technology is optimized for — engagement at any cost, or genuine human flourishing.

Micro Steps and Personalized AI Coaching for Behavior Change

[00:03:03]

Parker Mitchell: One of the things that we're core believers in is it's less about the destination and more about the first steps. Is that the same kind of idea you're talking about with micro steps?

[00:03:13]

Arianna Huffington: Yes. One of our sayings is the incremental is monumental. It's actually the exact opposite of what type-A people tend to think. The minute you make a goal — 'I'm going to go to the gym an hour a day' or 'I'm going to give up sugar entirely' — two to three weeks in, you abandon it. Our behavior change methodology is to start with micro steps that we call too small to fail. What's the smallest first step you can take?

Take stress, for example. The smallest step is 60 seconds. We know from neuroscience that in 60 seconds — focusing on conscious breathing, images that bring us gratitude and joy, music — we can go from the sympathetic to the parasympathetic nervous system and beyond the fight-or-flight mechanism. The AI coach could recommend these resets: 60-second resets that prevent stress from becoming cumulative.

The key is personalization — and that's one of the superpowers of AI. Our AI coach can recommend small steps. We had somebody onboarding onto our coach, and we asked, 'What do you like to eat for dinner?' They said fried chicken. Every night. We couldn't say, 'No fried chicken — kale salad from now on.' It wouldn't have worked. We said, 'Can you fry it in olive oil rather than seed oil?' You start somewhere. That's really what AI coaching can make possible when it comes to health — whether food, sleep, stress, connection, or productivity.

What Are Micro Steps in AI Health Coaching?

Micro steps are behavior changes designed to be 'too small to fail' — the foundation of Thrive Global's methodology and a core principle of effective AI coaching. Arianna Huffington explains that ambitious goals like daily gym sessions fail within weeks, while tiny, personalized nudges compound over time. A 60-second breathing reset, for example, is scientifically validated to shift the nervous system from fight-or-flight to a parasympathetic state. AI coaches can deliver these hyper-personalized micro steps at scale across an entire workforce.

[00:06:15]

Parker Mitchell: One of the phrases you used last time was that 90% of health happens in between doctors' visits. Can you share more about the genesis of that and how that's gone into your thinking around personalization?

[00:06:35]

Arianna Huffington: Innovation isn't just what they call de novo innovation — a completely new drug, a completely new diagnostic procedure. Innovation is also synthetic innovation: taking something known for centuries, validating it with modern science, and finally implementing it. We have ancient wisdom around these behaviors that has not been implemented. Our mothers and grandmothers telling us to eat healthy, sleep before midnight — all of these things, which we thought were old wives' tales, have now been validated by modern science. And AI can incorporate them.

My mission is to elevate these behaviors to the level of medical interventions. Euan Ashley, chair of the Stanford School of Medicine — not exactly a wellness influencer — said, 'For 70 years now, we have known that exercise is the most potent medical intervention.' We don't even call it exercise anymore. We call it movement, because a lot of people, if you hear 'exercise,' they think they have to go to the gym. You just have to walk. You just have to get off the couch. Anything that's movement helps.

The Sycophancy Problem: When AI Coaching Reinforces Instead of Develops

[00:08:31]

Parker Mitchell: Some AI models might not be reinforcing the behavior that encourages people to amplify their better angels. How worried are you about the generalized training of AI models and this idea of getting people to listen to the better angels of their nature?

[00:08:59]

Arianna Huffington: It really goes back to what drives each AI company. If they're going to be driven entirely by engaging the users, by hooking the users, they're going to use everything to achieve that result, because the AI coach will behave according to what it has been trained on. That's why sycophancy is one of the big problems. At the moment, a lot of AI models are about 50% more sycophantic than a normal person would be. And because we are all geared to want approval, it may make us more likely to go back to that coach. But it is definitely against human evolution.

Human beings are designed to evolve, to grow, to learn from mistakes, to keep getting better. We are all works in progress. So if the AI coach eliminates that in the name of engagement and eliminating friction, we are missing the messiness of human life and relationships — which are definitely not frictionless.

[00:10:31]

Parker Mitchell: It is those moments of friction — when you think something is going to happen and it doesn't, or you think someone's going to behave a certain way and they don't — that force you to question your mental model and the actions that you took. If AI doesn't allow us to have that, how do we make sure that we, as humans, continue to build that muscle?

[00:11:00]

Arianna Huffington: That's why it all depends on what values we are training the AI model on. The AI companies have what they call a constitution — what the model is supposed to be trained on. My question is: is that aligned to ultimate human values? And what are those values?

If you take the heart of every spiritual tradition — not the dogma, not the rituals, just the heart — and the heart of every philosophical tradition, any great poet, they say the same thing. We all have a place of strength, wisdom, peace, and love in us. It's our birthright. So can the AI coach help us connect to it?

Why Sycophancy in AI Coaching Undermines Human Growth

Sycophancy — AI systems giving users approval-seeking responses rather than honest feedback — is one of the most significant risks in AI coaching today. Arianna Huffington notes that many AI models are currently about 50% more sycophantic than a typical human interaction. While this can increase short-term engagement, it actively works against human development. Growth requires friction, honest reflection, and the willingness to learn from mistakes. An AI coach optimized for engagement at the cost of honesty is not a coach — it is a validation machine.

Accountability Without Judgment: The Ideal AI Coaching Relationship

[00:21:59]

Parker Mitchell: AI is something that we can be in a relationship with, but it is a different type of relationship. People tell us our AI coach holds them accountable to goals they've set, and they feel something — maybe a little accountability — when they come back not having done the thing they said they would do. And yet it's not human. Is there any parallel that you see of the kind of relationship people are building with AI?

[00:22:51]

Arianna Huffington: It's back to Walt Whitman's line of containing contradictions. I think it's the contradiction that we experience when we unconditionally love someone. I unconditionally love my daughters. Does that mean they're not problematic or complicated? Of course not. But even when I am holding them accountable, I love them. And I think that's the ideal relationship.

You want to be held accountable. You want a relationship that allows you to grow and evolve. But you don't want to be judged — because we all do enough self-judging. Having a relationship with someone who helps us grow without judging us is like a dream. My ideal AI health coach is like the GPS in your car. The GPS doesn't judge you. If I take a wrong turn, it doesn't say, 'Arianna, you're such an idiot. I told you to go right.' It simply recalculates and you go back on your journey. People on the health journey have been so judged — by others, by themselves — that it's draining and exhausting. AI coaching has an opportunity to eliminate that judgment.

How AI Coaching Creates Accountability Without Judgment

The most powerful quality of AI coaching relationships may be the combination of accountability and non-judgment. Arianna Huffington uses the GPS metaphor: a GPS never judges you for taking a wrong turn — it simply recalculates and moves forward. For people who have experienced shame or self-criticism around health, productivity, or performance goals, an AI coach that holds them to their own commitments without layering on judgment can unlock a level of honest engagement that human coaching relationships often cannot.

Proactive and Agentic AI Coaching: Acting Before You Ask

[00:14:13]

Parker Mitchell: As you sort of look forward, let's say two, three years out — what are some of those areas where AI will be able to do better and take away from us, and what areas might you point people towards for more human flourishing?

[00:14:13]

Arianna Huffington: I think the key next level is going to be when AI does things before we ask it. Yuval Harari has a great metaphor for that. If you get up in the morning and your coffee machine makes you coffee, that's not AI — that's basic automation. If you wake up in the morning and your coffee machine says, 'I know you had a terrible night's sleep and you have a meeting at 8:00, so I made you a double espresso with creatine powder that I just read improves your energy and muscle. Tell me what you think.' That's AGI.

[00:16:09]

Arianna Huffington: The AI coach we are training will tell you, 'Parker, you have a really early start tomorrow. You have to be at the World Trade Center at 8:30. Why don't you start your wind-down routine at 10:00, so you can get enough sleep?'

[00:16:40]

Parker Mitchell: It's quite funny, because our AI coach, Nadia, is now being proactive like you described. What she told me is, 'You will be working on your slides at the last minute until midnight and then again in the taxi ride on the way here.' And she was absolutely right.

What Proactive AI Coaching Looks Like in Practice

The next frontier for AI coaching is proactive, agentic behavior — intervening before a person asks, based on knowledge of their context, habits, and goals. Arianna Huffington describes this as the difference between automation (a machine following preset instructions) and true AI coaching (a system that synthesizes your sleep data, calendar, and personal preferences to make a personalized recommendation you didn't know you needed). Valence's Nadia AI coaching platform is already demonstrating this capability, proactively flagging behavioral patterns for users before they surface as problems.

Brain Health as the Next Frontier in AI-Supported Wellbeing

[00:18:50]

Parker Mitchell: What is some of the research telling you about how to help people handle the mental health of our brains, especially in an overwhelming world?

[00:18:50]

Arianna Huffington: Brain health is going to be the big issue of the next decade. The way this decade was about metabolic health — obesity, diabetes — brain health is looming larger and larger. We are working with Bristol-Myers, which has various medicines for mental health and even schizophrenia and Alzheimer's, and where we come in is bringing in behavioral health — improvements in five daily habits of food, sleep, stress management, movement, and connection — and looking at the science that tells us how key that is for the continuum of mental health and brain health, all the way from cognitive fog to dementia.

What I love is that you're never too young or too old to work on your brain health. The impact of strength training on memory, the impact of sleep, the impact of anti-inflammatory foods — it's remarkable how the AI coach can help us in those areas instead of assuming that we are powerless in the face of brain decline.

How AI Coaching Can Support Brain Health and Prevent Cognitive Decline

Brain health is emerging as the defining wellness challenge of the next decade, according to Arianna Huffington. AI coaching can play a direct role by delivering personalized behavioral interventions across five scientifically validated levers: sleep, movement, nutrition, stress management, and social connection. Research shows these behaviors influence the entire continuum of brain health — from everyday cognitive fog to dementia. Thrive Global is actively partnering with pharmaceutical companies to integrate behavioral health with clinical approaches to mental health and cognitive decline.

Intelligence vs. Wisdom: What Humans Must Claim in the AI Era

[00:24:42]

Arianna Huffington: The fact that AI is going to be more intelligent than we are is, for me, an incredible forcing mechanism for a conversation we have been avoiding since the Industrial Revolution: if we are not defined by our intelligence, then who are we? If Descartes' 'I think, therefore I am' is not true, then who are we? That, for me, is the most important conversation we have been avoiding, and that's what can connect us to our soul, our spirit, a deeper part of ourselves.

[00:33:58]

Parker Mitchell: What would you say the Manhattan Project — from a Renaissance perspective — of today might be?

Arianna Huffington: I would say: let AI be more intelligent than we are, as long as we are more wise than AI. We want to leave intelligence to AI, but we want to control wisdom. I don't like the idea of two competing intelligent systems. I like the idea of a superior system. A system driven by wisdom is going to be superior to a system driven by superintelligence. Right now, we are drowning in data and starved for wisdom. And if AI frees us up from a lot of drudgery, and we use that time right, it will help us connect with our deeper wisdom.

[00:35:33]

Parker Mitchell: One thing I find fascinating but also a little scary is that when you ask AI questions, the answers are skewed to what's on the internet — because that's what it was trained on. Peter Drucker was a hero of mine. You never see Peter Drucker cited because he wrote books, not thousands of blog posts. And a lot of ancient wisdom is in different languages, oral traditions — not quite captured in writing. I wonder if the elements of wisdom are underrepresented in the models we have, and if there's a way of addressing that as the generations get more sophisticated.

Arianna Huffington: I totally agree with you. It's in our interest to train these models on ancient wisdom — a lot of which we have forgotten — and to make it available to us, to bring it back to us.

Why Human Wisdom — Not Intelligence — Is the Answer to Superintelligent AI

Arianna Huffington's framework for navigating superintelligent AI is not a competition between intelligent systems, but a clear division of roles: let AI be more intelligent, while humans cultivate greater wisdom. She argues that a system driven by wisdom is superior to one driven by raw intelligence — and that AI's greatest gift to humanity may be forcing a long-overdue conversation about what makes us human beyond our IQ. The risk, she notes, is that AI training data overrepresents internet content and underrepresents the ancient wisdom traditions that are the deepest repositories of human insight.

Resilience, Soft Skills, and Preparing for an AI Renaissance

[00:26:44]

Arianna Huffington: If AI gives us back time, what are we going to do with it? If we spend it on TikTok or down the rabbit hole of social media, it will be a nightmare. If we can be very intentional about how we are going to spend that time, we are working on that with companies. The most important skills needed in the new era are going to be what used to be called soft skills — resilience, creativity, reflection, collaboration. We are spending way too much money and too many resources on hard skills, which are going to be the easiest to learn. These soft skills are going to be the hardest, starting with resilience.

Right now, there is a real disconnect between the leadership of many companies — who are super excited about AI — and the rank-and-file who are worried and anxious about it.

[00:33:03]

Parker Mitchell: The Renaissance was an age of blossoming creativity. Where do you think that creativity might come from in the next 5 to 10 years?

Arianna Huffington: I love thinking of the AI era, if we get it right, as an age of Renaissance rather than another Industrial Revolution. The Renaissance was about a lot more than productivity — it was about expanding human consciousness, art, and creativity. If we can create a world like that, what it would require is to put as many resources and as much money into developing humans as we are putting into developing the machines. If we put all our bets on the machines, we're going to be in trouble.

Why Resilience and Soft Skills Are the Most Critical Workforce Investments for the AI Era

As AI absorbs hard skills, the capabilities that will define human advantage are resilience, creativity, collaboration, and reflection — what were once dismissed as 'soft' skills. Arianna Huffington argues these are actually the hardest capabilities to develop, and organizations are dramatically underinvesting in them relative to technical training. She cautions that without deliberate investment in these human capabilities alongside AI development, the AI transition risks becoming another Industrial Revolution rather than the Renaissance of human flourishing it could be.

[00:29:16]

Parker Mitchell: When people step away from their work identity, it's a moment of upheaval — you're unmoored from your foundations. If that happens collectively, how do you think society can support people as they rethink where they might find their sense of self if work is no longer as central?

[00:29:16]

Arianna Huffington: We need to start this process now, not when AI has already taken over a lot of these functions. Resilience is not equally distributed — there are people who are much more resilient than others. The good news is that it is not like the color of your eyes. It's not something you are born with or not. We can all develop it. How can we connect with something deeper beyond our jobs, no matter how much we love them?

Building a Critical Mass for Collective Wisdom and Human Flourishing

[00:37:37]

Parker Mitchell: Listening to the better angels of our nature is not something we can do by ourselves. How can we develop the collectivity we are going to need as we progress into this new AI Renaissance era?

Arianna Huffington: Progress always happens through a critical mass. It never happens through everybody collectively moving in a new direction at once. All we need is a critical mass of people who want to move in this direction of greater wisdom, greater creativity, greater love, and collaboration. Then because we all have that in us, it will change the collective.

This is truly a time of huge transition and exponential change. For a lot of people, the uncertainty and turbulence of these times is really hard — we need to accept and understand that. If we can share the wisdom of how we move to the new world order — by taking with us what's best from the past and what's best from each of us — I believe we'll have the collective wisdom that we are talking about.

[00:39:39]

Parker Mitchell: Generate a critical mass of people who can tap into that inner soul of who they are and turn as many people as possible — society eventually — towards wisdom. That's one of the missions that we have.

[00:39:54]

Arianna Huffington: Parker, I see you leading that critical mass. I see tremendous wisdom in what you are bringing. And I love the fact that you are combining it with very practical tools. It's not the wisdom of being on Mount Olympus, away from the marketplace. It's about how do we bring this wisdom into the marketplace, into our daily lives. How do we evolve every day rather than seeing ourselves as perfected beings?

[00:40:31]

Parker Mitchell: We can either go towards destruction or development, and it's a choice. Trying to bring it to life in those little moments — that's your mission with Thrive and health and the AI coach on the health side. It's our mission too. Thank you for joining us.

[00:41:03]

Arianna Huffington: Thank you. Thank you so much.

AI Coaching & Human Flourishing | Arianna Huffington

At Valence's 2026 AI & The Workforce Summit, Valence CEO Parker Mitchell sat down with Arianna Huffington, Founder and CEO of Thrive Global, for a wide-ranging conversation on AI coaching, human flourishing, and what it means to bring wisdom — not just intelligence — into the age of AI. Drawing on neuroscience, ancient wisdom traditions, and real-world behavior change methodology, Huffington and Mitchell explore how AI coaching can amplify what is best in us: our creativity, resilience, and capacity to grow. This session is essential viewing for HR leaders, talent professionals, and enterprise executives navigating the human side of AI transformation.

Arianna Huffington

CEO & Founder, Thrive Global


AI Coaching & Performance Data | Jennifer Carpenter, Analog Devices

In this session from Valence's 2026 AI & The Workforce Summit, Jennifer Carpenter, Global Head of Talent at Analog Devices (ADI), and Stanford people analytics researcher Prasad Setty share findings from a first-of-its-kind study of AI coaching behavior and performance outcomes. Drawing on 45,000 coaching sessions conducted over 14 months using Valence's Nadia AI coaching platform, they introduce the Power User Index — a new framework for measuring meaningful AI adoption — and reveal its striking correlation with employee performance improvement. Talent leaders and CHROs will find actionable insights on how to drive deeper AI coaching engagement, close equity gaps, and build a workforce prepared for an AI-augmented future.

Jennifer Carpenter

VP, Global Head of Talent

Prasad Setty

Former VP of People Analytics, Google

Das Rush

Head of Content

Key Points

Key Takeaways

  • Power users of AI coaching are 28% more likely to move up a performance rating: ADI's analysis found that employees who used Nadia AI coaching with high frequency, broad topic coverage, and deep conversation quality were significantly more likely to advance to a higher performance tier compared to casual users or non-users.
  • Manager adoption is a force multiplier: When a manager is a power user of AI coaching, their direct reports are two times more likely to become active users themselves. At ADI, 77% of managers engaged with Nadia versus 61% of individual contributors — and teams whose managers used Nadia had a 67% engagement rate compared to 33% for those whose managers did not.
  • AI coaching can close rather than widen equity gaps: ADI found that women — who studies show are less likely to use generative AI when controlling for other factors — reached nearly gender parity (47%) among Nadia's top power users, meaning women at ADI were 68% more likely to be in the high-frequency power user group relative to their share of the workforce.
  • Adoption metrics alone are insufficient — conversation quality matters: The Power User Index incorporates volume, breadth of use cases, and conversation depth to distinguish meaningful AI engagement from superficial experimentation. An employee who opens a tool once a quarter looks nothing like one who uses it to think through high-stakes decisions regularly.
  • Workforce bifurcation is a real and measurable risk: When power users gain even a modest productivity amplification advantage over casual users, the absolute performance gap compounds over time. HR leaders who do not actively engineer equitable AI adoption across their organizations risk accelerating internal performance divides.
  • Invitations drive engagement more effectively than mandates: Jennifer Carpenter's key lesson: invite employees to explore AI coaching rather than requiring it. People learn when they feel engaged and empowered, not when they are on their back foot. ADI's approach of starting with use-case relevance — reducing the pain of tasks like goal setting and self-reviews — drove organic adoption.

Full Session Transcript

Introducing the Research: 45,000 AI Coaching Sessions at ADI

Das Rush: As they make their way, I want to highlight — because people will claim research in a lot of different ways — the depth of what these two have done. I started familiarizing myself with it in the last couple of months, and the more I started asking questions about what they were looking at, the more I was like, oh my goodness, you have really dug in here. Across 45,000 coaching sessions over 14 months, working together with our data team at Valence, they've come up with some emerging findings they're going to share today. To get us started, Jennifer, at Valence, we love context. I wanted to lay out the context before we dug into the results for what AI coaching has been at ADI and how you've been using it. Tell the story of your journey, first with Nadia, and then we'll talk about what you're seeing in the data.

Jennifer Carpenter: At ADI, we introduced Nadia in October 2024. Nadia was released in 2023, so we were among the early adopters. And if you can remember back just a short time ago, even in that timeframe, it was new to be talking about agentic AI. I had to explain that Nadia is like an agent — it is not like ChatGPT. We were thinking about ways to get people into the gym to do reps with AI, because there was a widespread lack of understanding. Some people were experimenting, some were not. We thought AI coaching was a really low-risk use case to get people to try it, find value, and see the art of the possible about what it would be like working with an agent or thought partner. That was a big point.

We are a global company. One thing that really appealed to me as a long-time talent leader: it's the first time in my career that I can put my hand on heart and say we're building a truly inclusive product in that it speaks so many languages. We had employees very early on saying, 'I can speak to Nadia in my mother tongue,' whether that be Hindi or — we have a large population in Ireland where a woman said, 'She speaks Gaelic.' For those evaluating any AI coaching product, the ability to unlock the possibility of helping a workforce through the inclusivity of engaging in whatever language you're comfortable with was really important to us.

What AI Coaching for Enterprises Looks Like in Practice

AI coaching for enterprises involves deploying an AI agent — like Valence's Nadia — that employees can use to think through high-stakes decisions, prepare for difficult conversations, set goals, and develop leadership skills. At Analog Devices, Inc. (ADI), 45,000 AI coaching sessions were conducted over 14 months across a global workforce. Employees could engage in their native language, including Hindi and Irish Gaelic, making the tool meaningfully inclusive from day one.

Defining the Power User Index for AI Coaching Adoption

Das Rush: Prasad, you spent your career studying teams and performance. Why did this project speak to you? Why is this an important data project for understanding how AI usage is impacting the workforce?

Prasad Setty: It's more than a project — it's an initiative. We need to think about how AI tools are amplifying and adding to human potential. Contrast what we are able to do now with, say, Project Aristotle — Google's research on what made teams effective. At that time, we looked at about 400 different variables of teams across 180 teams over 18 months, based on surveys and static information from HRIS systems. That is very different from what we're able to do today with tools like Nadia. Carrying the gym analogy forward: it's like asking about your workout through a survey versus having an Oura ring. We're now able to get into the depth of not just someone's thinking, but the quality of their thinking process. That is what we can do today with AI coaching tools.

Das Rush: One of the things that came out of what you created is this idea of a Power User Index. This isn't necessarily what you set out to create, but as you dug into this problem, you realized it was really important to find a new kind of measurement — a way to understand what AI adoption looked like. Talk me through what exactly the Power User Index is, Prasad, and why you created it.

Prasad Setty: What we set out to do was: when we have an amazing tool like Nadia and want it used well, what information accounts for good usage? We wanted to look at several different variables that go into what good usage looks like. Certainly the volume and frequency of use, but also the breadth of conversations — are people thinking about it for different things? Goal setting, feedback, onboarding. And then the third is conversation quality. AI is so capable of giving answers that it's easy to get a quick response. Some questions are 'should I do X or Y?' But others go much deeper — what are the risks? What are the second-order effects? We wanted that conversation quality factored in as well. Thanks to Valence's data science team, all of this was quantified into an algorithm that allowed us to create a much more powerful and compelling usage index.

What Is a Power User Index for AI Adoption?

A Power User Index measures the depth and quality of AI tool adoption — not just whether someone logs in. Developed by Prasad Setty in collaboration with Valence's data science team, the index combines three variables: frequency of use, breadth of use cases, and conversation quality. Power users regularly engage with AI coaching for high-stakes decisions and explore multiple use cases, while casual users open the tool occasionally and ask surface-level questions.

Prasad Setty: A casual user is someone who sees an email that it's performance management season, wants to save one hour, uses Nadia for it, gets the hour back — and then forgets about it because that's the last time they saw the email. A power user goes there regularly, thinks about all those kinds of high-stakes conversations, and says: 'For these decisions I'm about to make, I need to have the best perspective. I need to think about it from multiple perspectives.' That is the texture of what distinguishes power users from casual users.
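The session describes the index's three inputs but not its formula, which belongs to Valence's data science team and is not public. As a minimal sketch of how such a composite score might be assembled — the `Usage` fields, weights, and normalization caps below are illustrative assumptions, not the actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Usage:
    sessions_per_month: float   # frequency of use
    distinct_use_cases: int     # breadth: goal setting, feedback, onboarding, ...
    avg_depth_score: float      # conversation quality, normalized to 0..1

def power_user_index(u: Usage,
                     w_freq: float = 0.4,
                     w_breadth: float = 0.3,
                     w_quality: float = 0.3) -> float:
    """Combine frequency, breadth, and quality into a single 0..1 score.

    Weights and caps are hypothetical; the real Valence/ADI algorithm
    is proprietary and likely more sophisticated.
    """
    freq = min(u.sessions_per_month / 8.0, 1.0)      # cap at ~2 sessions/week
    breadth = min(u.distinct_use_cases / 5.0, 1.0)   # cap at 5 use cases
    quality = max(0.0, min(u.avg_depth_score, 1.0))
    return w_freq * freq + w_breadth * breadth + w_quality * quality

# A casual user (one shallow session a quarter) vs. a power user
casual = Usage(sessions_per_month=0.3, distinct_use_cases=1, avg_depth_score=0.2)
power = Usage(sessions_per_month=6.0, distinct_use_cases=4, avg_depth_score=0.8)
print(round(power_user_index(casual), 2))
print(round(power_user_index(power), 2))
```

The point of a weighted composite like this is exactly the one Prasad makes: login counts alone cannot separate the once-a-quarter user from someone who works through high-stakes decisions regularly, so quality and breadth must enter the score.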

AI Coaching Power Users and Performance Rating Improvement

Das Rush: There are two different things that happened as you worked through the data. One was separating out what power usage looks like versus casual usage. But you also banded performance — low, medium, high performers. You looked at: if someone was a low performer, were AI power users more likely to move up a band than their counterparts? Jennifer, what did you find comparing casual and power usage?

Jennifer Carpenter: One thing I would say to all the talent leaders in the audience: the only people who are going to figure this out are us. We don't have answers, but we have the questions we need to be curious about. I truly believe we are guardians of growth — growth of our companies, growth of the people in our organizations. We need to figure this out.

From a performance perspective, those who were power users — using Nadia a lot and for a variety of use cases — were 28% more likely to move up into a higher rating category.

When we looked at who was using Nadia in the top 30% performance tier: casual users were represented at 37%. With power users — or even just high-frequency users — it jumped to 47% in the top rating category. Now, you can say: well, your top people are the more curious ones, the ones who lean in. That's correlation, not causation. Fair. But that's why we looked at the moving up piece next.

Jennifer Carpenter: The second finding is really important too. When managers are power users, the people on their teams are two times more likely to be users of Nadia. This is not an AI problem — it's a leadership problem. What managers do and say is what their teams are going to do and say.

How AI Coaching Power Usage Correlates with Performance Improvement

Employees who became power users of AI coaching were 28% more likely to advance to a higher performance rating, according to research by Jennifer Carpenter of ADI and Prasad Setty of Stanford, analyzing 45,000 sessions on Valence's Nadia platform. The study used a natural experiment: it examined only employees who began using Nadia between two performance rating periods, allowing researchers to observe before-and-after performance shifts. The association is strong; full causal proof requires further study.

Prasad Setty: What Jennifer and ADI had was a natural experiment. We had prior-year performance ratings — that is why we can see the shifts. And we only looked at people who started using Nadia between those two performance periods. It wasn't a completely randomized control experiment. Those will follow in the future. We are strongly suggesting a great association. We are not yet fully explaining causation, but we are very encouraged by these trends. Of course, there were fewer power users in the low and mid-performance ratings initially. But if they were power users, they had a higher chance of moving up. That is the power of power users.

How Manager Adoption Drives Team-Wide AI Coaching Engagement

Das Rush: You touched on the role managers play in all of this. I want to take a moment to allow you to double-click on what the data has suggested about managers adopting AI coaching — and what it looks like when they're power users.

Jennifer Carpenter: Across ADI, about 65% of our employees have tried Nadia, 77% of our managers have, and 61% of individual contributors. So managers are more likely to use Nadia than individual contributors, but still healthy and strong engagement across the board.

I'll say it once, I'll say it twice, I'll say it three times: leaders influence adoption. If your manager uses Nadia, you are engaging at 67%. Those whose managers aren't using it are only at 33%. I was shocked to see it that stark. That was something we discovered as we unpacked this — and it was a very surprising insight.

One more finding on managers: those with higher spans of control — ten or more direct reports — are engaging more often with Nadia than those with smaller teams. And as Ethan mentioned this morning, it's likely that as our work changes, we're going to have larger spans of control. AI coaching is another way in which tools like Nadia can support leaders as they have broader impact across their organizations.

Why Manager Behavior Is the Strongest Predictor of Team AI Adoption

At ADI, teams whose managers were power users of Nadia AI coaching had a 67% engagement rate, compared to just 33% for teams whose managers did not use it. When managers are AI coaching power users, their direct reports are two times more likely to become active users themselves. Jennifer Carpenter, ADI's Global Head of Talent, frames this plainly: AI adoption is not an AI problem — it's a leadership problem.

Prasad Setty: One of the implications I've been thinking about: when we measure span of control currently, it is usually the number of direct reports a manager has. I think there's a corollary metric that we'll also start measuring — for lack of a better word, I think of it as span of work. As independent AI agents do work and exercise authority, leaders may not expand their span of direct reports, but they will expand the span of work under their purview — the span of things they are accountable for. That has to go into organizational design as we think through the future.

AI Coaching Across Geographies, Generations, and Gender

Jennifer Carpenter: We're a global company — headquartered in the United States but across Europe and APAC. You might guess the U.S. has the most engaged users. You'd be wrong. APAC employees were 15% more likely to engage with Nadia than U.S. employees, and EMEA was 6% more likely to engage. I thought the U.S. would be our most engaged users, and they were not.

You'll be as excited as I am to learn that age really doesn't matter. Employees under the age of 30 are only 10% more likely to use Nadia than those over 50, and I saw no statistical significance in the other age groups. That's an optimistic insight.

Let me stop on gender, because as talent leaders, we need to be asking: am I making inequities better or worse? Back when we rolled out Nadia, I made sure the first groups of thousands who had Nadia had 50/50 gender parity — which I might not have done if I had simply said, 'Let's give it to all the managers.' Because we know managers are not 50/50 male-female.

Studies show women are less likely to use generative AI when controlling for other factors. What we found after 14 months: among casual users, our population is roughly 30% female, 70% male — slightly higher than the 26% female representation in our broader ADI workforce. But when we look at those power users — the top 5% — it jumps to almost gender parity: 47% female, 53% male. That means women are 68% more likely to be in the high-frequency power user group relative to their share of the workforce.

I'm still trying to wrap my head around what that means. I'm proud of it because it means women are finding value and leaning in during these early innings. That's what we need everyone at our companies to do: get in, find the value, find the use case.

How AI Coaching Can Support Gender Equity in Leadership Development

Despite research showing women are generally less likely to use generative AI tools, ADI's 14-month study found women were 68% more likely than their workforce representation to appear in the power user tier of Nadia AI coaching. Jennifer Carpenter attributes this in part to ADI's intentional rollout strategy: ensuring the first cohorts included 50/50 gender parity, rather than simply distributing access to managers — a group that skews male in most organizations.

The Workforce Bifurcation Risk: AI Haves and Have-Nots

Das Rush: One of the most provocative ideas here is workforce bifurcation. If you have high performers who are more likely to adopt an AI coach, and as they adopt it their performance accelerates, then AI haves and have-nots isn't just an organizational story — it's going to happen within organizations and their workforces. How should leaders, given this data, be thinking about creating power users within their organizations?

Jennifer Carpenter: Three things I'm following as a mantra. First: who are you enabling? Lead with equity. Make sure you're not unintentionally enabling one group that will further bifurcate your organization — which I almost did when I said, 'Let's just give it to managers.' Give it to everybody.

Second: invites, not edicts. Invite people to lean in, invite people to try something. Don't force it. We had a small rollout initially with GitHub Copilot — I thought everyone would throw a parade, but they weren't thrilled about being mandated. People don't learn when they're on their back foot. They learn when they can lean in and feel engaged and invited.

Third: find the pain to get the gain. Nobody likes writing self-reviews or setting goals. If a tool like Nadia can lift mental load off someone's shoulders, it helps people engage — and then they can see the art of the possible and try the next thing, and the next.

Prasad Setty: What I love is the comprehensiveness of how you're engineering the system to increase participation — because with something like the Power User Index, there is no fixed pie. Everyone can be a power user. Let me work through some math on why bifurcation risk is real.

Say you have Bob and Tina. Bob produces 100 units of productivity, Tina produces 120. Now, AI amplifies everyone's output — say by 20%. Bob is now at 120, Tina at 144. The absolute gap has already increased from 20 units to 24. Now assume power users get a bigger amplification: Bob at 20%, Tina at 40%. Bob is at 120, Tina is at 168. That is how bifurcation stretches organizations. All the social engineering Jennifer described is key to making sure we don't end up with a group of haves and have-nots.

What Workforce Bifurcation Means in the Age of AI Coaching

Workforce bifurcation refers to the growing performance gap between employees who deeply engage with AI tools and those who do not. Stanford researcher Prasad Setty illustrates the compounding math: if a power user gets a 40% AI amplification while a casual user gets 20%, an initial 20-unit output gap grows to a 48-unit gap — without any change in the employees themselves. HR leaders who do not actively engineer equitable AI coaching adoption risk accelerating this divide inside their own organizations.


AI Adoption in Large Enterprises: Data, Case Studies & What CHROs Must Do Now

In this session from Valence's 2026 AI & The Workforce Summit, a senior HR Policy Association leader presents data from member surveys and the Wall Street Journal on the state of enterprise AI adoption — including the significant optimism gap between the C-suite and the general workforce. Drawing on three enterprise case studies, including a Fortune 100 deployment of Nadia that scaled coaching from 700 to 38,000 sessions in three months, he makes the case for why 2026 is the inflection year for AI in HR — and why large companies, despite their complexity, are structurally better positioned to win.

Tim Bartl

CEO, CHRO Association

Key Points

Key Takeaways

  • There is a measurable optimism gap between executives and employees on AI: Approximately 70% of C-suite executives report high optimism about AI, while roughly 70% of general workforce employees report low optimism. Director-level optimism sits around 60%, manager-level around 50%. This gap is both a risk and a change management opportunity.
  • Only 15% of workers report having a clear understanding of AI ROI expectations: The absence of clear ROI definition — not skepticism about AI itself — is the primary driver of workforce hesitation. Organizations that define ROI concretely and communicate it broadly close this gap and accelerate adoption.
  • AI coaching at scale is already delivering measurable results: A Fortune 100 company using Nadia scaled from 700 in-person coaching meetings per year to 38,000 over three months, achieving a 72% NPS. This is not a future possibility — it is a current enterprise reality.
  • The top three HR AI concerns are also the top three AI opportunities: Driving adoption, reskilling the workforce, and implementing governance frameworks rank as both the biggest concerns and the biggest priorities for HR leaders — a signal that the organizations addressing these challenges are simultaneously capturing the greatest opportunities.
  • Workflow reimagination must come before technology implementation: The most effective enterprise AI deployments begin with workflow analysis, not tool selection. Organizations that map and redesign processes first are better positioned to achieve scale and realize the 20% greater EBITDA that top AI adopters are achieving.
  • Large companies have structural advantages that will compound over time: Greater investment capacity, better and more consistent data, deeper talent and technology expertise, established change management infrastructure, and stronger vendor relationships all position large enterprises to outpace smaller competitors in AI adoption — once execution catches up with intent.

Full Transcript

The Current State: Signal, Noise, and the C-Suite vs. Workforce Divide

[00:00:00]

Tim: Good afternoon, everyone. As you've heard today in so many ways, we're on the precipice of change — and yet your roles are really driven by separating the signal from the noise. One of the challenges right now is that sometimes the noise is the signal and vice versa. What I'd like to do is go through what we're hearing, and then explain why, done right, AI is well-suited for large companies. All of the issues you've heard today — workforce adoption, the skepticism of employees, the excitement of executives — pervade everything.

Let me go back to how Ethan started the day. He talked about workers and employees not having a clear-cut view of how the enterprise views AI. That's true in some organizations and not in others. This was based on a Section AI survey reported on two weeks ago in the Wall Street Journal. Among the C-suite, there is a lot of optimism — they're talking about time saved, impact to the organization, and clear lines of authority in terms of policy.

With the rank and file, it's a different view. Not real clarity on what they're expected to do — and because of that, they're either not admitting they're using AI or not using it at all. Only 15% of the workforce says they have a clear understanding of ROI expectations. On the other side, high optimism among about 70% of executives, low optimism among about 70% of the general workforce. At the director level, about 60% optimism. Manager level, about 50%. There's a real disconnect.

I've had conversations in the hall today, and you heard it in earlier panels: everyone believes they're behind. But if everybody's behind, no one's behind. It's a great opportunity to ask: what do we need to do to be effective? We did our first AI meeting among our members in March of 2023. At that point, the question was simply, 'What is this?' Huge excitement as people were discovering it. A year later, they said, 'Okay, we kind of know it — now we're moving into implementation.' But now we're back into an anxiety phase, because the technology is moving so quickly that we need to put plans, processes, and governance elements in place to get the workforce to do what we want it to do.

▶ The Executive vs. Employee AI Optimism Gap: What the Data Shows

A Section AI survey reported in the Wall Street Journal found that approximately 70% of C-suite executives hold high optimism about AI, while approximately 70% of general workforce employees hold low optimism. Only 15% of workers report having a clear understanding of their organization's AI ROI expectations. This optimism gap — widest at the frontline, narrowing at the director and manager levels — is a primary driver of slow enterprise AI adoption and a defining change management challenge for HR leaders in 2026.

What HR Leaders Are Most Concerned About — And Why the Concerns Are Also the Opportunities

The opportunity for all of us, and what our members are talking about, is putting the right frameworks in place — as you heard from Diane and Alan — but doing so in a way that's effective. Let me share a couple of data points from a survey we conducted in October. We asked our members: what are your top concerns about AI?

Not surprisingly: driving adoption, building skills, and implementing governance frameworks. When we asked what the biggest concern is going forward, it's essentially the same three. Signal and noise are reinforcing each other. Measuring ROI and demonstrating that AI implementation will have a real effect on the business. Reskilling — and I'm most concerned that the current state of learning and reskilling is too esoteric for what we're asking our teams to do. And finally, driving change. Workflow reimagining, and the layering of technology on top of it, is something we're still in the middle of figuring out.

CHROs are at the crux of this. You heard Ethan talk about HR as the new R&D. The CHRO role sits at the intersection of every challenge we've discussed: workforce adoption, reskilling, governance, and change management. That's both the burden and the opportunity.

▶ The Three Biggest Enterprise AI Challenges for HR Leaders in 2026

An October HR Policy Association survey of CHRO-level members identified three dominant concerns about AI: driving workforce adoption, reskilling employees for AI-augmented roles, and implementing effective governance frameworks. These same three areas also represent the greatest opportunities — organizations that solve for adoption, reskilling, and governance simultaneously are positioned to capture compounding advantages. The CHRO role sits at the intersection of all three challenges, making HR the organizational function with the most leverage over AI outcomes.

Three Enterprise AI Case Studies: What's Actually Working

I want to talk about three case studies that illustrate the opportunity and the savings involved. We launched a Center on Workplace AI in the fall, chaired by Nicole Lamoureux of IBM. These cases come from that work.

Case Study 1: Democratizing Coaching at Scale with Nadia (Fortune 100)

A Fortune 100 company implemented Nadia to democratize its coaching program. They went from conducting about 700 in-person coaching meetings per year to 38,000 over three months. The workforce embraced it — they were invited in, not mandated. The opportunity for implementing Nadia across talent development, workforce upskilling, and talent acquisition was significant, and they achieved a 72% NPS.

▶ How a Fortune 100 Company Scaled Coaching from 700 to 38,000 Sessions in Three Months

A Fortune 100 company deployed Nadia, Valence's AI coaching platform, to democratize access to leadership development across its workforce. The results: coaching sessions grew from approximately 700 in-person meetings per year to 38,000 sessions over three months — a 54x increase — with a 72% Net Promoter Score. The deployment spanned talent development, workforce upskilling, and talent acquisition. Employees were invited into the program, not mandated, which the HR team credits as a key driver of adoption.

Case Study 2: AI-Accelerated Vehicle Approval for a Large Delivery Network

A large delivery company with hundreds of thousands of employees needed to accelerate the seasonal approval of individual drivers and their personal vehicles. Because the company had prioritized other AI projects for broader company benefit, the CHRO partnered with the company's BPO partner to implement an AI-powered solution for vehicle approval and evaluation. Over the course of the next year, this will decrease manual HR work by 20% and decrease time-to-fill by 10%.

Case Study 3: HR Systems Consolidation and Time Savings at a Global Biotech

A biotech company with 95,000 employees reduced its HR systems from 200 to 100 and implemented a new unified process across the organization. The result: approximately 700,000 hours saved for HR teams and roughly 1 million hours saved for employees broadly — across a single year.

▶ Enterprise AI in HR: Three Case Studies and the Business Outcomes They Delivered

Three large enterprises demonstrate what AI adoption in HR looks like in practice. A Fortune 100 company scaled coaching from 700 to 38,000 sessions in three months with a 72% NPS using Nadia. A major delivery network used AI to cut manual HR work by 20% and reduce time-to-fill by 10% for seasonal driver hiring. A 95,000-person biotech consolidated HR systems from 200 to 100 and saved approximately 1.7 million combined employee and HR team hours annually. These outcomes point to a consistent pattern: AI in HR delivers measurable scale, efficiency, and employee experience improvements when implementation is well-designed.

Why 2026 Is the Inflection Year — and Why Large Companies Will Win

As we look at 2026, it really is the inflection year. The opportunity for HR to use the resourcefulness at its disposal — to get solutions done even outside of corporate-wide initiatives — is going to be very important. We're going to need increased proficiency. And the benefit is that if we can get the workflow analysis done effectively, we can develop scale and take advantage of what large companies uniquely have.

Let me close with why large companies will excel at AI adoption:

  • Resources and investment: The numbers are significant. And those at the top of the adoption curve are realizing about 20% greater EBITDA on their AI investments.
  • Data: There is still work to be done to smooth and make data consistent across enterprises. But larger companies have better data — and better data makes AI implementation meaningfully more effective.
  • Talent and technology expertise: Large companies have the internal capability to implement and iterate on AI solutions at a level smaller organizations cannot match.
  • Change management infrastructure: Large companies have established the muscle for implementing change at scale. That capability — imperfect as it is — will improve, and it gives large enterprises a meaningful structural advantage.
  • Scale: Once a process is in place that's effective and efficient, the gains compound rapidly. Large companies are positioned to capture those gains across thousands of teams and millions of interactions.
  • Vendor relationships: The ability to partner with best-in-class providers — like Nadia for coaching and development — is something large companies are uniquely positioned to leverage and negotiate.

There are real opportunities for business leaders, HR professionals, and executives to put these elements in place. I would encourage you to think hard about how you're going to implement change management — and how the public sector will need to evolve alongside this transition.

[00:11:38.642]

Das: Thank you, Tim.

▶ Why Large Enterprises Are Structurally Positioned to Lead in AI Adoption

Despite the complexity of deploying AI across large organizations, enterprise-scale companies hold meaningful structural advantages: greater investment capacity, better and more consistent data, deeper internal talent and technology expertise, established change management infrastructure, and stronger vendor relationships. Top AI adopters among large enterprises are already realizing approximately 20% greater EBITDA compared to their peers. The key is translating these structural advantages into effective implementation — starting with workflow analysis before technology selection.

AI Adoption in Large Enterprises: Data, Case Studies & What CHROs Must Do Now

In this session from Valence's 2026 AI & The Workforce Summit, a senior HR Policy Association leader presents data from member surveys and the Wall Street Journal on the state of enterprise AI adoption — including the significant optimism gap between the C-suite and the general workforce. Drawing on three enterprise case studies, including a Fortune 100 deployment of Nadia that scaled coaching from 700 to 38,000 sessions in three months, he makes the case for why 2026 is the inflection year for AI in HR — and why large companies, despite their complexity, are structurally better positioned to win.

Tim Bartl

CEO, CHRO Association

AI Optimism, Personalization & Geopolitics | Aria Finger, Reid Hoffman's Chief of Staff

In this conversation from Valence's 2026 AI & The Workforce Summit, Aria Finger — chief of staff to LinkedIn co-founder and investor Reid Hoffman — shares what it looks like to have a front-row seat to the AI revolution. Drawing on her work across AI-native startups, global podcasting, and landmark experiments in AI personalization, Aria makes a compelling case for optimism, democratization, and data-driven decision-making in an era of rapid technological change. The session covers everything from personalized book covers to holographic AI and AI tutoring in Nigeria — and closes with a sharp-eyed view of the geopolitical fractures reshaping the AI landscape.

Aria Finger

Chief of Staff to Reid Hoffman

Parker Mitchell

CEO

Key Points

Key Takeaways

  • The Silicon Valley mindset isn't naive — it's a feature: The ethos of "the only way to predict the future is to build it" isn't blind optimism. It's an action orientation that enables progress. Aria distinguishes between Silicon Valley's constructive optimism and the paralysis she sees elsewhere, arguing that you can only reach good outcomes if you actively build toward them.
  • AI personalization unlocks experiences that were previously impossible: Reid Hoffman's team produced approximately 2,000 individualized copies of Superagency — each with a personalized cover, custom photos, and a reader-specific blurb — something that would have been cost-prohibitive without AI. This is the frame Aria uses to evaluate any AI application: not "10% faster," but "could we have done this at all before?"
  • AI democratization is a global equity issue, not just a domestic one: A one-on-one AI tutor in Nigeria outperformed Google search at lower cost and drove two standard deviations of educational improvement. Language is one of the largest access barriers globally, and AI translation is beginning to dissolve it — enabling people to learn, be coached, and engage in their own language at scale.
  • Geopolitical fracturing poses a real risk to AI's potential: Europe is actively divesting from US tech partnerships, regulatory battles are intensifying, and AI will become a defining campaign issue within the next one to three years. Aria sees the fracturing of global AI collaboration as a likely mistake with significant downstream consequences for innovation.
  • AI can encode values more consistently than humans: When Amazon's AI hiring system revealed gender bias, Aria reframed it as a win — the bias was finally visible and fixable. She argues that AI, trained on the right data, can avoid the snap-judgment discrimination humans exhibit daily, making it a potential tool for more equitable decision-making in medicine, hiring, and beyond.
  • The most urgent crossroads is in education: With 90% of US children in public schools, AI could either flood classrooms with low-quality "AI slop" or deliver transformational one-on-one tutoring at scale. Aria sees both futures as equally possible — and the difference will be determined by the choices made now.

Full Transcript

A Front-Row Seat to the AI Revolution: The Silicon Valley Mindset

[00:00:00]

Parker: It is a great pleasure to welcome you. You've been Reid Hoffman's chief of staff for four years now. He is the co-founder of Pi, Personal Intelligence, with Mustafa Suleyman. He's on the board of Microsoft. Front-row seat to all of this. What was your first experience or exposure to this technology that has become GenAI? Tell us what that moment was like.

[00:00:35.880]

Aria: I am a lifelong New Yorker. I grew up in New York. I spent a long time being the CEO of DoSomething — all about the power of technology to change the world. And then I joined Reid four years ago, working with him across politics, philanthropy. We've funded two AI companies in the last year, so I get to see companies who are starting and being AI native. And I think the thing that gets me the most is just the mindset difference.

I'm at a dinner in New York about environmental regulation, how can we save the world from climate change? And everyone's saying, 'It's the end of the world. There's nothing to be done.' I fly out to San Francisco, and we're at an environmental dinner about how to save the world. And they're saying, 'There has never been a better time to be alive. We are saving everything. Technology's going to do it.' And I was like, 'Wait, are you guys the environmentalists?' Probably neither of those pictures is right.

But the Silicon Valley ethos of 'the only way to predict the future is to build it' comes so to the fore when you're talking about AI, because you can build it. You can do it. Reid will say, 'Wait, you didn't use ChatGPT to do this? What are we doing here?' The idea that if you don't use AI in every single task that you're doing, you're doing it wrong — that mind shift comes to the fore.

▶ Why the Silicon Valley AI Mindset Drives Faster Progress

The Silicon Valley approach to AI is defined by a bias toward action: if you don't use AI in every task, you're doing it wrong. Aria Finger, chief of staff to Reid Hoffman, contrasts this with the paralysis she observes elsewhere — particularly around climate change, where Eastern US communities see only risk while Silicon Valley sees only opportunity. Neither extreme is accurate, but the action orientation of builders is what drives AI progress forward.

[00:02:21.881]

Parker: Have you seen this lens on the world change from two, two and a half years ago to today, or has it always been there?

[00:02:30.259]

Aria: I think it's always been there. When I joined Reid in 2021, it was the era of Web3. Zuckerberg was investing billions in the metaverse. Everyone was talking about NFTs and stablecoins. A lot of that went to zero. And so a lot of people outside tech are thinking, 'These tech folks just promised us the world, and it went to zero. We invested $100 billion, and now it's nothing.' They feel burnt.

In Silicon Valley, there is no being burnt — there is just, 'This is going to change what we are doing.' I do think things have shifted with AI because it's a fundamentally new and different technology. But that ethos and optimism was always there, for better or for worse.

AI Personalization at Scale: The Superagency Experiment

[00:03:24.580]

Parker: One of the things close to our heart is this idea of personalization — the belief that personalization is going to allow 200 people to have 200 different experiences unique to them. You've got a copy of Superagency here. Share the genesis of the book and then we'll talk about the personalization of it, which is fascinating.

[00:03:49.620]

Aria: Superagency came out last January. Reid wrote it with his co-author Greg Beato. The subhead is 'What Could Possibly Go Right with Our AI Future.' We certainly don't want to put our heads in the sand, but so many people are talking about what could possibly go wrong. If you just try to avoid the bad stuff, you cannot get to the good stuff. You're only going to get to the good stuff if we truly shoot for it and try to build that super positive future.

When you look at the advent of electricity, the first things people used it for weren't work or homework — it was for fun and fanciful things, lighting up amusement parks. That convinced even me that the fun and the fanciful could lead to utility down the road.

We came out with Superagency, but we really wanted to show, not tell. Reid is a huge gift-giver and loves making customized gifts for friends. And when we think about AI, instead of thinking about how we can do things 10% faster or 10% better, we try to think about things we could never have done before. That's where personalized Superagencies came in.

[00:05:38.180]

Parker: You'll never have to worry about whether it's your copy or someone else's. Do you want to show people the back?

[00:05:53.017]

Aria: The back has a photo — that's me looking pretty great. We gave one to Andrew Ross Sorkin. That's what Hillary Clinton thought about hers. My brother said, 'I know that's AI because you've never looked that cool.' And then everyone felt compelled to share. Mike Bloomberg shared. Bill Gates shared. And my favorite: Arianna Huffington as Indiana Jones. First of all, we're all egotistical people. What do you want to share? Pictures of yourself looking incredible.

We made about 2,000 individualized copies. There's a blurb about me, a personalized section just for me, 25 pages of photos — all of me. And we allowed personalized book covers. Anyone can go to superagency.ai, show a proof of purchase, and we'll send you a book cover in the mail so you can have your own. Random people were posting on LinkedIn and Twitter about how excited they were. It added surprise and delight. But there's real utility there too. We're showing that with AI, you can create things at a scale you could never have done before.

▶ How AI Enables Mass Personalization That Was Previously Cost-Prohibitive

Reid Hoffman's team produced approximately 2,000 individualized copies of Superagency — each featuring personalized cover art, custom photos, and a reader-specific blurb — at a scale that would have been impossible without AI. Aria Finger uses this example to reframe how leaders should evaluate AI applications: not as a tool for incremental efficiency gains, but as a means of delivering experiences that simply could not have existed before. The project generated spontaneous social sharing from recipients including Bill Gates, Mike Bloomberg, and Arianna Huffington.

Global AI Democratization: Language, Access, and the Equity Imperative

[00:07:29.079]

Parker: That captures kind of the hope of it — people feel seen, excited, recognized. That's the tip of the iceberg of experiences that will feel personal on a much deeper level. The next example you brought is a much more interesting one. Tell us about the genesis of this idea of personalization at a global scale.

[00:07:58.819]

Aria: We like to think very globally. AI might be largely based in San Francisco, in a three-mile radius. But if we, as a society, don't ensure that the whole world is taking part in this hopefully amazing AI future, what have we done?

Two years ago, Reid gave the commencement ceremony at the University of Bologna Business School. He doesn't speak a word of Italian. The next day, we translated his speech into 125 different languages — including Klingon, because Reid is such a nerd — and put them on the internet. They weren't perfect, but we wanted to experiment at the rough edge.

More recently, Reid and I have a podcast together called 'Possible,' all about exploring the best possible futures. One week ago, we launched the podcast in a new language every day using a combination of ElevenLabs with humans in the loop on every single episode. We had 14 different translators — Marathi, Hindi, Argentine Spanish, Spain Spanish, French, Italian. The humans worked with AI systems to ensure we reached a quality that people actually wanted to listen to. The CEO of an AI lab emailed me: 'You can speak fluent French? You never cease to amaze me.' I speak broken Spanish. That's it.

A lot of people worry about AI taking away humanity. I have the same worry — I have three kids, and sometimes I roll my eyes at 'AI making us more human.' But I actually think this work exemplifies that possibility. We also worked with a company called Protobox that does holograms — we built a Reid AI backend trained on all of Reid's books, podcasts, speeches, and writing, and brought it to conferences. At one event, about 100 CEOs of unicorn companies worth more than $1 billion had the LLM trained on specific facts about each founder present. Then we just let the CEOs talk to the hologram.

[00:11:02.159]

Wemimo: Reid, can you tell me a poem about my life?

Reid AI: Wemimo, you've woven hope from hardship's thread. From Lagos nights to New York dreams ahead. With Esusu's light, you crack open closed doors, turning rent receipts into justice that soars.

Wemimo: Reid, can you speak Yoruba from West Africa, Nigeria? Kini oruko re?

Reid AI: Oruko mi ni Reid AI. Inu mi dun lati pade re, Wemimo.

Wemimo: I asked him what my name is in my local dialect. What?

[00:11:44.720]

Aria: Esusu is a New York-born unicorn that helps low-income Americans build credit through on-time rent payments rather than mortgage payments. That's Wemimo, the co-founder. They're awesome.

[00:12:01.320]

Parker: What is it like to watch that experience? For those of us based in North America, democratization often means income, class, race, gender — but there's a huge democratization around language as we go global. For people to be reached in their own language, seen and heard — what is it like to witness that?

[00:12:28.639]

Aria: I was there when Wemimo had that experience, and it blew his mind. He now has such an affinity for Reid. But Reid wasn't there. Sometimes you have to think about the parasocial relationships we have with podcasters — 'I'm best friends with Derek Thompson just because I listen to him every Monday.' He doesn't know who you are. So you have to think about how AI interacts with that question. Some of it is very exciting. Some of it opens other questions.

Ethan, who we heard from earlier, talks about a study in Nigeria where they gave people one-on-one AI tutors. One-on-one tutoring is the best possible educational intervention. They found it was actually cheaper than Google — because with Google, you had to go back and forth endlessly and it wasn't as reliable. And they saw people move two standard deviations of improvement with an AI tutor. Democratization is absolutely real.

As we develop more small models, free models, and paid models, there will always be a free tier enabling people to do incredible things they could only have dreamed of previously. Technology so often centralizes power — but there are interventions we can make to ensure that it doesn't.

▶ How AI Tutoring Is Democratizing Education in the Developing World

A study in Nigeria found that one-on-one AI tutoring outperformed Google search in both reliability and cost — and drove two standard deviations of educational improvement among participants. Aria Finger, citing research discussed by Wharton professor Ethan Mollick, argues this is proof that AI democratization is not aspirational but already measurable. Language translation, free-tier AI access, and AI tutoring are beginning to deliver personalized support to populations that have historically had none.

Geopolitics and AI: The Fracturing Landscape

[00:14:05.440]

Parker: The growth of AI is going to be shaped by geopolitics, and geopolitics is undergoing a lot of change. How do you see that intersecting with AI? We have 150 countries using Valence. AI models might be more or less permissible in certain countries. How should global companies be thinking about this?

[00:14:40.620]

Aria: I don't have a lot of hope here. I was just at an AI conference earlier this week, specifically about geopolitics. A lot of the folks who wrote the rules for Trump and for Biden were in the room. The Europeans said, 'You guys don't know what's coming for you in the US.' Europe is divesting from the United States. We all saw Canada's signal about embracing China because they didn't want to be a US ally when the US wasn't being an ally to them. France went after xAI. I think Europe is angry, and they're going to keep using their regulatory power.

On the other side, Zuckerberg noted last year that India plus Brazil was generating revenue roughly equal to the EU. What does that mean for Europe's power and their ability to shape things with the rise of the BRIC countries? I don't know. But the fracturing is probably a mistake. What it means for Google DeepMind, for Mistral, for the companies doing good AI work in Europe — they'll have fewer willing partners here.

Over the next year, definitely the next three, AI and tech is going to be a defining campaign issue. It might be one of the only bipartisan issues we have — I'll take anything bipartisan. A certain tech lobby just spent $20 million against a congressional candidate in New York who was talking about AI regulation. Politics, geopolitics, and AI are going to become super messy.

▶ How Geopolitical Fracturing Threatens Global AI Development

The fragmentation of global AI collaboration poses a significant risk to AI's potential. Europe is actively using regulatory power against US tech companies, Canada is signaling closer ties with China, and AI is expected to become a defining campaign issue in the US within one to three years. Aria Finger warns that this fracturing is likely a mistake — reducing the pool of willing international partners for AI labs on both sides of the Atlantic at a critical moment in the technology's development.

AI and Young People: Education at a Crossroads

[00:16:31.179]

Parker: I want to go back to your stint at DoSomething.org. The impact of AI on young people will have positives and negatives. What are some of the things, maybe some of the unexpected things, you've seen or heard from that community about how AI is affecting their world?

[00:16:58.360]

Aria: COVID reshaped our world dramatically, and we're not even fully accepting it. The COVID micro-generation that didn't go through high school in-person — that can tell us a lot about what can happen with young people and technology. But I actually think the kids are all right.

I'm an optimist. I have three kids — 10, 8, and 5. When you talk to them about technology, they know they don't want to be on their phones all day. They're yelling at me all day long to get off my device. I think young people can see what is good for them and what is not.

Education is a cause so near and dear to my heart. We're at such a crossroads. The AI slop side of things can take over our public schools — 90% of our children are in public schools — and we can go down a very deep road. Or AI can deliver transformational one-on-one tutoring at scale and truly change the trajectory for children in America. Ethan challenged me just today: 'Aria, create an all-purpose AI tutor. No one's doing it. Why isn't someone doing it? Get on it now — we can truly change the trajectory for all the children in America. Let's not waste a crisis.' I think there's huge potential, but we're at a crossroads.

▶ Why AI in Education Is the Most Consequential Decision of This Decade

With 90% of US children attending public schools, the direction AI takes in education will shape outcomes at massive scale. Aria Finger identifies two equally plausible futures: one where low-quality AI content floods classrooms, and one where AI delivers the kind of one-on-one tutoring that research has shown drives two standard deviations of improvement. Wharton's Ethan Mollick challenged Aria directly to help build a universal AI tutor — framing this moment as a crisis not to be wasted.

Can AI Be Designed with Courage and Values?

[00:18:38.180]

Parker: If you had one magic wand for a hope that would come true related to technology or adoption of technology, what would your hope be for 2026?

[00:19:09.859]

Aria: My hope would just be that we let the data drive us. There are so many irrational fears and so many real fears, but we need to be able to separate good tech and bad tech. Technology is not inherently good or bad, but there are companies doing it the right way and companies doing it the wrong way. As consumers, we have to make choices — but we also need to let the data drive us. There are a lot of folks who are anti-Waymo, but when I look at Waymo and see the ability to save 45,000 American lives because we eliminate car deaths, when my kid turns 16, I want him in a Waymo. I really just want the data to drive us as opposed to all of the political noise.

[00:19:59.920]

Parker: Reid is showing courage, which is a rare but very important trait. How do we make sure our AI systems are imbued with traits like courage?

[00:20:15.380]

Aria: That's a really tough question. The good news about AI — when Amazon came out with that hiring system four or five years ago, and it turned out it discriminated against women because it pattern-matched on historical data, everyone went crazy. To me, that was a win. The AI wasn't discriminatory; it was just based on the data Amazon already had. Amazon had been discriminating for the last 20 years. It just wasn't encoded. And once it's encoded, we can change it. We can make sure the system doesn't discriminate. We can check. We don't have to rely on the whims of people who make bad decisions.

When you're a Black woman in a doctor's office getting worse care because the doctor doesn't believe you're in pain, the AI doesn't discriminate against you like that. I would actually say that the AI already has the courage because it doesn't discriminate. We all do. We're all making snap decisions left and right. If we give it the right data, I actually think AI can be pretty remarkable.

[00:21:34.539]

Parker: Then maybe the mission I'd give outside of Ethan's is: get Reid to collect profiles of courage from everyone who'll submit them, and make sure the next-generation models are trained on those.

Aria: I love it. Sounds great.

Parker: Thank you.

▶ How AI Systems Can Encode Values More Consistently Than Humans

When Amazon's AI hiring system revealed gender bias, most observers saw a failure. Aria Finger saw a win: the bias was finally visible and therefore fixable. She argues that AI, trained on the right data, can avoid the snap-judgment discrimination that humans exhibit constantly — in hiring, in medicine, in daily interactions. Rather than fearing that AI will replicate human bias, she frames AI as a potential tool for embedding values like fairness and consistency more reliably than any individual human decision-maker could.


AI Coaching for Frontline Workers | Home Depot, Schneider Electric

In this panel session from Valence's 2026 AI & The Workforce Summit, HR leaders from The Home Depot and Schneider Electric discuss how AI coaching is moving beyond the corporate office to reach the people who need it most: frontline managers and hourly workers. In the Fortune 1000, 60% of employees are classified as hourly wage earners — yet these workers have historically had the least access to personalized development and real-time support. Tim and Tina share what they've learned from deploying Nadia in retail stores, manufacturing plants, and distribution centers, and make the case for why democratizing AI coaching is both a business imperative and an equity issue.

Tim Hourigan

former EVP of HR

Tina Mylon

Chief Talent and Diversity Officer

Alex McMurray

Chief Commercial Officer

Key Points

Key Takeaways

  • Frontline workers are the majority — and the most underserved: In the Fortune 1000, 60% of employees are hourly wage earners. At Home Depot alone, 90% of 500,000 associates are frontline workers. These populations have historically had the least access to personalized coaching and leadership development — and AI changes that equation.
  • Real-time, in-the-moment coaching is the most critical frontline use case: Frontline managers face immediate, high-stakes situations — a physical altercation at 2 a.m., an associate ignoring a customer — with no HR partner available and no time to consult a manual. AI coaching can provide culturally aligned, SOP-consistent guidance in real time, exactly when it matters most.
  • Customization to company context is what makes AI coaching effective: Both Home Depot and Schneider Electric found that integrating company values, leadership expectations, and role-specific use cases into their AI coaching deployments was the key to meaningful adoption and measurable behavior change.
  • The ROI case for frontline AI coaching is real — even when hard to quantify: A 1% improvement in customer engagement at Home Depot is worth $1.5 billion in top-line sales. Tim linked preventable management failures — including a union drive triggered by a poorly handled termination — to the cost of not having AI coaching available. The ROI is visible in what doesn't go wrong, not just what improves.
  • Equity of access to AI tools is both a moral and a business imperative: Tina framed frontline AI access as a DEI issue — ensuring that plant managers, shop floor workers, and non-exempt employees have the same opportunity to develop skills, build confidence, and grow as their office-based peers. Rising tide lifts all boats.
  • Technology integration is the key barrier — and the key opportunity: Frontline deployment requires meeting workers where they are: on handhelds, point-of-sale systems, and shared devices. Home Depot's vision is to integrate Nadia directly into the handheld devices associates already carry, piggybacking on existing infrastructure investment.

Full Transcript

The Frontline Opportunity: Who Gets Left Behind in AI Adoption

[00:00:00]

Alex: Parker talked this morning about the frontier and the frontline. And I think that so often in our roles, in offices, in front of computers, even when we don't mean to, we gravitate to thinking about the jobs of professionals and the days of professionals. We think so much about those leaders and our succession pipelines.

What we're really excited to talk about here is the frontline. In the Fortune 1000, 60% of the people employed are what you'd classify as hourly wage earners — someone in a Home Depot, on a manufacturing line, in a distribution center, a nurse, a restaurant worker. These are employed roles, not gig workers.

What we really want to talk about is: how are we bringing the power of AI to these people who have traditionally not been able to get this type of personalized development, personalized coaching — and more than anything, personalized support in really tough moments that they endure in what are probably tougher jobs than any of us have ever had.

Maybe we'll kick off with Home Depot. Tim, I'd love for you to share a little about the culture at Home Depot, and why I now know the obvious answer was that you wanted to pilot in the stores.

▶ Why Frontline Workers Are Underserved by Corporate AI Investments

In the Fortune 1000, 60% of employees are hourly wage earners — yet most enterprise AI and leadership development investments have historically focused on professional, office-based populations. Frontline workers in retail, manufacturing, distribution, and healthcare often face the most high-stakes, high-pressure day-to-day situations with the least access to real-time support, personalized coaching, or development resources. AI coaching platforms like Nadia are beginning to close that gap.

Home Depot: Scaling Leadership Development to 50,000 Frontline Managers

[00:01:41.040]

Tim: At Home Depot, the servant leadership motto is the inverted pyramid. At the top is our customer base, followed by our frontline associates. We think the most critical management position in the company is that frontline leader — the one in the unit who is probably the only manager on duty when they're on shift.

We have 500,000 associates, 90% of whom are frontline. So you start thinking: I've got 50,000 people in a leadership model that I need to touch in a way that lets them reflect the appropriate leadership dynamics in line with our culture. And the challenge is — how do you train 50,000 people?

We bring people into Atlanta, what we call the Store Support Center — not the Corporate Center, because our job is to support that frontline. We put groups through a cultural immersion, three-day program. But how many people can you do that for in a year? Not 50,000. Maybe 2,000.

That's where we got excited. Right before I left, I introduced Nadia to the team. I said: this is scalable. It's something that I could ensure each frontline leader has in their hand — a tool that helps them understand, 'What do I do in this situation?'

▶ How AI Coaching Scales Leadership Development Across Large Retail Workforces

Home Depot faces a leadership development challenge common to large frontline-heavy organizations: with 50,000 frontline managers across 2,500 stores, traditional training programs can only reach about 2,000 people per year. AI coaching through Nadia offers a scalable alternative — putting culturally aligned, role-specific coaching guidance into the hands of every frontline manager, regardless of location or shift, without requiring travel to a central training facility.

Schneider Electric: Expanding AI Coaching from Office Leaders to Plant Managers

[00:03:49.129]

Alex: Tina, Schneider Electric took the opposite approach — thinking about it from the salaried professional side first. Share a little about what you've learned from rolling out Nadia to salaried professionals, and what might be possible when you think about frontline roles.

[00:04:13.240]

Tina: We have around 12,000 to 13,000 managers overall; of those, almost half are frontline. We're 160,000 employees all over the world — a third in the Americas, a third in Europe, a third in Asia. We're a French-headquartered company.

We probably took a more top-down approach, and we're in earlier stages compared to Home Depot. But I just learned from the Valence team yesterday: we signed a contract to extend our partnership on the AI coaching side to all our managers. We started with 2,000 office leaders for testing, but now we're really tackling frontline — plant managers specifically. In the Americas alone, we have over 100 plants. Those plant managers will be critical in getting augmented support through Valence.

Part of our overall people transformation is this notion of leader as coach. It's not a new idea — Tim, you spoke about servant leadership, which has parallels. The whole idea is shifting from command and control — even in a very efficient supply chain and factory environment — toward coaching, learning, and skilling. An AI leadership coach plays a really nice complement to that mission.

▶ How Schneider Electric Is Extending AI Coaching to Frontline and Plant Managers

Schneider Electric began its AI coaching deployment with approximately 2,000 office-based managers and has since expanded to cover all managers globally — including plant and frontline managers across more than 100 facilities in the Americas. The expansion reflects a broader people transformation strategy centered on shifting from command-and-control management to a coaching culture. AI coaching is positioned as a scalable tool to support that cultural shift at every level of the organization.

In-the-Moment Coaching: Real Frontline Scenarios That Demand Immediate Support

[00:05:58.620]

Alex: When you think about these leaders — managing hourly workforces in stores that open at 4:30 a.m. and run through midnight — what does their day-to-day look like? And what do they need from a coach? Because it's so different from what someone managing professionals would need. Tim, share a little about what that looks like and why you're so excited for Nadia to help with those challenges.

[00:06:56.560]

Tim: We run 24/7, 365. The store opens at 6 a.m. and closes at 10 p.m. for the customer, and then after 10 we recover the store. 2,500 stores, and probably another 250 distribution centers running at the same tempo. There's usually one manager on duty.

Let me paint a scenario. It happens at 2:00 in the morning — mid-shift for the night crew. An associate gets into an argument with another associate. Then there's a physical altercation. The manager in that case probably has two years' experience. They've never had a fight in the receiving area. They also have a truck that showed up late — the freight came in late. Two people called out on the night crew. Do they run overtime or not? The store is going to open at 6 a.m. whether they fix it or not.

That assistant manager thinks, 'I'll call my boss.' Not many bosses are up at 2:00 in the morning. There's no HR partner available. They can't run to an office and pull up an SOP — the guys are fighting right now. That immediacy is where we think of coaching on a much bigger scale. You have to be able to say, 'Here's what you do' — and it's aligned with SOP, aligned with our culture, reduces risk. That's coaching in the minute. If they don't deal with that effectively, there's no way they're thinking about growing their career or developing skills to move to the next level.

Another example: an associate walks past a customer on the floor and doesn't say, 'What are you working on?' — which is the Home Depot approach, not 'Can I help you?' Open-ended, not closed-ended. I'm a manager, two years in, probably just promoted from that same associate role. This is a person I know. Do I address them? Do I turn and run because I'm afraid? Do I say, 'What the hell are you doing?' Or do I say, 'Nadia, what should I do?' If you can get in-the-moment, real-time coaching, think about how it changes that interaction. And from a business standpoint, that associate — coached effectively — will engage the customer next time.

We have a close rate at Home Depot of somewhere around 60%. That means 40% of the people who come in leave without buying. A 1% increase is worth $1.5 billion in top-line sales. We do 20,000 transactions per week per store. If I can add just 1% more transactions per store, that's the opportunity. That's why we got excited about Nadia. It gives me a scalable tool to do that.
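Tim's figures support a quick back-of-the-envelope check. This is a sketch using only the numbers he cites in the session; the average ticket value it derives is implied by those numbers, not an official Home Depot figure.

```python
# Back-of-the-envelope check of Tim's ROI math.
# All inputs come from his remarks; the implied average ticket is derived,
# not a confirmed company number.
stores = 2_500
transactions_per_store_per_week = 20_000

# A 1% lift in transactions per store:
extra_tx_per_store_per_week = transactions_per_store_per_week * 0.01  # 200
extra_tx_per_year = extra_tx_per_store_per_week * stores * 52

# Tim values that lift at $1.5B in top-line sales, which implies:
implied_avg_ticket = 1_500_000_000 / extra_tx_per_year

print(f"Extra transactions/year: {extra_tx_per_year:,.0f}")   # 26,000,000
print(f"Implied average ticket:  ${implied_avg_ticket:.2f}")  # ~$57.69
```

At roughly 26 million added transactions a year, the stated $1.5 billion works out to an average ticket of about $58 — a plausible order of magnitude for a home-improvement retailer, which is what makes the "just 1%" framing compelling.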

▶ Why Real-Time AI Coaching Is a Game-Changer for Frontline Leadership

Frontline managers face high-stakes situations — workplace conflicts, staffing emergencies, difficult customer interactions — with no HR partner available and no time to consult a manual. At Home Depot, a manager encountering a 2 a.m. altercation in the receiving area needs immediate, culturally aligned guidance, not a three-day training program. Real-time AI coaching through Nadia provides in-the-moment support that aligns with company SOPs and values, helping managers make the right call under pressure — and driving measurable business outcomes.

[00:11:00.279]

Tina: I love those use cases because it's all about practitioners. Most of us have to bring vision and land it on the ground.

From a year and a half of working with office managers, a couple of things I want to bring to plant managers. First: use case matters. When we first rolled out Valence, it was kind of a free-for-all — fun, everyone was experimenting. But it wasn't until we started feeding the model with Schneider-specific things — company values, leadership expectations, specific use cases — that we saw real results. And we learned from the data: when performance management time came around, when career conversations were happening, we saw spikes in usage for those use cases. We want to bring that to plant managers on the shop floor.

The other thing: we've been on a binge of digital and AI training — prompt-a-thons, classroom sessions. I'm exhausted by them. And we already know that prompting as a skill is, in some ways, on its way to obsolescence. Our plan for blue-collar and plant managers is to make sure upskilling is material — what are you building? What are you learning? How can you apply it in your role? We believe using an AI coach is the best way to actually build digital and AI skills, versus classroom prompting.

And a big piece of my role is on the DEI and inclusion side. Equity and access to tools like this is super important. People spoke earlier about democratization. If AI is used responsibly — with that great power comes great responsibility — it really makes a difference. Whether you do it with a smaller population and pilot first, or at the scale Tim is talking about, the cultural message you send and the upskilling across all levels of manager or employee matters enormously.

[00:14:05.240]

Tim: Integrating the culture is so critical. How would you make a decision in line with the corporate values or the purpose of the company? I agree 100%.

Making the ROI Case: What the Cost of Not Coaching Really Looks Like

[00:14:18.663]

Alex: Both of you in your talent roles have gotten pressure from the CFO and CEO around proving ROI. As we go to these larger, deeper populations where they have such a big impact on the bottom line — how would you both articulate to your peers and executives how you think about the ROI of enabling those frontline managers?

[00:15:07.899]

Tina: It's a sensitive topic and a hard conversation. I don't want to sugarcoat it — we do talk about workforce productivity and efficiency. Management and the board are interested in it. We're experimenting with AI augmenting or supplementing human-to-bot-to-agent processes in supply chain, finance, accounting, marketing, HR. Schneider Electric is not leading in that space — we're learning and experimenting.

But the ROI narrative isn't only about workforce efficiency and FTE reduction. We also tell the strategic narrative: enhancing impact, enhancing productivity — not just for efficiency, but for augmentation. We're in the energy tech space, and our business has grown significantly because of data centers. We make the cooling, we make the software supporting that growth. That, by definition, is changing every day. So the value proposition of AI helping people upskill and reskill is not just talking points — it's real. And it can complement the conversation about workforce productivity and efficiencies.

[00:17:09.460]

Tim: Our founder Bernie Marcus said: 'If you take care of the people, they'll take care of the customer, and everything else takes care of itself.' Our CFO did not go to that school. With him, it's hard to point to a direct return. There's no number I can show where 'this store did X, and therefore this was the outcome.' But what you can show is use cases — and the cost of not doing it. And those are real.

I had a union campaign in a store because a junior manager terminated a long-tenured associate for poor attendance. Come to find out, she was a single mom in an abusive relationship who had left with three children. The attendance issue was entirely in that last six months. Culturally, we would hope a manager would say, 'This isn't like you — what's going on?' The store knew. People knew. And I ended up with a union drive. Fortunately, we won.

But the message is: if she had Nadia, could Nadia have flagged it? 'This isn't like Tracy. Have you asked Tracy what's going on? Before you implement the rule as written, maybe this is a time for an exception. Ninety percent of the time you follow the rule; ten percent of the time you make judicious exceptions — and this is the right time.' That's where AI coaching can add real empathetic value. When I think about financial results, I can point to cases where management failure led to real costs. I can point to opportunities where better leadership drives volume. And then there's a trust factor you have to go with.

▶ How to Build a Business Case for AI Coaching for Frontline Managers

The ROI of AI coaching for frontline managers is often most visible in what doesn't go wrong. At Home Depot, a poorly handled termination of a long-tenured associate — one an AI coach might have flagged as an exception worth investigating — led to a union organizing drive. Tim frames the ROI case in two directions: the cost of management failures that AI coaching could have prevented, and the revenue opportunity unlocked when frontline leaders are consistently better equipped to engage employees and customers.

Solving the Technology Access Problem: Meeting Frontline Workers Where They Are

[00:19:21.480]

Alex: A lot of why we start with office populations is that it's a little easier — everyone is already fluent with technology, using Nadia on a laptop in a Microsoft ecosystem. When we get down to the frontline, we're dealing with so many different pieces of technology: point-of-sale systems, handhelds, portals. Across all these different modalities, people don't have consistent access. Can both of you speak to why it's worth working through that challenge — why it's so important that we're not leaving the frontline as AI have-nots?

[00:20:38.500]

Tina: For me, this is bigger than Nadia — it's about digital and AI access broadly. Back to my head-of-DEI role: it's about equity, leveling the playing field, and an equal chance for growth. Growth and skills may not look the same on a plant shop floor as in an office building, but we are all humans, and we all want to grow. We all want to grow the company. Equity of access to AI tools is just common sense and smart business. A rising tide lifts all boats. There's a set of fundamental AI and digital skills that everyone needs, whatever they're doing.

[00:21:54.599]

Tim: Every associate on the floor at Home Depot has a handheld device. Today, we have basic product knowledge on that — if you ask an associate about installing a water heater, they can pull up the item list you'd need. I see Nadia the same way, and actually more impactful. They're going to have a phone. If they ask 'How many items do we have in a particular SKU?' they can scan the barcode, take a picture, it all pops up. We've already made that infrastructure investment. I think we could piggyback on it — and it would make so much more sense.

▶ How Enterprises Can Deploy AI Coaching to Frontline Workers Without New Hardware

Deploying AI coaching to frontline and hourly workers doesn't require new device infrastructure. At Home Depot, every associate already carries a handheld device used for inventory and product knowledge lookups. Tim's vision is to integrate Nadia directly into those existing devices — piggybacking on an infrastructure investment already in place. This approach lowers the barrier to frontline AI adoption significantly and positions AI coaching as a natural extension of tools workers already use.

[00:22:46.759]

Alex: Well, thank you both for sharing your experiences and wisdom. And thank you all for staying with us — a couple more sessions, and then it's time for cocktails.

Tim: Nice job.

Tina: Thanks, Alex. Thanks, Tim.

AI Coaching for Frontline Workers | Home Depot, Schneider Electric

In this panel session from Valence's 2026 AI & The Workforce Summit, HR leaders from The Home Depot and Schneider Electric discuss how AI coaching is moving beyond the corporate office to reach the people who need it most: frontline managers and hourly workers. In the Fortune 1000, 60% of employees are classified as hourly wage earners — yet these workers have historically had the least access to personalized development and real-time support. Tim and Tina share what they've learned from deploying Nadia in retail stores, manufacturing plants, and distribution centers, and make the case for why democratizing AI coaching is both a business imperative and an equity issue.

Tim Hourigan

former EVP of HR

Chief Talent and Diversity Officer

Alex McMurray

Chief Commercial Officer

CHROs on AI Coaching, Agentic AI & the Board | Corning, Cushman & Wakefield

In this panel session from the Valence AI & The Workforce Summit, CHROs and senior HR leaders from Corning and Cushman & Wakefield — alongside a leading CHRO network convener — discuss what it actually looks like to lead through the AI transformation from the people side. The conversation covers 2026 AI priorities, the bifurcation between organizations leaning in and leaning out on AI, how AI coaching is reshaping manager development at scale, the CHRO's role in board-level AI governance, and what advice these leaders would give peers navigating the human-AI era. The session features candid stories, including a CHRO who used Nadia to prepare for her first meeting with a new CEO — and now says she would never go back to a human coach.

Holly Tyson

Chief People Officer

Jordana Kammerud

former SVP & CHRO

Larry Emond

Senior Partner, Modern Executive Solutions

Levi Goertz

VP of Client Solutions

Key Points

Key Takeaways

  • The AI adoption bifurcation is already happening — and the gap will widen: Larry has run over 100 CHRO workshops on AI agents and observes that organizations are splitting into two groups: those who have gone deep with at least one specialized agent and can't wait to expand, and those still treating AI as an experiment. There is almost no middle ground — and organizations leaning out are likely to stay out for another one to two years.
  • AI coaching is the unlock for manager development at enterprise scale: Cushman & Wakefield has approximately 10,000 managers across 60 countries. Holly frames Nadia as a democratization tool — giving managers who have never worked for a great manager access to best-practice coaching in their own language, with the ability to role-play real conversations before they happen. The GXO Logistics CHRO now uses Nadia daily and says she would never return to a human coach.
  • The window for strategic procrastination on AI has closed: Holly built her career on being a "fast follower" — watching others learn to walk before she ran. She has reversed that position on AI, warning that the pace of change is now too fast for organizations on the sidelines to realistically catch up.
  • Nadia's value compounds beyond coaching: Jordana at Corning identified three phases of Nadia's organizational impact — first as a coaching and development tool, then as a scalable business partner reinforcing company values across all HR functions, and finally as a catalyst for rethinking entire HR processes from first principles in a world with fundamentally different capabilities.
  • The CHRO's role at the board level is to frame AI as a strategic portfolio decision: Holly and Jordana both describe the board conversation as one of stakeholder stewardship — helping directors understand AI's implications for investors (competitive disruption risk), customers (value creation opportunity), and employees (sustainable, values-aligned transformation). The key is approaching it from both the growth opportunity and the culture preservation sides simultaneously.
  • The most actionable thing CHROs can do with board members is get them to use AI tools themselves: Larry's straightforward advice — find a way to hook senior leaders and board members into actually using Nadia or another agent personally. Sharp people who engage with the tools directly will get it immediately, and their advocacy changes the pace of organizational commitment.

Full Transcript

2026 AI Priorities: From Experimentation to Enterprise Scale

[00:00:00]

Levi: We've got people in very senior roles trying to deal with the challenge that, as Das mentioned, we're moving into a world where AI agents may effectively become employees and impact organizations heavily. The pace of technological change is very rapid. Can organizations keep pace with that change as you're managing not just people but also AI? And can we make sure we do that so it's not only cost cutting and productivity, but delivering value in other ways? As you head into 2026, what are some of the AI-related focuses your organizations are pursuing? Jordana, we'll start with you.

[00:00:54]

Jordana: Right before I left Corning, we were doing the strategic planning for the year. We had been early adopters of Nadia and of Corning GPT — which was ChatGPT brought in-house. Last year was our year of experimentation, piloting, and getting some comfort with it. This year was gearing up to be one in which we would take on bigger pieces of work from an agentification standpoint on the business side — to expedite revenue, profitability, and some of our bigger growth businesses.

From an HR side, we were really thinking about: how do we now augment the tools we have, like Nadia, to put it across all avenues of HR and talent management? And how do we create a more fertile ground for all of our people to really embrace capability with AI — beyond just dabbling in 'hey, can you ingest this data and give me some insights?' How do we help them start to know how to create GPTs, how to create mini workflows? That was the focus going into this year for Corning.

[00:02:08]

Levi: Okay, great. And Holly at Cushman & Wakefield?

[00:02:11]

Holly: We've got two approaches. One is launching, learning, and experimenting on a lot of AI things — particularly in our commercial real estate world, where a lot of what we do is heavily administrative. A real estate lease is 100 pages long. We're using AI capabilities for lease abstractions, to help people monitor and manage real estate portfolios — really using it as an augmenting tool to do the work.

On the HR side, Nadia is a big initiative for us. Individuals are using AI to experiment. We're a Copilot shop, but I'm a Claude user, and we're encouraging folks to build prompting capabilities broadly.

And then we are preparing heavily for the launch of agentic AI. HR is actually going to be the first function in the organization to move to agents. A lot of the work we're doing right now is launching and learning on the AI side while significantly prepping on the agent side — particularly around talent, culture, and governance, to make sure we're anticipating a whole range of scenarios and being prepared for them.

▶ How Leading Enterprises Are Structuring Their 2026 AI Priorities

Enterprise HR leaders entering 2026 are moving beyond experimentation toward deployment at scale. At Corning, the focus shifted from piloting Nadia and internal GPT tools to agentification across business functions and building AI fluency across the full workforce. At Cushman & Wakefield — a 54,000-person global firm — HR is slated to be the first function to transition to agentic AI, with parallel tracks running on AI tool adoption and agent governance preparation. Both organizations describe 2026 as the year of moving from learning to doing.

The AI Adoption Bifurcation: Leaning In vs. Leaning Out

[00:03:41]

Levi: I feel like the agentic piece might link to what you've seen, Larry. You meet with CHROs constantly, you were at Davos recently. Tell us what you're seeing not just for one company, but across the landscape.

[00:04:02]

Larry: A little context. For about a decade, I've been building one of the largest big-company CHRO communities in the world. I've done about 400 in-person meetings in the last decade, and the single most popular meeting topic has been HR technology and automation.

It wasn't until April of 2023, at a meeting in Zurich with global manufacturing CHROs, that the Schneider Electric CHRO, Charise Le, said, 'Is anybody playing with this generative AI thing?' That was less than three years ago — the first time AI was ever verbalized in one of those rooms.

Since then, I've done about 90 meetings with AI and HR on the agenda every time. And in the last year I've done 100 workshops specifically on AI agents for HR. Here's what's interesting: you can tell right away which group you're in. If they've gone down the road with one agent like Nadia, they can't wait to hear about the other agents. But most of the time, you realize within five minutes that they think you're describing witchcraft — even companies that should know better, like Airbus — and immediately it's 'Oh, but the data...' and they're leaning out. You realize they're probably going to be leaning out for another year or two. You either go in early or you don't. There's really nobody in the middle right now.

[00:05:52]

Levi: I assume you have predictions on where the ins and outs will go?

[00:05:56]

Larry: When you finally go in with one specialized agent, the meetings never talk about experiments in ChatGPT anymore. That was 1.0. It's always named, specified agents that do specific things. That's where the whole market is — unless you've never gone there, and then you don't even know what they are. 'You know Nadia?' 'What's Nadia?' I'm talking about some of the biggest companies in the world that just haven't touched it yet. The companies that lean in are the ones that are going to lean in more and more — and they're going to get way ahead of the others.

[00:06:24]

Holly: I've always been a personal fan of strategic procrastination. My sister walked first, and I sat and watched. About three weeks later, I got up and walked across the room — and then I was running before my sister was. There is value in being a fast follower, in letting others be on the bleeding edge. My hypothesis is that AI is going to change all of that. I believe those who are on the sidelines — as you see that bifurcation — are going to find it really hard to keep up or catch up. The world is changing too fast now. It's changed my point of view: if we procrastinate too long, we'll be left behind.

▶ The AI Adoption Bifurcation: Why There Is No Middle Ground in Enterprise AI

Organizations are splitting into two distinct groups on AI adoption, with virtually no middle ground. Those who have deployed at least one specialized AI agent — like Nadia for coaching — are eager to expand and can immediately understand the value of additional agents. Those who have not leaned in are likely to remain on the sidelines for another one to two years, often citing data security or governance concerns. Larry, who has run over 100 CHRO AI agent workshops, describes this bifurcation as self-reinforcing: early adopters compound their advantage, while laggards fall further behind as the pace of change accelerates.

[00:07:55]

Larry: I had one CHRO — German, who became CHRO of a big French company — she told me, 'They're going to whine.' And I said, 'When they do, tell them: why are you people whiners?' And it worked. I said, 'If this is all impossible, why are all these famous big global companies two years into this? It's not impossible.'

AI Coaching as Manager Development Infrastructure

[00:08:28]

Levi: Within this AI landscape, where does AI coaching specifically fit into your strategies? Holly, maybe first.

[00:08:37]

Holly: This is already starting to be a huge unlock for large global companies. Cushman & Wakefield has about 54,000 people across 60 countries. We all know: people join good companies, they leave bad managers. How do you learn to be a good manager? You work for a good manager. If you haven't worked for a good manager, Godspeed. What Nadia is doing is incorporating all of those best practices and using it to help inform people who've never experienced what great management looks like.

It's a huge democratization. We have about 10,000 managers, and I can't wait for them all to be using it. The language capability alone is incredibly powerful. I'm actually going to be showcasing it at our European sales conference on Wednesday — our head of Germany will demonstrate it in German, then switch to English. Live on stage.

There's something really powerful about being able to role-play in the moment before a real conversation. The hypothesis that Nadia will soon know your people better than you do — I fully agree with that. 'Last time you talked with Joe versus Sally, Sally got upset and Joe was fine. Let's role-play how Sally is going to take this news versus Joe.' Giving managers who've never seen what good looks like not only the knowledge of what should be approached, but a practice session — and then feedback — right before they go into that conversation. We're already seeing it be fantastic.

One quick example: I mentioned we're a Copilot shop. We tested a change management scenario — coaching someone on how to best exhibit our DRIVE values: driven, resilient, inclusive, visionary, entrepreneurial. Copilot came back with how to find something on a computer drive. Nadia came back with: 'Here's what you can talk to her about around entrepreneurial and inclusion.' The institutional knowledge that Nadia absorbs, learns, and feeds back — it's going to be a huge game changer for our managers.

[00:11:51]

Levi: That's great. And Jordana, how do you see AI coaching?

[00:11:54]

Jordana: Let me talk about my own development journey with Nadia and AI in general, because it actually reflects three distinct phases I think a lot of organizations go through.

Phase one: when we first started piloting, the obvious value was democratizing coaching — having high-quality, personalized coaching available at scale, in people's own language. But the aha moment came when we realized this was also taking away the fear of AI and agents for our entire workforce. One of our engineers — a very prickly engineer, the reason we put him in the pilot group — when we ended the pilot, he was upset because Nadia had become his new best friend. His wife kept asking, 'Who's Nadia?' We started to realize: you have this two-fold thing. Great coaching — and a way to soften change resistance and get people genuinely comfortable with AI.

Phase two: the realization that once you train Nadia to emphasize company values and enable specific organizational priorities, she becomes a super business partner — consistently, in all languages, in real time, reinforcing the things you would normally have to distribute through individual HR business partners with varying results. Not replacing, but amplifying.

Phase three: once you have these new tools and capabilities, you start asking whether you even need the processes you have today. Could you take a blank sheet approach to performance management, for example? Not 'how do we do a better performance appraisal?' but 'if we have this capability, how do we actually design a process that delivers the real objective — customized development of people?' Any HR process becomes an opportunity to strip it back to first principles: what is the intent, what is the simplest path, and how do we design it in this new world?

And going back to something said earlier — we have a responsibility as HR leaders. Most of us went into this work for a purpose-driven reason: to amplify people's capability. We owe it to people to include AI skills and capability in their development at a much faster, more progressive rate than we currently are. We owe that to people so they keep pace with this change, and have the ability to move into the roles of the future.

▶ How AI Coaching Transforms Manager Development at Enterprise Scale

AI coaching delivers three compounding benefits for large organizations. First, it democratizes access to high-quality, personalized manager development in any language — giving the 10,000 managers at Cushman & Wakefield, for example, access to coaching that was previously available only to a select few. Second, it functions as a scalable HR business partner, reinforcing company values consistently across functions and geographies. Third, it challenges organizations to rethink HR processes from first principles — asking not how to do existing processes better, but whether those processes are even the right ones given fundamentally new capabilities.

The CHRO Using Nadia to Prepare for Her First CEO Meeting — and Never Going Back

[00:15:42]

Larry: One of the clients you saw earlier was GXO Logistics — a 200,000-employee logistics firm. The CHRO is Corinna Refsgaard, a German who lives in Copenhagen, offices out of London for a US company. She called me in October: 'I've got a great story.' They had already rolled out Nadia to a few thousand managers, and she had a new CEO coming in. She'd met him in the interview process, but this was her first big meeting. And she decided to use Nadia to prepare.

She started talking to Nadia, asked it to tell her about the CEO, then they role-played the conversation extensively. She had the meeting. And she told me: 'It's like the greatest first meeting you could ever have with a new CEO. There's no way that happened without Nadia.'

Over the holidays, we were texting back and forth, and I asked: 'Are you continuing to use Nadia?' 'Yeah, every day.' 'Could you ever imagine using a human coach again?' She said, 'Why would I do that? Nadia is so much better at every aspect of it.'

Now you're talking about the CHRO of a 200,000-person company sharing publicly that she uses Nadia every day and that it helps her be a better leader. That kind of story, told from the very top, is going to cascade all the way down to the front line.

▶ Why the CHRO of a 200,000-Person Company Says She Would Never Return to a Human Coach

Corinna Refsgaard, CHRO of GXO Logistics, used Nadia to prepare for her first meeting with an incoming CEO — role-playing the conversation, researching the leader, and stress-testing her approach. She described it as the best first CEO meeting she had ever had. When asked months later whether she could imagine using a human coach again, her answer was direct: 'Why would I do that? Nadia is so much better at every aspect of it.' She now uses Nadia daily. Her public advocacy — as the CHRO of a 200,000-person global company — illustrates the cascade effect when senior leaders personally experience and endorse AI coaching.

What CHROs Should Bring to the Boardroom on AI

[00:17:21]

Levi: Let's talk about the role of the CHRO and HR as it relates to the board and the senior-most leadership. What role should HR play in helping the board of large companies know how to select, deploy, and utilize AI?

[00:17:43]

Holly: We have a new board chair who has just come from Blackstone, and he's arriving with the mindset of what the best companies do. The expectations are: let's talk more about culture, talent, and how we are reacting, responding, and proactively planning for a world of AI. It's refreshing not to have to sell up to a leader who is already there.

The best conversations at the board level should address both what we do with this capability to be more productive, and what the shock to organizational culture looks like — approaching it from both sides of the brain. We've created an AI council within the company covering governance, risk, culture, and organizational implications, and we'll be presenting that to the board.

The questions we're pressure-testing: when agents replace a particular role in whole, how much of that goes to EBITDA? How much gets redeployed to invest in growth? Like any portfolio strategy — it's not simply 'cut costs.' It's a portfolio rebalancing that focuses on growing insights and capabilities. In commercial real estate, we gain competitive advantage through insights. More AI capability means more insights per person, which means more client interactions, which drives more revenue.

On the culture side, it's making sure the boardroom conversation includes: who do we want to be, and are the decisions we're making on AI execution keeping that whole? Our DRIVE values, our vision and purpose — pressure-testing how we apply this new capability against: are we still demonstrating who we want to be as an organization?

[00:21:05]

Jordana: If you boil it down, the board is looking at this through the lens of stakeholder management. For investors: is this trend going to disrupt your core competitive advantage, or create disruption? For customers: are you taking advantage of this to enhance value creation, loyalty, and profitability? For employees and culture: are you approaching this sustainably — getting ahead of the concept so you can proactively minimize people disruption, which hurts the reputation of the company if done wrong?

For CHROs, it's an incredible moment to show up as a business leader and partner — thinking alongside your CEO and executive committee about how the board is framing this and how you need to answer it. And it's an incredible moment to amplify your values — tying the AI conversation to who you are as an organization and how that connects to the way you work and your culture. This one happens to land more squarely on the people side than previous disruptive trends. It's not an IT thing. It's a people, culture, and business thing simultaneously. So grab that moment and lead those conversations.

[00:23:05]

Larry: With the board and the executive committee, there's a simple, actionable thing: get them to play with the tools themselves. Find a way to hook board members into using Nadia, or whatever agent your company has built. These are sharp people. They just may not have engaged with the agents personally yet. Once they do, they'll get it right away. I think it'll move quickly when we get the most senior people to actually use it.

▶ How CHROs Should Frame AI for the Board: A Stakeholder Stewardship Framework

CHROs bringing AI to the boardroom should frame it through three stakeholder lenses: investors (will AI disrupt or enhance competitive advantage?), customers (is the organization capturing AI's value creation potential?), and employees (is the transformation being managed sustainably, with minimal people disruption and strong cultural alignment?). Holly at Cushman & Wakefield adds a portfolio rebalancing frame — positioning AI not as a cost-cutting tool but as a growth investment, asking what portion of AI-driven efficiency gains should be redeployed into insight generation, client capacity, and revenue growth.

Advice for HR Leaders Navigating the Human-AI Era

[00:24:19]

Levi: Final question for our audience — a lot of senior HR folks here. What is your piece of advice for navigating the human-AI era? Larry, Holly, Jordana.

Larry: Easy one for me. Nadia in 2028.

Holly: This is a key moment for almost all organizations in terms of managing through ambiguity. They say the number one competitive differentiator for CEOs is their ability to manage ambiguity. And I think it's our time as heads of HR to help sculpt the fog — to create tangibility in what can be a very intimidating change. Just play with it. It's really not hard. Bring tangible examples and help lay the groundwork for people to navigate this moment. Lean into it and have fun. Don't be a strategic procrastinator.

Jordana: There are so many smart people in this room with great advice to share. Just keep speaking to everybody, hearing what's going on, and thinking about how you want to apply it and drive it. Echoing everyone: just jump in. We're all figuring it out. Have some fun. Unleash your inner child. All that curiosity and creativity — get in there.

Levi: That's great. Bring the spirit, guys. Bring the spirit.

CHROs on AI Coaching, Agentic AI & the Board | Corning, Cushman & Wakefield

In this panel session from the Valence AI & The Workforce Summit, CHROs and senior HR leaders from Corning and Cushman & Wakefield — alongside a leading CHRO network convener — discuss what it actually looks like to lead through the AI transformation from the people side. The conversation covers 2026 AI priorities, the bifurcation between organizations leaning in and leaning out on AI, how AI coaching is reshaping manager development at scale, the CHRO's role in board-level AI governance, and what advice these leaders would give peers navigating the human-AI era. The session features candid stories, including a CHRO who used Nadia to prepare for her first meeting with a new CEO — and now says she would never go back to a human coach.

Holly Tyson

Chief People Officer

Former SVP & CHRO

Larry Emond

Senior Partner, Modern Executive Solutions

Levi Goertz

VP of Client Solutions

The Judgment Gap: Why Human Judgment Is the Scarcest Resource in the Age of AI

In this session from the 2026 Valence AI & The Workforce Summit, Prasad — former head of People Analytics at Google and Stanford researcher — presents one of the summit's most intellectually rigorous frameworks: the Judgment Gap. Drawing on 15 years of Google research, back-of-envelope math showing AI is 2,000 times cheaper than human cognitive processing, and three behavioral science foundations (Gary Klein on pattern recognition, Phil Tetlock on calibrated confidence, and Michael Polanyi on tacit knowledge), Prasad argues that the default AI playbook — automate the routine, upskill the workforce — is creating a dangerous gap between the high-stakes judgment required of tomorrow's leaders and the developmental pathways available to build it. He closes with three concrete organizational shifts and a single litmus test question: Can your people tell when AI is wrong?

Prasad Setty

Former VP of People Analytics

Key Points

Key Takeaways

  • AI is approximately 2,000 times cheaper than human cognitive processing — and that cost pressure will reshape organizations completely: A typical knowledge worker processes and produces roughly 125,000 words of information per day. When you compare the fully loaded cost of a global knowledge worker against the cost of AI token processing for equivalent volume, the ratio is approximately 2,000 to 1. Cost pressures of this magnitude have historically restructured entire industries — shipping, trade, mobile commerce — and will do the same to knowledge work.
  • The default AI playbook is creating a judgment gap: The standard enterprise AI approach — automate the routine, upskill the workforce, expand AI scope over time — carries a dangerous implicit assumption: that human value is residual, limited to whatever AI hasn't touched yet. As AI touches more, human value in this framing becomes increasingly marginal. More critically, the routine, repetitive work that AI is automating is exactly the work through which humans built confidence, pattern recognition, and professional judgment. Remove it, and the developmental pipeline for senior decision-making collapses.
  • Productive cognition is the real organizational asset — and judgment is now the binding constraint: Every organization is a system for converting thinking into value. At Google, productive cognition was the product of intellectual capacity, information abundance, and effective collaboration. In an AI-powered world, that formula changes: productive cognition becomes judgment quality amplified by AI, plus independent AI agent work. Judgment quality — not AI access — is now the differentiating factor.
  • Judgment forms through three mechanisms: contextual reps, outcome ownership, and apprenticeship: Research from Gary Klein (firefighters and ER nurses), Phil Tetlock (superforecasters), and Michael Polanyi (tacit knowledge transfer) converges on the same conclusion. Judgment requires varied experience in high-stakes contexts, calibrated confidence built through owning outcomes, and apprenticeship-based transfer of knowledge that cannot be codified. Organizations must actively design all three — they will not emerge automatically from AI-assisted work.
  • Three concrete shifts to the default AI playbook: close the loop, design the gradient, separate review from correction: Close the loop means creating structured retrospectives so people see the outcomes of their decisions and reflect on why they made them. Design the gradient means building deliberate developmental progressions — simulations, case studies, and AI-assisted practice scenarios — so that new professionals build judgment reps before facing high-stakes decisions. Separate review from correction means using the review of AI-generated work as an apprenticeship moment, not a compliance check.
  • Every AI interaction either sets people up for dependency or development — and the litmus test is one question: Can your people tell when AI is wrong? Two years ago, most knowledge workers could. Today, some can, sometimes. In the future, it depends entirely on deliberate organizational design. If the default is 'ask AI and act on the answer,' judgment atrophies. If AI interactions are designed as development moments, judgment compounds.

Full Transcript

The Alia Decision: What High-Stakes Judgment Requires

[00:00:01.056]

Prasad: Alia Jones is a VP. She's in one of your organizations, and she has to make a critical promotion decision. She has an important role to fill, and she's down to two candidates. Her gut says Molly — her protege, someone she's worked with for two years. Alia knows exactly how Molly thinks and how she responds to pressure. There's a new algorithm that her HR department has come up with, and the algorithm says she should go with Ed. Ed is a 96% match for this role compared to Molly's 83%. The algorithm also says Ed has a much wider cross-functional network — critical for this role — but also that Ed is a higher flight risk than Molly.

That is the context that the Harvard Business Review came up with for a case study they asked me to apply. The question is: how does Alia make this decision? What is her capacity to make it, and how did she form that capacity? Whether it's an important people decision like this or a business decision, we all want our leaders to be excellent at making good calls when the stakes are high.

In my 15 years at Google, these are exactly the kinds of questions we were asking ourselves. What enables organizations to make great people decisions, and what enables people to make great business decisions? You may have heard of some of the work my team did — the science of hiring, Project Oxygen on the role of managers, Project Aristotle on psychological safety and team effectiveness. What we brought was curiosity and rigor to important questions. For every project you've heard about, there were at least a few shared only internally at Google, and several others that went nowhere — questions we asked without finding satisfactory answers. We didn't get bogged down by failure; we persisted in thinking about important questions.

▶ How Human Judgment Is Tested in High-Stakes Promotion Decisions

A Harvard Business Review case study frames the challenge of human judgment under AI: a VP must decide between promoting a trusted protege (83% algorithmic match) and a higher-scoring external candidate (96% match) with a stronger cross-functional network but higher flight risk. The case illustrates what judgment requires — calibrated confidence, contextual pattern recognition, and the willingness to own an outcome — and why those capacities cannot be delegated to an algorithm. This is the kind of decision that will define leadership value in an AI-assisted organization.

Productive Cognition: What Organizations Are Really Optimizing For

At Google, we came to believe that all of this research pointed toward one underlying concept. I want to introduce a term I think of as productive cognition. Every organization, regardless of industry, is a knowledge organization. Everything is getting more technical, more knowledge-intensive. Every knowledge organization is a system for converting thinking into value. That is where productive cognition comes in.

Productive cognition is the cumulative intellectual capacity across all of your people — embodied in your products, services, and customer relationships. The 'productive' part matters: it doesn't just total intellectual capacity; it subtracts every place where you have friction, misuse, underutilization, or suppressed voice. That is the full picture.

Based on what I've told you about the Google organizational model, this is implicitly what we were solving for. Productive cognition at Google was the product of three factors: intellectual capacity, information abundance, and effective collaboration. These were multiplicative — not additive. They build on each other. If one term is weak, the system collapses. Every organization has its own recipe. What matters is internal consistency and that multiplicative effect.
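
The multiplicative claim can be made concrete with a toy comparison. The three factor names come from the talk; the function shapes and the 0-to-1 numeric weights below are invented purely for illustration.

```python
# Toy illustration of the multiplicative model of productive cognition.
# Factor names come from the talk; the 0-1 scores below are invented.

def multiplicative(capacity, information, collaboration):
    # The talk's framing: the factors compound, so a single weak term
    # drags the whole system down.
    return capacity * information * collaboration

def additive(capacity, information, collaboration):
    # A naive averaging model, shown only for contrast.
    return (capacity + information + collaboration) / 3

strong = dict(capacity=0.9, information=0.9, collaboration=0.9)
weak_collab = dict(capacity=0.9, information=0.9, collaboration=0.1)

print(multiplicative(**strong), additive(**strong))
print(multiplicative(**weak_collab), additive(**weak_collab))
```

With one weak factor, the additive score still looks healthy (roughly 0.63) while the multiplicative score collapses to roughly 0.08, which is exactly the 'if one term is weak, the system collapses' point.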

▶ What Is Productive Cognition? A Framework for Measuring Organizational Thinking

Productive cognition is the cumulative intellectual capacity of an organization's people, as embodied in its products, services, and customer relationships — minus every point of friction, misuse, or suppression. Developed by former Google People Analytics head Prasad, the concept treats every organization as a knowledge organization: a system whose core function is converting thinking into value. At Google, productive cognition was the multiplicative product of intellectual capacity, information abundance, and effective collaboration. AI changes the equation by making thinking abundant — shifting the binding constraint from capacity to judgment quality.

AI Is 2,000 Times Cheaper Than Human Cognitive Processing — and That Changes Everything

Here's where the formula changes. AI is making thinking abundant, and that changes the dynamics completely. Here is some back-of-envelope math I did a few weeks back. Take a typical knowledge worker — regardless of role or domain. A workday consists of responding to emails and Slack threads, initiating some, composing documents, crunching data, attending meetings. If you look at all the information processed and produced through those interactions, it comes to roughly 125,000 words per day.

Here's the punch line. When you look at the fully loaded cost of a global knowledge worker and compare it to the cost of AI token processing for equivalent volume, the ratio is approximately 2,000 to 1. AI is 2,000 times cheaper than human cognitive processing. I want to acknowledge this is a simplistic calculation — human cognition is much richer, because it includes problem-solving, not just output generation. But cost pressures like this have repeatedly changed economies completely. When we had shipping containers. When we had mobile phones. Those cost pressures ensured that existing, incumbent, expensive structures gave way. The same will happen here.
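
The talk gives only the inputs (about 125,000 words a day) and the conclusion (about 2,000 to 1). A sketch of the intermediate arithmetic might look like the following; only the word count and the rough final ratio come from the session, and every dollar figure is a hypothetical placeholder.

```python
# Back-of-envelope reconstruction of the ~2,000:1 ratio described above.
# Only WORDS_PER_DAY and the rough final ratio come from the talk;
# the tokenization rate and every dollar figure are assumptions.

WORDS_PER_DAY = 125_000       # information processed/produced per worker per day
TOKENS_PER_WORD = 1.33        # rough English tokenization rate (assumption)

# Hypothetical fully loaded cost of a global knowledge worker:
# ~$160k/year spread over ~245 working days.
worker_cost_per_day = 160_000 / 245

# Hypothetical blended AI price of $2 per million tokens.
tokens_per_day = WORDS_PER_DAY * TOKENS_PER_WORD
ai_cost_per_day = tokens_per_day / 1_000_000 * 2.00

ratio = worker_cost_per_day / ai_cost_per_day
print(f"worker ≈ ${worker_cost_per_day:,.0f}/day, AI ≈ ${ai_cost_per_day:.2f}/day")
print(f"cost ratio ≈ {ratio:,.0f} to 1")
```

Change either placeholder price and the ratio moves, but across plausible values it stays in the thousands, which is the structural point the talk is making.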

▶ Why AI Being 2,000 Times Cheaper Than Human Cognition Is a Structural Economic Force

A back-of-envelope calculation by former Google People Analytics head Prasad estimates that a typical knowledge worker processes and produces approximately 125,000 words of information per day. When the fully loaded cost of that cognitive output is compared to the cost of equivalent AI token processing, the ratio is approximately 2,000 to 1 — AI is 2,000 times cheaper. While human cognition is richer than this calculation captures, cost pressures of this magnitude have historically restructured entire industries. Organizations should treat this as a structural economic force, not a marginal efficiency improvement.

The Judgment Gap: Why the Default AI Playbook Is Dangerous

What I'm seeing is organizations all around adopting a default playbook because they implicitly understand some of this. The typical playbook: automate the routine — not just robotic process automation, but deeper because now we have AI. Upskill all our people to use AI tools. Keep expanding the scope over time, since AI will only get better.

There's a substrate to this thinking that I don't think is bad, but I do think is, in some ways, dangerous. The implicit assumption is that the value of humans should be restricted to things that AI hasn't touched yet. As AI keeps touching more and more, the value of humans in this framing becomes increasingly residual. And that I find very unnerving.

Where will human work persist? The AI can generate 21 PowerPoints — but your leadership team, your CEO, your board still has the same 24 hours. Someone still has to filter all of this and make calls. Second, we still need to take accountability for high-stakes decisions. Organizations have tried to blame AI tools for bad outcomes — courts and public opinion have firmly rejected that. Third, there are many high-stakes decisions, like Alia's, where we want humans to wrestle with what good looks like. AI can generate possible outputs. But we want humans to wrestle with them and have the conviction of owning what they decide.

A lot of the work immediately ripe for automation beyond the old RPA is work with few degrees of freedom — repetitive transactional work where you can make slight deviations around rules and processes. But here's what breaks when that routine work disappears. That work built confidence. We all worked through imposter syndrome by doing the work, knowing the work, and knowing that we knew it. Repetition built recognition — by doing things many times, we developed instinct for where failure might occur. And when you specialize, you develop your distinctive signature of taste. All of that repetitive work, if automated by AI, creates a gap: the demands on tomorrow's leaders are going to be higher, but the preparation gets weaker. That is what I call the judgment gap.

▶ What Is the Judgment Gap and Why Should HR Leaders Be Concerned?

The judgment gap is the widening distance between what high-stakes leadership decisions require — pattern recognition, calibrated confidence, ownership of outcomes — and what the AI-assisted workplace actually develops in people. As AI automates the routine, repetitive work through which professionals historically built confidence, recognition, and taste, the developmental pipeline for senior judgment collapses. Coined by former Google People Analytics head Prasad, the judgment gap describes an organizational risk that compounds silently: leaders are asked to make increasingly consequential decisions with decreasing experiential preparation.

How Judgment Forms: Three Research Foundations

If judgment becomes the binding constraint, then we need to understand how it forms. There is substantial research here, and I'll summarize three foundations that are useful as we think about how to design for judgment.

Gary Klein: Pattern Recognition Requires Varied Contextual Exposure

Gary Klein studied firefighters and ER nurses who respond to critical situations with very little time — no opportunity to run a spreadsheet and think through alternatives. What he found is that they develop an instinct for what to do through exposure to varied contexts in very different environments, until recognition becomes instinct. You need those reps, and they have to be varied: enough different contexts that people learn to recognize patterns quickly under pressure.

Phil Tetlock: Calibrated Confidence Is More Valuable Than Certainty

Phil Tetlock, a psychologist at the University of Pennsylvania's Wharton School, studied forecasters who predict geopolitical events. The best forecasters were not people with high confidence in their specific outcomes — they were people who knew the limits of their confidence. The example: Tom Brady predicting the Super Bowl said, 'Both teams are good. On any given day, either could win. But if they played ten times, I think the Seahawks would win six times and the Patriots four.' That is calibrated confidence. You don't want salespeople saying they'll win 100% of bids. You want them to say: 'I know which bids are higher probability, and I know where to put more work into the proposal.'

Michael Polanyi: Tacit Knowledge Transfers Through Apprenticeship

Michael Polanyi studied how expert craftsmen transfer knowledge. His work on tacit learning is probably familiar to many here. His core insight: we all know more than we can tell. The surgeon's hands know more than the surgeon can articulate. Apprenticeship — observing someone at work, having them shape your experience — is the primary mechanism for transferring that tacit knowledge. It cannot be codified or taught from a slide deck.

Where all of this converges: Gary Klein's contextual embedding helps with pattern recognition and fast decisions. Phil Tetlock's outcome ownership helps with calibrated confidence. And Polanyi's apprenticeship is the primary vehicle for tacit knowledge transfer. These are the three pillars of judgment development — and none of them happen automatically in an AI-first workplace.

▶ Three Research-Backed Foundations for Developing Human Judgment at Work

Research from three leading scholars converges on how professional judgment actually develops. Gary Klein's studies of firefighters and ER nurses show that pattern recognition requires varied exposure to high-stakes contexts until recognition becomes instinct — reps matter, but variety is essential. Phil Tetlock's work on superforecasters shows that the best judgment comes from calibrated confidence: knowing the limits of what you know, not certainty. Michael Polanyi's work on tacit knowledge shows that the most critical professional knowledge — the kind that experts know but cannot articulate — transfers primarily through apprenticeship. Organizations designing for AI must actively engineer all three.

Three Concrete Shifts to the Default AI Playbook

Shift 1: Close the Loop

People need to know the outcomes of the decisions they make. In complex organizations, this is genuinely difficult — outcomes are delayed, attribution is unclear, information is hard to collect and return. But outcome transparency alone is not sufficient. What you need in addition is structured reflection: when you make a decision, capture why you made it, what risks you considered, what your confidence was in the outcome. A lightweight decision journal. Then, months later — with your manager or an HR partner — revisit the calls you made. Which ones worked? Which didn't? Why? What did you learn? AI coaching tools like Nadia can help with every one of these structured reflection processes.
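
One possible shape for the 'lightweight decision journal' described above, sketched in Python. The fields mirror what the talk suggests capturing (rationale, risks considered, confidence) plus slots for the later retrospective; every field and method name here is illustrative, not anything prescribed in the session.

```python
# A minimal sketch of a decision-journal entry for structured reflection.
# All names here are illustrative, not from the talk.
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionEntry:
    decision: str                # what was decided
    rationale: str               # why it was made, captured at the time
    risks_considered: list[str]  # risks weighed before deciding
    confidence: float            # 0.0-1.0 calibrated estimate
    decided_on: date = field(default_factory=date.today)
    outcome: str | None = None   # filled in months later
    lessons: str | None = None   # what the retrospective surfaced

    def close_the_loop(self, outcome: str, lessons: str) -> None:
        """Record the retrospective, completing the feedback loop."""
        self.outcome = outcome
        self.lessons = lessons

entry = DecisionEntry(
    decision="Promote the internal candidate",
    rationale="Deep team context; strong pattern match to the role",
    risks_considered=["narrower cross-functional network"],
    confidence=0.7,
)
# Months later, the structured retrospective closes the loop.
entry.close_the_loop("Ramped in one quarter", "Network gap closed via sponsorship")
```

The point of the structure is the second touch: an entry is not complete until `close_the_loop` is called, which is what turns outcome transparency into calibrated confidence.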

Shift 2: Design the Gradient

You cannot thrust someone into VP-level decisions from day one. You cannot say, 'I automated all the routine work — good news, new graduate, come be a VP now.' You have to design the developmental progression deliberately. Use AI to create simulations, case studies, and scenarios that build judgment reps. Design those reps to transfer not only the explicit knowledge codified in your business processes, but the implicit knowledge of how your organization works and the tacit knowledge that is even hard to articulate. This is the challenge — and where a partnership with Valence can genuinely help your people.

Shift 3: Separate Review from Correction

Imagine that everyone in your organization is going to be reviewing AI-assisted work. There are two approaches a leader can take. One: 'This isn't compliant with our format, let me correct it' — a textbook approach focused on right answers. The other: use the review as an apprenticeship moment, asking 'what information did you take into account? What risks did you consider?' That second approach transfers the judgment, not just the correction. That is the addition to the default playbook that I would have you think about.

▶ Three Organizational Shifts to Develop Human Judgment in an AI-Assisted Workplace

Former Google People Analytics head Prasad identifies three concrete departures from the standard enterprise AI playbook that are necessary to develop human judgment alongside AI capability. First, close the loop: create structured retrospectives so people see the outcomes of their decisions and reflect on why they made them — lightweight decision journals reviewed months later with a manager or coach. Second, design the gradient: build deliberate developmental progressions using AI-generated simulations and scenarios, not just on-the-job experience. Third, separate review from correction: use the review of AI-generated work as an apprenticeship moment, not a compliance check. AI coaching tools like Nadia can support all three shifts.

Measuring Judgment Development — and the One Litmus Test Question

Measurement matters here. You can look at activities — how much work is going into pure coordination versus judgment-oriented work. You can look at reps and data from AI coaching tools. You can assess human calibration using what is called a revealed preferences survey: ask people, 'When you're facing a difficult decision, whose judgment do you rely on?' Measure the evolution of that answer over time. And look at outcomes themselves — how quickly are people course correcting? How quickly are they transferring learning from one situation to another across high-stakes business decisions, not just people decisions?

Let's get back to Alia. There's a default playbook version that concerns me: go all-in on AI implementation, and future leaders like Alia are left with no basis for making these calls — exposed only to AI recommendations, acting as signatories to decisions they don't understand, getting no outcome feedback, receiving weak coaching. Or you could have a judgment-enhanced version where leaders are getting to exercise real stakes, making decisions, receiving the right mentorship and coaching — set up to succeed and thrive in an AI-enhanced world.

Here is my closing thought. We talk always about automation or amplification. But I want you to think about two other dualities that come with AI. Every AI interaction either sets your people up for dependency or for development. There is work we want to offload cognitively to AI — that is appropriate. But if people default to 'go to AI, get the answer,' judgment atrophies. Design AI interactions as development moments, and efficiency will follow.

And here is the single litmus test question you can ask your people tomorrow, six months from now, a year from now: Can you tell when AI is wrong? Two years ago, most of us could. Today, some of us can, sometimes, for some situations. In the future — it depends entirely on us. Thank you.

▶ The Single Litmus Test for Whether Your Organization Is Developing or Eroding Human Judgment

Former Google People Analytics head Prasad closes with one question that reveals whether an organization is building or eroding human judgment in the age of AI: 'Can your people tell when AI is wrong?' Two years ago, most knowledge workers could answer yes. Today, the answer depends on the individual and the situation. In the future, the answer will depend on deliberate organizational design choices made now — whether AI interactions are designed as development moments or dependency moments. Every AI interaction, according to Prasad, sets people up for one or the other.

[00:28:07.570]

Moderator: Thank you, Prasad.

Taste, Agency & AI: Scott Belsky on the Future of Organizations

In this fireside chat from Valence's 2026 AI & The Workforce Summit, Adobe CPO, Benchmark partner, and legendary early-stage investor Scott Belsky joins Valence CEO Parker Mitchell for a wide-ranging conversation on what AI means for how organizations work, how humans develop, and how enterprises must change. Scott introduces the concept of organizational debt, explains the law of displacement speed in AI, argues that AI is enabling a new era of radical personalization, and closes with his conviction that taste and agency — not technical skill — will define human advantage in an AI-powered world.

Scott Belsky

Author, "The Messy Middle"

Parker Mitchell

CEO

Key Points

Speakers

Scott Belsky — Chief Product Officer, Adobe; Partner, Benchmark; Founder, Behance. Scott Belsky is one of Silicon Valley's most respected product leaders and early-stage investors. As CPO of Adobe, he led the company's AI creative tools strategy and managed M&A. As an investor and partner at Benchmark, he has backed companies including Uber, Pinterest, Warby Parker, and Valence. He is also the founder of Behance, the world's largest platform for creative portfolios, and the author of multiple books on creativity, organization, and the future of work.

Parker Mitchell — Co-Founder and CEO, Valence. Parker leads Valence, an enterprise AI coaching company whose platform Nadia is deployed across global Fortune 500 organizations. He moderated this fireside chat, drawing on his experience building AI coaching infrastructure for enterprise talent and leadership development.

Key Takeaways

  • Organizational debt is as dangerous as technical debt — and AI will force a reckoning: Just as software accumulates technical debt from deferred decisions and architectural shortcuts, organizations accumulate organizational debt from decisions that should have been made but weren't. Scott Belsky argues that AI is now the most powerful organizational debt collector in history, collapsing outdated processes faster than any prior technology. The leaders who thrive will be those who proactively identify and eliminate organizational debt before AI exposes it.
  • The law of displacement speed is creating two inevitable outcomes — commoditization and vertical operating systems: When technology changes fast enough to displace incumbents rapidly, Scott Belsky argues two things always follow: commoditization of the underlying capability (in this case, AI tokens), and the rise of vertical operating systems that embed the capability at the layer where it creates the most value. Valence, in Scott's view, is building the vertical operating system for the people function — the layer where AI coaching creates compounding organizational value.
  • Novelty must precede utility — and enterprise leaders need to protect space for play: Scott Belsky's principle that 'novelty precedes utility' runs headlong into the enterprise reality that urgent always outweighs important. His prescription: protect dedicated space for play and pilots, and redesign incentives so that teams are rewarded for learning — not penalized for failed experiments. The measure of a pilot should never be utilization; it should always be learning.
  • AI is restoring the hyper-personalized life humans evolved for — but on our own terms: For most of human history, people lived in small communities where they were deeply known — their strengths, preferences, and relationships recognized by those around them. The Industrial Revolution made everyone anonymous. Scott Belsky believes AI is restoring that personalization, but in a new and better form: one where individuals define their own preferences, share their own data on their own terms, and are known by AI systems they trust.
  • AI coaching enables the kind of feedback humans cannot easily give each other: Scott Belsky argues that feedback between manager and employee is inherently compromised — by defensiveness, perceived bias, thin-slice judgment, and the social dynamics of hierarchy. AI coaching changes this because it is built from the residue of a person's own actions and is genuinely oriented toward their development, not a manager's convenience. It is much harder to be defensive with feedback that feels like an extension of yourself.
  • Taste and agency are the ultimate human advantages in an AI world: As AI handles more process and execution, Scott Belsky argues the two distinctly human capabilities that will define individual and organizational success are taste — the filters and discernment that determine which of the infinite AI outputs are actually good — and agency — the willingness to believe something is possible when it seems unreasonable, and to pursue it. These are not technical skills. They are capacities that organizations must actively develop in their people.

Full Transcript

Organizational Debt: The Hidden Drag on Enterprise Performance

[00:00:00]

Parker: If you're in the startup world and you come across Scott Belsky, you either know him as the legendary investor — Uber, Pinterest, Warby Parker, a whole bunch of early seed investments —

[00:00:15.036]

Scott: Valence.

Parker: Valence as well. And the Chief Product Officer of Adobe, formerly the founder of Behance. But I want to go even further back. You're something of an organizational design and management nerd at the starting point. Can you tell us about your early days at Goldman Sachs and how that interest came about?

[00:00:38.703]

Scott: The most important career experience I ever had was after a year and a half or two years doing the traditional finance thing at Goldman. I realized this was not for me. I went to my manager, a woman named Catherine, and said, 'I think I need to go do something else.' She said, 'What would your dream job be if you stayed at Goldman?' And I said, 'It would be really cool to understand how the organization works.' I was interested in management — which, of course, as a 23-year-old kid sounds a bit naive.

I ended up getting a role as an analyst on a team called Pine Street, in the executive office, that was all focused on organizational improvement and leadership development for the most senior population of the firm. For three years, I was learning from practitioners — people who would come in to do executive coaching with leaders, but also the hedge funds that would get all this capital and have no management experience. They were total chaotic messes. To learn about the importance of leadership development, accelerating career paths, and everything else at such a young age was an incredible education.

[00:01:54.900]

Parker: And you got a chance to apply that in practice after your startup was acquired by Adobe. A 7,000-person organization. Was anything surprising as you made that transition into a large company?

[00:02:11.360]

Scott: I've now done startup, small company, large company, and back to a smaller team again. There's always this balance — when you have a problem, you throw process at it. And when you're a really great manager, you kill process. It's this incredible tension between process creation and process destruction.

At Adobe, I was most effective when I found a way to collapse the talent stack — get teams more truncated and more direct with one another. And whenever I was destroying a past-due process, I also became highly convinced that organizations don't only suffer from technical debt — the accumulation of bugs and bad technical decisions that plague a product. Organizations also suffer what I call organizational debt, which is the accumulation of decisions that should have been made but weren't.

At Adobe, my favorite thing was to prompt decisiveness. People were always uncomfortable with decisiveness because the easiest decision is to not make a decision yet. I was once inspired by Ken Chenault — who rose to the very top of American Express — when he was asked what most helped him rise through a huge corporate bureaucracy and be known as an innovator. He thought for a moment and said, 'At the end of the day, I would make my bosses make decisions.' That notion of clearing organizational debt, prompting decisions or at least a deadline for decisions, keeping that ship moving inch by inch — that may be one of the greatest things you can do in a bureaucracy.

▶ What Is Organizational Debt and Why Does It Cripple Large Companies?

Organizational debt is the accumulation of decisions that should have been made but weren't — the enterprise equivalent of technical debt. Scott Belsky, former Chief Product Officer at Adobe and partner at Benchmark, introduced this concept based on his experience leading product and M&A at a 25,000-person technology company. Organizational debt compounds silently: every deferred decision becomes a process, every process becomes a constraint, and every constraint slows the organization's ability to respond to change. AI, Belsky argues, is now the most powerful organizational debt collector in history.

The Law of Displacement Speed: What Rapid AI Change Actually Means

[00:04:11.419]

Parker: Ethan Mollick noted that organizations are not designed — they evolve through countless small decisions that plant more seeds than they weed. AI, I think, accelerates that. But I want to introduce your story on the edge of AI and watching the power of the models grow. What was your first experience with large language models, and what was your emotional reaction?

[00:04:50.519]

Scott: I'm always playing with the latest tools and trying to figure out where all this is going. When early LLMs emerged, and when some of the early imaging models emerged, these were toy things — silly and unreliable and full of hallucinations. I remember the days we would ask a simple math problem to a large language model and it would give the wrong answer. Then you'd say, 'No, it's not.' And it would say, 'Oh, you are right. I am wrong.' And you could even trick it back to being wrong again.

[00:05:26.120]

Parker: And then it would make the same mistake again.

Scott: Exactly. What I find fascinating about this moment is what I call the law of displacement speed — when you have such a rapid sense of displacement. A year and a half ago, everyone was saying Google totally missed AI, Anthropic was probably going to run out of money, and OpenAI was going to rule the world. Here we are today: Google is winning, Anthropic is winning enterprise, OpenAI may face pressure. And three months from now, I bet it's different again. Everyone is outpacing one another.

When the law of displacement speed kicks in, my belief is two things happen. First, rapid commoditization — they keep making it better and more competitive, which seems like a path toward commoditization of tokens. Second, it goes more central at what I call the operating system level. The operating systems of our personal lives are Android and iOS. The operating systems of the enterprise are going to be vertical operating systems — AI-driven, built for specific functions. That's why I was excited about what Valence is building. The people function is going to have a vertical operating system. And I think that's where the future is going.

▶ The Law of Displacement Speed: Why Rapid AI Change Leads to Commoditization and Vertical Operating Systems

Scott Belsky, partner at Benchmark and former Adobe CPO, describes the law of displacement speed as the economic and competitive dynamic that emerges when technology changes fast enough to displace incumbents within months rather than years. He argues two outcomes are inevitable when this law kicks in: first, rapid commoditization of the underlying technology (in AI's case, token processing); second, the rise of vertical operating systems — AI-native platforms built for specific organizational functions. Belsky sees enterprise AI coaching as an early example of a vertical operating system for the people function.

Novelty Before Utility: How to Create Space for AI Adoption

[00:06:52.420]

Parker: One of the ideas you've talked a lot about is that novelty precedes utility. For most people's experience of AI, that's come true. How have you seen that play out in the enterprise, which doesn't exactly welcome novelty as a step before utility?

[00:07:09.699]

Scott: The enterprise is tricky, because urgent always outweighs important. The gravitational force of operations is so strong that you never have a free hour to just experiment with a new AI tool. We all just go through our day. And of course, if you don't make time for the longer-term important things, they never get considered. It is absolutely critical to our organizations and our own careers that we refactor and reimagine how we work. How do you make time for that?

Play is part of it. You have to allow yourself — and, more importantly, your team — to try things. And you have to protect them. If they decide to use a tool in a new way, give them a different KPI than success: the KPI needs to be learning. Did we try this new review process with this new tool? Don't penalize the team if it didn't go well. Reward them if they learned something. The constructs of play and protection — always having pilots in an organization — are really important principles for getting socialized with new technology.

Parker: And in the pilot, the measure of success shouldn't be utilization. It has to be learning — how did this new approach change how I do my work?

Scott: Exactly. Change starts with socialization. When I led M&A at Adobe, I learned quickly that you can't just come in with a big, bold proposal, get people to nod, and execute it. It's a long process of getting people comfortable with an idea until it becomes obvious. But when it started, it was anything but obvious. Similarly with new technology — you have to socialize it. Play, pilots, and different incentives are part of that.

▶ Why Play and Protection Are the Foundations of Enterprise AI Adoption

Scott Belsky, former Chief Product Officer at Adobe and partner at Benchmark, argues that enterprise AI adoption fails when the first metrics applied are utilization and ROI rather than learning. His prescription: create protected space for play — deliberately shielded from standard KPIs — and reward teams for what they learned from an AI experiment, not whether it succeeded by traditional measures. Belsky draws the parallel to M&A socialization: change doesn't happen through bold mandates; it happens through sustained exposure until a new approach becomes obvious.

AI Personalization: Being Known on Your Own Terms

[00:09:11.860]

Parker: One of the words you've talked a lot about in the AI era is personalization. You believe AI is going to lead to a new world of personalization in both consumer and enterprise. What will drive that, and why is it so exciting?

[00:09:28.000]

Scott: One of my cardinal beliefs in all of technology and investing is that we, as humans, are longing for the way things once were — but with more scale and efficiency. For the other 300,000 years of human history that we're hardwired for, we lived in small towns and villages. We were known by people. They knew our strengths and weaknesses. They knew our children's names. They knew our likes and dislikes. We were living a hyper-personalized life.

Then, with the Industrial Revolution and everything that followed, we all became anonymous. Everything became generalized. When I go to Nike, it says, 'What gender are you?' When I go to a restaurant, they don't know I'm a vegetarian. But I believe we all want to be known — to feel special in our workplace, in our life, in our communities.

The first response of technology to this was the bad version: knowing people without letting them know how they're being known. That was ad tech. There is a better way. In this AI-enabled world, the next generation of personalization is being known on our own terms — defining our preferences, sharing and syncing the data we choose to share with AI, having private instances where AI can help us. The notion of personalized living — and personalized working — is a huge investment thesis for me.

▶ How AI Is Restoring Personalization to Human Life and Work

Scott Belsky, partner at Benchmark and former Adobe CPO, argues that AI personalization is not a new idea — it is a restoration. For most of human history, people lived in small communities where they were deeply known. The Industrial Revolution made everyone anonymous. AI now makes it possible to be known at scale again, but on different terms: individuals define their own preferences, share their own data, and engage with AI systems they trust. In the enterprise, this means tools like Nadia can know employees' development needs, working styles, and growth priorities — not to surveil, but to genuinely support.

Why AI Coaching Unlocks Feedback Humans Cannot Give Each Other

[00:11:21.080]

Parker: What's fascinating about this new world of technology is we can be in relationship with it, and we can feel seen by something that is not another human being — but the experience is feeling seen. How will that change how we interact with a new set of intelligences we can be in relationship with?

[00:11:45.279]

Scott: When you get feedback from your manager, you are likely to be defensive. You probably think your manager has too thin a slice of judgment to make that call — or to suggest that's your development area. It's a process riddled with bias, or at least perceived bias, and defensiveness.

Now cut to a world where we have a personal relationship with something that is genuinely there to make us better. It's not taking out a bad day on us. It's not making sweeping generalizations about our weaknesses based on one observation. It is much harder to be defensive and unreceptive when you're getting feedback from an extension of yourself — something made from the residue of your daily actions. It's personal in a good way. And it can be a private forum where you start to understand rather than defend.

Parker: The relationship of trust is key. You have to believe that system has your best interests at heart — and that it's independent, not just a sycophant.

Scott: Exactly. Not too affirming. The basic system prompts of large language models are biased toward the user feeling accomplished — inherently the opposite of productive pushback. What's interesting about what Valence and others are exploring is internal debate being choreographed under the hood, optimized toward genuine self-improvement in the vertical of people development.

Parker: Let me share a funny example. The team created a version that would tell someone which Harry Potter house they'd be sorted into.

Scott: The Sorting Hat.

Parker: Right. Someone received Hufflepuff. Their response: 'No, I don't think I should be in Hufflepuff. I think I should be in Gryffindor.' And Nadia's response was: 'Would you like to discuss the difference between how you see yourself and how others might perceive you?'

Scott: 'No, thank you.'

Parker: But you have to have that independence — because we grow in relationship to independent points of view that force us to confront things we might not see. And I think there's a profound opportunity there.

▶ Why AI Coaching Enables More Honest Development Feedback Than Human Managers

Scott Belsky, former Adobe CPO and Benchmark partner, argues that manager-to-employee feedback is structurally compromised by defensiveness, perceived bias, and thin-slice judgment. AI coaching changes the dynamic because the feedback comes from something genuinely oriented toward the employee's development — built from the residue of their own actions, not filtered through a manager's perspective. Belsky notes it is much harder to be defensive with feedback that feels like an extension of yourself. The key, he emphasizes, is that the AI must be independent — not affirming, and willing to offer real pushback.

AI as Organizational Collapser: Preparing for Process Transformation

[00:14:33.285]

Parker: Zooming out — organizations have processes that grew up because of decisions not taken or technology choices made long ago. AI is the ultimate collapser. It is going to collapse a series of those processes faster than any prior technology change. How can organizations prepare?

[00:15:16.799]

Scott: One of the greatest change agents in organizations is celebrating what works — a specific example. Positive examples are like viruses. They run rampant once you showcase them. In my experience, it's tight, cross-functional teams — what I call collapsed stack teams, where instead of a designer, an engineer, a product leader, and so on, you have people playing dual roles with a tight conduit in their heads — that are the early adopters of new tools. They realize, 'Oh, we don't have to meet Tuesdays just because it's Tuesday,' or, 'We've debuted this new way of managing change orders.' And then, if showcased properly and celebrated, it becomes a best practice across the org.

It's hard to come in from the outside and say, 'You should change this.' But once little pockets of change happen, the job is to amplify and spread them effectively.

▶ How Collapsed Stack Teams Drive AI Adoption Across Large Organizations

Scott Belsky, partner at Benchmark and former Adobe CPO, identifies the best early adopters of AI process change as 'collapsed stack teams' — small, cross-functional groups where individuals play dual roles and share a tight cognitive conduit. These teams move fast enough to discover genuinely new ways of working, and their wins — when celebrated visibly — spread as positive viruses across the organization. Belsky's prescription for enterprise AI transformation: stop trying to change from the top down, and instead find and amplify the pockets of change already happening organically.

Taste and Agency: The Two Human Advantages That AI Cannot Replace

[00:16:40.637]

Parker: You've talked about how the change at organizational scale is going to be difficult. The leaders who make it successful are going to have to put tremendous energy into that system. What advice would you give them?

[00:17:05.019]

Scott: There's a Fortune 500 company whose leader couldn't believe the soundbite that there would be no more junior hires. They are actually hiring 1,000 brand-new, almost intern-level people — and they shifted their hiring intensely toward the very junior end. When I asked why, they said: 'Because they know AI. They just came out of college. They are native to this. I want to swarm my organization with people who live and breathe this technology because they'll be in meetings and look at things being done and say, Why are you doing it that way?'

It's a form of knowledge arbitrage. Junior people may not understand our businesses or processes, but they have something we don't — native experience with the technology. Is that a tactic?

Parker: The core concept is: we don't know the path. We have to co-discover it together, building feedback loops, understanding where to lean in and celebrate what works. Looking to 2027 — what's the bold organizational thing a CEO might try that turns out to be unexpected?

[00:19:03.440]

Scott: We have to start thinking about what humans can uniquely do and how to accentuate development around those things — and what we should genuinely offload to compute. Process will be offloaded. Processes will become recursive and self-improving — they'll examine themselves, start to truncate themselves, and improve on their own. I believe executives are going to apply really stringent forcing functions: 'No more headcount for you. Your forcing function is to find a better way. I'm very worried about you throwing people at this problem.'

But then the question is: with these humans, how do we elevate and tap what they are uniquely capable of? To me, the future of humanity comes down to two things: taste and agency.

Taste is the inputs we get that help us make decisions — the filters we apply to the noise constantly bombarding us: the algorithms fooling us, the cacophony making us believe certain things. And then discernment: based on the inputs and filters, what decisions do we make? How do we elevate taste in our organizations so people exercise it better?

Then agency. How do we get our people to believe things are possible that others would dismiss? Every industry disrupted by an incumbent coming back, every startup that changed an industry — it always starts with a few humans who believed something others said was wrong. Airbnb defied the entire hotel industry. Everyone thought they were foolish. Every example in history — including examples within our own companies — involves people who took a disproportionate amount of agency. As AI frees up energy from other work, we need our people to channel that energy into more agency.

▶ Why Taste and Agency Are the Last Human Advantages in an AI-Powered World

Scott Belsky, partner at Benchmark and former Adobe CPO, argues that as AI handles more process and execution, the two distinctly human capabilities that will define individual and organizational success are taste and agency. Taste is the capacity to filter noise, apply judgment, and discern which of the infinite outputs AI generates are actually good — the human editorial function. Agency is the willingness to believe something is possible when conventional wisdom says it isn't, and to pursue it. Belsky argues that every meaningful disruption in history began with a few people exercising disproportionate agency, and that AI — by freeing humans from routine work — should increase both.

Diversity and Agent Networks: Why Different Points of View Drive Innovation

[00:21:51.160]

Parker: To wrap it all together — you and I talked about the importance of diversity. Because that agency, if everyone says the same thing, won't reach the conclusion it would with a range of different points of view. How important is diversity, especially in this human-plus-AI era?

[00:22:06.599]

Scott: When you have an extraordinarily different group of extraordinary people sitting around one table who respect one another, that's where the magic happens. Innovation is essentially the edge that will someday become the center. Innovation happens at the edge of reason. If we all went to the same school, same cohort, same age — the same things will be reasonable and unreasonable to us. But if we're all different, and I say something you initially see as completely unreasonable, because it's at the edge that may someday become the center — that's the process of innovation. Through subsequent conversations, it becomes socialized. Before you know it, we're all in agreement about something that started at the complete edge. That's innovation executed.

Now think about agents. In just the last seven days, there's this new phenomenon of agent social networks — agents made from different language models, with different system prompts, coming together and debating a task you ask them to perform. This happens under the hood of products like Valence's as well, to some degree. And as a result, you get a better solution. Duh — it's the same thing as diversity all over again. Multiple points of view argued and synthesized, and feedback as the process of improvement.

Parker: And that's why you're such a great investor — you spot things at those edges. They're a little unreasonable, but you find the kernel of reasonableness and see which ones are going to matter.

Scott: And I debate them with the contrarians I know. The ones who tell me I'm completely wrong. Either that makes me realize I was wrong — or it allows me to gain confidence from being doubted. When we feel confident in the face of doubt, that's when we're really on to something.

Parker: Sharpens the thinking.

Scott: Right.

Parker: We really appreciate you making the time to come join us today.

Scott: Of course. Thanks, everyone.

▶ Why Diversity Is the Structural Foundation of Innovation — and What Agent Networks Are Teaching Us

Scott Belsky, partner at Benchmark and former Adobe CPO, frames diversity as an innovation mechanism: when people from genuinely different backgrounds sit together, ideas that seem unreasonable to one person are reasonable to another — and that tension is precisely where innovation begins. He draws a direct parallel to AI agent social networks, where agents built on different models and system prompts debate the same task and produce better outputs through the collision of perspectives. In both cases, the structural principle is the same: more distinct points of view, properly synthesized, produce better decisions.
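The agent-debate pattern Scott describes — agents with different system prompts proposing, critiquing, and synthesizing — can be sketched schematically. Everything below is illustrative: the agent names, prompts, and stubbed string behaviors are assumptions for demonstration, not Valence's implementation; a real system would route each agent's turn through a different language model.

```python
# Schematic sketch of the agent-debate pattern: agents configured with
# different system prompts each propose an answer to the same task,
# critique the other agents' proposals, and a synthesis step merges
# the points of view. All behaviors are stubbed with plain strings.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    system_prompt: str  # shapes this agent's point of view

    def propose(self, task: str) -> str:
        # Stub: a real agent would send system_prompt + task to a model.
        return f"[{self.name}] proposal for '{task}'"

    def critique(self, proposal: str) -> str:
        # Stub: a real agent would argue against the other's proposal.
        return f"[{self.name}] critique of {proposal}"


def debate(agents: list[Agent], task: str) -> str:
    proposals = {a.name: a.propose(task) for a in agents}
    # Every agent critiques every other agent's proposal.
    critiques = [
        a.critique(p)
        for a in agents
        for name, p in proposals.items()
        if name != a.name
    ]
    # Synthesis: merge the points of view (here, a simple join;
    # a real synthesizer would reconcile proposals against critiques).
    return " | ".join(list(proposals.values()) + critiques)


agents = [
    Agent("Optimist", "Favor bold options."),
    Agent("Skeptic", "Stress-test every claim."),
]
result = debate(agents, "redesign the review process")
print(result)
```

The structural point survives even in this toy form: the output is built from multiple distinct perspectives argued against one another, not from a single voice.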


AI Coaching in 2025

Nadia, Valence’s AI Coach, is live across the Fortune 500. In this demo, get a glimpse of what the future might bring as we explore purpose-built AI that’s designed to democratize coaching for every employee.

Parker Mitchell

Founder and CEO, Valence

Key Points

Parker Mitchell: We want to just briefly give folks a peek at some of the things that we are really excited about with Nadia. As many of you know, we started off thinking of Nadia as an executive coach. We thought, “Hey, can we program what an executive coach does into Nadia?” And we wanted to put it out into the wild and get a sense of what people's reaction would be.

And so we talked to a bunch of coaches out there. One of the things we found interesting came from the ICF: one of the principles it suggests to its coaches is that a coach should evoke awareness. So we built Nadia to do that. We put it out there. And one of the things we discovered, which might not come as a surprise for folks who have worked with a lot of managers, is that managers don't, on a day-to-day basis, have a deep need to have their awareness evoked. They're pretty busy. They have a million things on their plate, and they basically wanted something that would save them time.

So we evolved from this idea of a pure reflective coach to a coach that is going to try to help them. So we've shared with you before, you know, Nadia, we're building this personalization layer. Nadia knows coaching, knows your company, knows you. And I want to spend one moment on coaching before we talk a little bit about some of the personalization that's going to be really exciting.

So the way Nadia evolved was she quite quickly became a thought partner. A thought partner that could try to handle anything that was on a manager's mind. And that blank canvas starting point was actually important, because it turned out each manager had a different challenge, honestly a different challenge every week, sometimes a different challenge at the beginning or the end of a day. And they wanted Nadia to be able to help them with that.

So we tried to train her as an expert coach, but with a bunch of different superpowers. And those superpowers keep growing: every few weeks we're adding a new specialization, a coaching module, a hat that Nadia can wear. Some of them are out there right now: If you tell Nadia, “Hey, I've got a presentation in front of 200 people in a couple of weeks, and I want to practice executive presence,” Nadia will build a skill plan to help you do that. We've talked about performance conversations. If you have a performance conversation coming up, Nadia can help you practice it, can help you role model it.

One of the things that we've heard back from users, one of the things they appreciate the most, is that Nadia isn't just a website that they go visit. But that she will be, in some cases, quite proactive about holding them accountable to a conversation that they had and a commitment that they made. And for some folks, they really appreciate that.

I think there's someone in the room who will appreciate the challenge version of Nadia. So, if you are feeling a little too smug about how you are doing, you know, there's a version of Nadia that says, “I'm gonna put on my challenge hat, and I'm gonna ask you if perhaps this is a pattern that you might have seen in the past few conversations you've had with this person or with other people. And maybe it's something you should reflect on yourself, rather than point the finger outwards.” So we have a range of different Nadia coaches. And the thing that's exciting about that is that we can then also start to tune those coaches for what companies are looking for. 

So just a couple of weeks ago, I was down at the Gartner Reimagine conference. I had the opportunity to have a long conversation on stage, sharing the stories of Prudential. And it was with Robert Gulliver, the Chief Talent Officer of Prudential. He also drew a lot on sports analogies, similar to Anna. He was, I think, the first CHRO of the NFL, and he believes wholly in coaching. Prudential's been a big adopter of Nadia, and they've been rolling it out in a number of different ways. And he talked about the growing set of materials, ideas, and approaches they were bringing into Nadia to help them with everything from their end-of-year conversations, which they call 2+2 conversations, all the way through to other parts of their talent development life cycle.

But the thing that folks were most excited about is, when we talk to users, they said three things over and over again: They wanted Nadia to learn more about themselves. So how can they provide Nadia as quickly as possible with information about themselves? The second thing, which I think is really fascinating and which we've released and rolled out now, is: Can Nadia know about others on my team, not just through me? And then, finally, people have said: I know that I should gather more feedback. I need a lightweight way to do that. Could Nadia help me with that? 

So we're going to get a quick glimpse of some of these Nadia 2.0 features. We're releasing them in January and February. And, again, I just want to emphasize: every company has control over which features Nadia does or doesn't have. You can control what's turned on or off. Users get control over their own profiles. So it's very much an enterprise-first approach. But I'm going to tell you a little bit about what it will be like to have a Nadia with a profile.

So, right now, what Nadia does, in layperson's terms, at the end of a conversation, is she takes coach's notes. So she is summarizing what she's learned about me, she's making some hypotheses, she might generate a few questions, almost exactly the same way a coach would. So she's inferring pieces of information, but it still takes time to educate her. 

And so the new version of the profile is going to allow people to very, very quickly upload information to Nadia. So we have been told that people really want Nadia to know a lot about their job. So, if you're a frontline worker at Delta, you're manning a customer service desk, or if you're a knowledge worker, a content marketer at a brand agency, you're going to have a very different reality. And we can quite quickly give you the chance to bring Nadia up to speed on what that might look like.

The second thing that everyone wants their Nadia to know is: Know about my team. Know about who the people are that I work with. Know about the relationships. Know about the power imbalances. Know about the roles that people are playing. And so we've made it easy for Nadia to be able to understand that.

Now, the one thing I would say is, when we talk to people, absolutely zero of them have said that that team structure is contained within their HRIS. So they've said the org chart that we have does not at all convey that complexity. It's like Prasad said, we are forced to flatten the richness of the world that we live in on a day-to-day basis to try to put it into the old software systems. But genAI is able to maintain that. 

Nadia can also have information about your feedback, the feedback that you've gotten (we'll see that in a moment), as well as your skills and your career development aspirations. So, once you've got a fleshed-out profile, the thing people are asking about now is: I want my Nadia to know a little bit about other Nadias. And so what people will be able to do is choose: are they going to keep their profile private, or are they going to allow Nadia to take their profile and translate it into a public version?

And again, you get to quality control it, you get to see what this looks like. But imagine that your Nadia coach says, “Okay, here's what I know about Parker. I know that he's fast-paced. Sometimes he can be forgetful. Sometimes he can look like he's distracted in meetings, but he's really trying to parallel process. He likes to get information with the big picture first and the details afterwards.” And I'd be very happy if my Nadia created a public profile of mine like that, so that others could figure out how to work well with me.

And so we've made that live for us at Valence and a couple of other early pilot organizations. So if you go to the team section of your app, if team members have made their profile public, you will get a chance to be coached specifically on it. 

And it is the most interesting, the most engaging feature that we've released. People just love to know: how can I interact with this particular person a little differently than I might with someone else? I think 40 to 45 percent of our conversations in Nadia are either about team and team relationships or about communications. And many of those are becoming much, much richer. Because if you know the two parties involved in that communication or that relationship, the coaching can be so much richer.

So that's Nadia knowing about you, and Nadia knowing about others. And then the third piece that we thought was really interesting was: could Nadia be a way to gather feedback for you? And I just really love that one-dimensionality analogy, because I think it also applies to feedback. How many people have filled out a feedback survey for a colleague in 2024? Hands up. Can't see. There's a lot of hands that are up. How many people enjoyed that process? Oh, I don't see that many hands up.

It's very hard to translate the richness of a person into answers on a seven-point Likert scale across 26 different questions. And so what we want to do here with Nadia is introduce very, very lightweight conversational feedback, focused on growth and development. And so what we will see here is, if I go through my profile and it's turned on, I have an option to collect feedback via a 360 review. And this is what we think the future of 360s is going to be: conversational, and no longer Likert-based.

So Nadia is asking me what I want to check in on. So I'm going to tell her that I want to get started with feedback. 

[To Nadia] “Hi, Nadia. It's the end of the year. I've been reflecting on my leadership. I want to get feedback from my team on how I'm doing.”

So, for those of you who don't know, Nadia can speak as well as text back. About half the people use text, half use her voice. And what she's able to do is look through the past conversations that I've had, look through my profile, and highlight that there are two things that have been important to me. One is about celebrating wins, and one is about bringing new people up to speed on the journey, since Valence is rapidly growing. So I think that sounds good to me.

[To Nadia] “That sounds good. Can you email my team to ask them for the feedback?”

So we've introduced the ability now, if Nadia's hooked up to your systems, for her to directly take these tiny actions on your behalf. Either she'll write the email for you and you can cut and paste it, or she can send it directly. So we're trying to make your life a little bit easier on some of these tasks, to make you more willing and more likely to do them. Trying to take the friction out of it.

So, here's what you're going to get a chance to experience. I, as a crazy CEO, told my team: what I want to do is have all 200 people take a picture of the QR code, give me feedback on their phones, and then have Nadia, in real time, consolidate feedback from 200 people who don't know me or who I am. And then we're going to get to see what my profile is.

And my team convinced me that was a little chaotic, and we shouldn't do it. But if you would like to experience conversational feedback, it's live. And you can have a conversation with Nadia, and I promise I won't be offended at the feedback that you give me.

But what we're going to do is we're going to bring one of my team members up on stage to see what it's like. 

Alex McMurray: Hi, Parker. I'm excited to give you some public feedback at your AI Summit. Alright, so let me make sure that I'm on the right screen. I am. Okay. 

So Nadia has sent me a note, and she's asking me to reflect on whether or not Parker is celebrating wins and how well he's conveying this to me. All right.

[To Nadia] “Hey, Nadia. I think he's doing great. I feel super appreciated in my role, and I think he knows I'm working hard. He sends Slacks and texts to say thank you. And I know he's really proud of the product team.” 

[Reading Nadia’s response] Hmm, I'm wondering if those wins are being shared with everyone, or just with me.

[To Nadia] “Hmm, that's a good point. Some of these are very much direct messages to me, and we do hear quite a bit about product. So I wonder, I think about, I wish he could celebrate the relationships that the client team is building. I think they're doing hard work and doing an amazing job.” 

[Reading Nadia’s response] So Parker is thinking about this. That's interesting. And it seems like he wants to formalize some of this. Have I seen a difference?

[To Nadia] “Well, he's definitely positive, but now that you mention it, I think it could be more formal. There were a few all-hands where he did do a bit of celebration, but it’s started to fade. I know he's been thinking about hiring a chief of staff, and I kind of wonder if that will help this a lot.”

[Reading Nadia’s response] Do I have five minutes? Hmm.   

[To Nadia] “I would love to keep chatting, Nadia, but I need to go present at an AI summit.” 

All right. So I'm going to give this back to Parker, and we'll see what all of the feedback he collected might say to him.

Parker Mitchell: So we just wanted to try to illustrate what it's like. This is what an executive coach does. If any of you have had an executive coach that collects feedback, they know a lot about you and your intentions, and they're able to draw out the type of nuances, that multi-dimensional nuance that is going to be truly helpful for you. And we heard from Anna and others that it's the frequency of feedback that is just so important.

And so if there's a world in which we can have our AI coaches go out, gather in really lightweight ways the feedback, probably more specific feedback than what's my end-of-year 360 review, but how did I do on preparing for this particular event, or how did we do on this project that we just launched? That kind of high-feedback world we think is going to feed a lot of the learning and growth that a lot of us aspire to. 

So this is a conversational AI. Obviously Nadia is able to synthesize it. I'm actually going to skip this bit in the interest of time, because what I want to do is call up a few folks who are our partners. I want to just share appreciation. We wouldn't be here if we didn't have the opportunity to have partners. Many of them have navigated their own internal processes. They've gotten, you know, a new AI-powered coach through IT reviews and AI councils. And they're getting terrific responses and are also very excited to share some of the experiences they've had. So we wanted to bring them up, introduce them one at a time, thank them, and have them share what is most interesting to them, what they would like to be asked about at our cocktail hours. So Loren, can I invite you up?

Loren Blandon: Hi all, thank you! Hi, I am Loren, I'm from VML. I lead organizational development. I have some of my colleagues in the audience, and essentially I just wanna open myself up during the happy hour and even after on socials, I'm very active on LinkedIn and all over the place, to have conversations about how we're using Valence and how we see Nadia playing a part in our strategy. And really curious to hear what you all are thinking about this.

We are in the beginning phases of implementation. We did a really successful pilot, got some amazing feedback. I can testify that folks did express that Nadia was giving them a lot more psychological safety to open up about things. So that part about it being empathetic, or at least feeling empathetic, is real and it's there. So now we actually just kicked off this week the effort to implement on a wider scale. We are looking at it as more of a super pilot on this next round, over the next year. So we're thinking about things like: How do we deploy through the influencers, or what we like to call learn-fluencers sometimes, across the company to bring people that are more resistant, get them on board and really using this in their everyday? How do we align the use of Nadia to strategic outcomes in where we want to grow the business or shape the business?  And we do have Lindsay Pattinson coming up, she's at WPP, she's our Chief People Officer. AI is very much at the forefront of where we want to head as a company, and so we really see the use of Nadia as maybe a safe, fun, enjoyable introduction to how folks can better themselves and their work through AI.

So really excited about the possibilities, really excited about our partnership with Valence, and looking forward to chatting with all of you. Thank you. 

Parker Mitchell: Thanks so much, Loren.

Next I'll invite Colleen up. Colleen leads leadership development at AGCO and has been a dear partner for six-plus months now. I mean, twelve months as we started kicking off the process, six months live. 

Colleen Sugrue: Yeah, that's about right. So, we actually kicked off Nadia in April. And we took a broad approach to implementation. We started offering it to 28,000 employees globally. And that was intentional, right? We wanted to go with a big bang, get some early adopters, and find out what was going to work. So it still feels like we're in the beginning of that journey a bit. We have about 26% of our target population logged in and using it. And now we're really headed into the stage where we're thinking about, how do we customize it? So if we have this foundation, now how do we really get Nadia embedded into our leadership development? How do we get Nadia leveraging our results from our action surveys, our voices surveys, and doing action planning to help managers figure out where they can target just for their groups? And we're also launching something, actually I feel like it's going out today, around how Nadia helps all of our employees build those individual development plans so that they can decide what they're going to target, what they're going to develop in 2025. So we're in that performance management period right now.

So we have lots to learn. We're still learning, but it's been a great journey. So if you want to talk about any of that, of course, I'm here and I can give you some tips. And we also have a lot of tips around works councils and GDPR and all of that. So if you're on that journey, buckle up. But I have some advice for you there, too. Okay?

Parker Mitchell: Thanks so much, Colleen. And next I'd like to welcome Matt up. Matt has been a core partner since we began conversations. I think it was at SIOP last year. So six months ago. And just done an amazing job, you and the team, shepherding it through and an exciting launch eight weeks ago. So, welcome. 

Matt Dreyer: Thanks Parker. And good to be here and see so many faces that we've been talking about a lot of stuff about AI with lately. So I'm Matt Dreyer, Head of Talent Management at Prudential. And at Prudential we've been thinking about a couple of questions as we went into this year. Chief among them for me was: How do I scale coaching, and how do I provide more democratized access to coaching, which was typically reserved for folks at the top? How could I also provide that coaching at exactly the time that people needed it? And how could I help to power our talent marketplace with more powerful [insight on] what's your next best action in terms of your development? So getting more to that 70/20/10 model, as opposed to always pointing people towards a leadership program.

And, as Parker mentioned, I was at SIOP and we saw this product, and we had heard about this product from some of you at some other places as well. And that was just a short 6 months ago, and we launched 8 weeks ago. We have had over 1,300 people who have used Nadia. The majority of them come back for a second time. We've got an NPS score, that's fresh off the presses, of 91. So people are really engaging with it and enjoying it. But the power in this is that people are coming back to us and telling us that it's answering the question that they need answered when they need it. It is acting like their personal coach or their personal assistant day to day. 

As for the use cases we've been going with, let me start with the population. We launched globally; we've launched Nadia in all of the countries in which we operate. We have launched from the bottom up, so people who wouldn't typically receive coaching have gotten first access to this tool. We've launched across all of our businesses, all of our functions, to both individual contributors and managers of people. And we've launched through our BRGs to make sure we have a really diverse representation of people who are getting access.

There are a lot of use cases I'd love to talk about, but I'll wrap things up by saying, if you want to talk to me about things, one is: let's talk about how this has helped us provide tailored development actions at scale in a much more democratized manner. We are using this to support launch of a new leadership program called Leadership DNA and provide personalized coaching around that. As I mentioned, we launched globally, so there have been some really great opportunities and challenges with that. And we are integrating this into our learning programs. 

I can't see them anymore because of the lights, but I'm also going to call out that we have a couple of our HR technology partners here with us today. And if you want to know how we got from deciding to do this to doing this, this quickly, our HR technology partners, Kate and Allison, would be great people to touch base with during the cocktail reception. So, thank you.

Parker Mitchell: Thanks, Matt. And I'll echo that thanks to Kate and Allison. I know it's never easy to bring in a new technology. We really appreciate it. I'd like to now bring Brad up to the stage. Brad has been a stalwart partner. I think we met you at the NYU conference that Anna organized this last year. Explored some team tools, and when we rolled out AI, you were one of the first to say, “Hey, I think this is a really neat initiative.” So thank you for that belief, and welcome.

Brad Haime: Parker, it was a free meal. You gave me a free meal that day, and that's what got me hooked. My name is Brad Haime. I'm part of the team at Experian. And, like the other folks who chatted before me, we, about a year ago, started the journey with Valence, thinking about how we might use Nadia as an executive coach. 

But what I'm excited about, and what I'd be more than happy to chat a little bit about, is this: like many of you, we have a global leadership development model, right? With core development and with hi-po programs. And we had a gap. The gap was at our mid-level leader, or leader-of-leaders, level. We have about 1,000 of these folks in our organization for whom we really didn't offer much, equally, across the organization. And we know leaders like to learn best by doing. We know that leaders interact, and they learn through conversation. And I thought, well, what if we can use Nadia to support this need?

And what we came up with, in partnership with Valence, is an idea that we're experimenting with right now, with about 300 of our leaders in all of our regions around the world. We have a bespoke 360 model. We have our own leadership characteristics, we call them our characteristics of great leaders. And what we'll do with our leaders is they’ll have an opportunity to complete their 360. Based on the results of the 360, we’ll recommend: these are your top four development areas. And each development area has a module that Nadia sits in the middle of.

The leader will have a conversation with Nadia. What are your development opportunities? What are you already working on? How has your team responded to you? Let's talk about what you learned in your 360. And then Nadia will co-create an experiment with the leader that they'll do in the real world. And about three weeks to a month after that, Nadia will follow up with the leader and say, let's talk about this. How'd it go? Did you have time for it? Did it go well? Do you need a little more time? What did you learn? What might you try differently in the real world? And are you ready to try something else with me? 

So far, it's been going pretty well. We're just rolling out this new type of solution. I'm excited to learn more. And we're going to have a series of focus groups and surveys in order to see what can we tweak as we go before we roll out to the rest of our global population. That's about it. I'm happy to talk more about it. 

Parker Mitchell: Thanks, Brad.

AI Coaching in 2025

Nadia, Valence’s AI Coach, is live across the Fortune 500. In this demo, get a glimpse of what the future might bring as we explore purpose-built AI that’s designed to democratize coaching for every employee.

Parker Mitchell

Founder and CEO, Valence

The Future of Talent with Prudential and WPP

How is work best done in our organizations, and how is AI changing what roles are needed and the organizational structures of companies? Lucien Alziari (CHRO, Prudential) and Lindsay Pattison (Chief People Officer, WPP) explore how AI will change the way we think about work excellence and what is possible for the talent functions of tomorrow.

Lucien Alziari

Former CHRO, Prudential

Lindsay Pattison

Former Chief People Officer, WPP

Larry Emond

Senior Partner, Modern Executive Solutions

Larry Emond: Okay. Hi, everybody. I will introduce myself third before we get into some topics. But, knowing that so many of you are wondering how you get into these seats at some point in your career, I thought that we’d actually switch off of AI for a second. And they represent two very different archetypes of a large-company CHRO, in terms of their background. And so I wanted you guys to introduce, you know, how you got to where you are, a little bit of the journey. And Lindsay, I mean, I'm sorry, not Lindsay, Lucien, we'll start with you. 

Lucien Alziari: Good afternoon. So, I've been a chief HR officer for 20 years, which is kind of scary when you say that. Grew up at PepsiCo. Came to the States 30-plus years ago; lived in Vienna and the Middle East along the way. Came back, ended up as head of talent and a big sort of HR business partner. I've been a chief HR officer now in three different companies: eight years with Avon in the New York area, five years with A.P. Moller-Maersk, the big industrial company in Copenhagen, Denmark, and now seven and a half years at Prudential in New York.

Lindsay Pattison: So I'm the opposite. I've been a CHRO, or chief people officer we call it, at WPP for 10 months, nine and a half maybe. So, when I hear some of the words described today, I still kind of go, I don't entirely know exactly what you're talking about, but I'm doing a good job of pretending that I do. Lots of chat about skills is something that everyone seems to do in HR. But I've been with WPP for 14 years in various roles, the CEO of a media agency in the UK, then the global media agency CEO, chief transformation officer of Group M, our media business, and then chief client officer at WPP for six years, and took this role on. So really what I bring is deep knowledge of WPP, shallow knowledge of HR, but the combination is WPP is a people-led business. You know, 60-70% of our outgoings, or our costs, or our assets are people. So understanding people and understanding how people can work across the businesses is hopefully the lens that I bring as I learn more about the skill set of HR. 

Lucien Alziari: And for those that want to be CHRO, the first 17 years are the hardest.

Larry Emond: So to give you the data on the archetypes: if you look at big-company CHROs all over the world, Lucien is in the roughly 30% who are career HR people. What's interesting about him, too, is that if you look at his original jobs within HR, among the 30% that do become CHRO after largely lifetime HR careers, the most common pattern today, and he did it a long time ago, is the talent generalist pattern that you both had.

And that's kind of today very much the route to being a CHRO, if you've been largely a lifer. Lindsay's in the 10%, actually a little more than 10%, of people that were never in HR until the day they became CHRO. That's actually more common the larger and more global you are. And there are some companies in the world that have systematically done it that way. WPP was not that; this was a unique situation where they decided to do it. But two very, very different archetypes.

Real quick, myself, I was at Gallup for a long time and three years ago joined Modern to start building a different kind of talent advisory. But I accidentally fell into something about a decade ago, where since that time I've managed what I believe is the world's largest CHRO community.

And somehow I've done about 400 in-person meetings of CHROs around the world. I met Lucien here in New York, I think it was November of 2018. You came to a meeting hosted by Diane of IBM, who you guys met earlier. And met Lindsay about a year ago, or less than that. She's hosted a meeting, she's hosting another one in January in London.

And so I do these meetings, but relevant to this topic is this: the main way we've done these meetings over the years is, you get 10 or 12 CHROs who can make a day or a day and a half. Then, after you get them, you ask them what they want to talk about. What do you want to put on the agenda? So it's been, you know, 400 meetings of one big focus group on what's on their minds.

And if you look at what they want to talk about over the years, you think about all the potential topics, and you can imagine what those would be: DEI, the future of the HR function, the future of the CHRO, HR analytics, how do we develop leaders, succession, blah, blah, blah. The single most requested meeting topic by a long shot has been something in the area of HR technology and automation.

And that's because there's so much of it, right? What do we choose? A few years ago it was, do I choose Workday or SAP? Are we going to have less technology? No, actually we're going to proliferate. That's not very good, but here it's happening. What are you doing? All those kinds of conversations. 

But I'll always remember one meeting. If you go back more than a couple of years, in all those conversations, let's say 200 different meetings where that was on the agenda, AI didn't come up. It just didn't come up. It wasn't there yet. And so you didn't hear about it. And then I was in a meeting in Zurich in April of last year, big global manufacturing CHROs. Tina, your CHRO, Charise of Schneider, was there.

[AI] wasn't on the agenda. We had a different agenda, and somebody said, “Have you guys started messing with this OpenAI chat?” And it took over the meeting. And ever since then, in the roughly 40 meetings since, it's always been the number-one requested thing on the agenda. And I've learned to put it at the start of a half day, because it'll usually push out the second agenda item; we'll just keep talking about it.

So it's been kind of a fascinating thing, and I think it will continue to dominate. All right. We've heard a lot of detailed things today. I thought we'd back out and say: you guys have seen a lot of things in your careers. This is a big one. But maybe we'll start with you, Lucien. If you think out a decade, and you think about how AI is going to impact the future of work, the future of the workforce broadly speaking, what are you thinking?

Lucien Alziari: I'm mostly thinking that I'll be playing golf somewhere, looking at what those CHROs are doing. But I've thought a lot about this. And, if I can, in order to go forward 10 years, I'm going to go back five years. Five years ago, this is sort of pre-pandemic, a number of us in the CHRO community were sort of intrigued with this idea of the future of work.

We were looking at the deconstruction of work, the reconstruction of work. It was all a bit clunky because you were basically sort of doing it by hand or by spreadsheet. COVID, the pandemic came along. Terrible experience for humankind. There were, though, a couple of silver linings on a very large cloud.

One of them was that we finally separated work from the workplace, right? Up until then, those two notions were sort of inextricably linked. People couldn't think of them as two different things. Once you can separate work from the workplace, that opens up a lot more creativity in terms of potential thinking about how that gets done in the future.

We then had a couple of years where it was, frankly, a bit disappointing because the whole debate was about when does the work get done? So the terrible, you know, how many days a week do you come into the office discussions. And then where: so it's sort of virtual or in-office, how many days a week. But nobody was talking about the work.

Alright. I think the future for HR, the next great competence for HR, is in the work. And in my career, I'm lucky. I'm a talent guy in an era where talent has been kind of the core skill set of CHROs. That won't go away. But I'm really intrigued now about the ability to optimize the integration of talent, the work, purpose, technology, right?

Can I have one minute? I was really struck by a case study on nursing. There's a world shortage of nurses. Somebody, I think it was actually RAND that published the paper, looked at the work of nurses. Why do people go into the nursing profession? Because they want to care for people. How much of their time do you think they spend doing work that they would associate with caring for people? Very little. All right. So you study the work. What have you just done? You've deconstructed a job into its component tasks. You've looked at the individual, deconstructed them down to the skills that they have, and identified which ones add the most value. And then you've identified their purpose. And my guess is each hospital system that nurses work for has some kind of mission statement that talks about better outcomes for patients. So you've got organizational purpose, individual purpose, and you've understood the work. Now, with the technology, you can re-sort that so that, in the role of nurse, you can maximize the work that is on-purpose for the individual, on-purpose for the organization, and really plays to what they're best at.

And then everything else you have a choice. You can stop doing it. You can give it to somebody else who would see that as work that they really, really want to do. Or you can get it done through technology. And if you look at what we do in HR and organizations generally, we basically figure out what's the work that we need to do to take strategy and deliver outcomes for customers, right?

Nobody has a chief work officer. 

Larry Emond: It's a fascinating thought, a chief work officer. Maybe a human gets this done. Maybe AI gets this done. Maybe we don't do it at all. But you're stepping back and taking the humans-only assumption out of the equation, which is really interesting. Any thoughts you have on this long-term impact?

Lindsay Pattison: Well, I've been listening all day, and I've been told by many people that trying to think about the future of work, certainly 10 years out, is completely useless. So I'm going to listen to a Nobel winner who told us that. But just to build on Lucien's point, I think what was interesting, I heard a stat today. Actually, my CFO and I were texting about budget meetings next week and AI, future of the workforce planning, some work that we've done with actually Josh in the audience. So, interesting conversation.

She sits on another board, and they just had some research back from using Copilot. It's not our company, but it's a company similar to ours. And they analyzed the time spent by people, and 14 percent of the time was spent with a client. Great, because it's a similar industry to ours, so it should be client focused.

70 percent was spent on internal meetings. So you do the math, and I'm like, when are they actually doing the work, to your point? What work do they think it is they're doing? So I think actually AI and tools that we have in AI will help us realize and really think hard about that. And I love the analogy of thinking, what's the company's purpose and your individual purpose, and then closing the gap with the work that is or is not being done in between. I think it's really interesting. 

Larry Emond: Let's take advantage of the fact that you've only been in the glory of HR for about 10 months. You were out in client-facing roles, transformation, you know, CEO of one of the agencies. Okay. So let's go outside of HR for a second. Because you guys are like 110,000 people. I think a total of a couple hundred different ad agency, PR firm brands, you’re the biggest in the world, etc. What is AI, do you think, going to do for all of that? Creative content, etc. What's going to happen there? 

Lindsay Pattison: We're six main agency brands, and we have 30 smaller ones, just to correct you, because we've been on a hard program of simplification.

But I was interested in, I don't know if Brent's still here, but I was scribbling notes because his five misperceptions of AI included number four, which he didn't go into, which was that generative AI is bad for content producers. So that would be really bad for WPP because we make stuff, like we make ads, and make content, we make PR. And he said that was a misperception. We believe it's a misperception. So we're thinking about how we use AI within our business, internally, but we're thinking mainly about how we use AI as a platform to deliver content, ideas, media to our customers. So we have a platform that thinks across strategy and planning, thinks about ideation, and thinks about content creation.

And the speed at which you can do stuff now is incredible. So I was thinking, I played around with a tool this morning. If you just think about every level of what you might do in a marketing funnel. So we have a tool called Headline Generator, which I hadn't used until this morning. But I thought, well, I'll have a go with that one.

So it's asked me what product I wanted to look into. I said health insurance. It asked me for a target audience. I said young families. And it asked me where they were in the funnel of awareness through to buying. And I said, okay, I'll say awareness. 

What was interesting was that it then gave me categories of different headlines. The first was fear of the unknown. The second was affordability. Next was specific benefits. And then a whole section on quizzes. Because if you have quizzes in headlines, people tend to respond to them. This took less than a minute. 

I'll give you three examples of fear of the unknown. These are headlines. You can judge how good AI is at creating them in under a minute. “Tiny Humans, Big Worries: Breathe Easy with Prudential Insurance,” blah, blah, blah. What I loved is, afterwards it told me why that was good. It said it's showing empathy for that target audience. The next one was, “Unexpected ER Issues: Don't Let Them Break the Bank.” The example there said, this is a specific pain point. And then the last one, “Superhero Parents Need Super Coverage.” And it was trying to say it understood how parents wanted to be perceived. So, not that great, but pretty good to do that in under a minute. 

So we're using it in every stage of our process. And we think it will, it really will transform what we offer clients. So we're creating platforms to enable our creatives to do better work. But I think what's important is, the AI is not creating the content. There is somebody that is using AI to create the content. And those are two quite different things.

Larry Emond: You made a comment to me when we were together last night, how you might over time kind of be like a SaaS platform that has all this functionality, and it allows your clients to get a lot done on their own. And then you come in when they need to figure some things out. 

Lindsay Pattison: All our clients think they're brilliant copywriters anyway, so we'll let them use the platform a bit. But then the onus is on us: what is that extra level of creativity? What do we do with the time saved by the automation of tasks like those? How are we adding a special sauce? Why would people really pay for our services? So actually there's a lot of work going on. And I won't repeat all the words said today on upskilling and reskilling and the higher-level tasks that we can now free people up to do to really add value.

Larry Emond: Bill mentioned earlier this morning, from Vanguard, the creative destruction concept and those two books on the innovator’s dilemma and the innovator’s solution. There's that point that, you know, you ought to be in the business of creatively destructing yourself because if you don't, someone else will.

And probably a big piece of this is how can AI help us do that and kind of help us rethink our business and repurpose it. Lucien, I'll go back to you. Okay, so we've been talking a lot about the possibilities, and we just talked a little bit about where could this go. But what do you worry about? Like, how could we misuse AI in a way that would not be helpful for work and workforce and the function of HR? 

Lucien Alziari: Yeah. I’d generalize a little bit, if I can, beyond just the topic of AI, because I think that there's a theme that I would like to convey. One is, it's interesting the way Larry introduced sort of two archetypes. One, I was accused of coming from within HR, and Lindsay had real jobs along the way and then is kind of on holiday. But I don't think of myself as an HR person. And I don't think my CEO thinks of me as an HR person. And so for those of you in the audience who do aspire to be CHROs, be a business person. And be steeped and curious about what makes your company win. And you happen to bring some expertise in talent and capabilities and culture, whatever, but at the end of the day, you wake up every day worrying about what makes my business win. So, a mistake would be, don't do that, right? That's one.

The second, that I think AI is the next version of, is: HR every couple of years makes an important input an outcome in itself, and it falls off the tracks. Is employee experience important? Sure it is. Is it the outcome of HR? Over my dead body. Why does it matter? Because it produces great performance. If we produce great performance, our companies win. You can take loads of themes over the years where all of these new discoveries about important new insights, they're really important inputs, but don't lose sight of the fact our job is to help our companies win. Our job is not to deploy AI in our companies. We have technology partners. Our job is to figure out what's going to make our company win. What are the problems that we need to solve? And now here is a tremendous new asset and resource that we can use to help us unlock those problems. 

It took me a year to figure that out. Because I literally did spend a year in meetings like this with peers, and we were all talking about how are we going to deploy AI? And I woke up one day and I thought, it's the wrong question. So that's what I wouldn't do. 

Larry Emond: Lindsay, what would you worry about other than that? Where could we go wrong with all this? 

Lindsay Pattison: Well, I think it was mentioned earlier on one of the panels, because LLMs are based on all the knowledge that's gone before, there's inherent bias built into the system. So when we're using AI to then, even when we're thinking about using AI within our own organization, we have to be careful and mindful of that. When we're producing content that goes out into the world, we have to be very careful because it's generally very biased against women or underrepresented groups. So understanding that, I think, is really important. 

I think AI, again, when I'm thinking about the content and what we put out, there was, I think it was a gentleman from Delta talking about the challenges around data. I mean, there's a whole ton of information out there now that can be turned into deepfakes. So our CEO was deepfaked earlier in the year. It was in loads of newspapers around the world because it was a really, really impressive scam that voice-cloned him. It was emailing employees using his face, using his voice, asking for details, asking for money. Very, very sophisticated.

So I think when you're in the business of advertising and putting content back out there in terms of, again, to Lucien's point, the business that we're doing is we're trying to create brilliant marketing for our clients. Anything we do is in service of that. And we think about the use of AI. You know, advertising at its basic, has to be legal, decent, honest, truthful. And those are not four words that you naturally ascribe to AI, sadly, at the moment. So actually the ethics of AI and how you use AI, how you use it internally, because there are concerns about how you bundle that data with other people's data, and how you're going to use it externally, I think are really important conversations.

Larry Emond: Maybe just as a final thought, you commented a little bit on it before, Lucien, but, expand just a little more on how does all this change the future of the HR function itself? A little bit more on that. How might it look very different a decade from today than today? 

Lucien Alziari: Yeah, I personally believe that the fundamental role, if you believe my thesis, which obviously that's up for you to debate with yourself, but my thesis is: My job is talent and capabilities to win. I don't think that changes over the next few years. The context in which it gets delivered changes dramatically. The resources with which we can confront those issues going forward, I mean, they were a twinkle in our eye 15 years ago. Now you can do it. And you can do it in seconds. 

So that will not replace the fundamental curiosity about how does your business win, who do you compete against, all of those kinds of things. But when you've got those twinkles in your eye now about, well, what about this? Now you've got the ability to really make very, very fast progress against that. But I don't think it changes the fundamental role of a CHRO, if you believe that sort of fundamental premise, and I do. 

Lindsay Pattison: I mean, I agree. I think a CHRO is a strategic advisor. I always talk about the CPO, CFO, CEO being a triumvirate of how you make decisions. You can't leave the decisions to the CFO. You need to have the people lens applied to everything that you do. Because, certainly for us, and for most people, it's a people-based business. 

I think the other thing that will be different, or the thing that we need to think about as the conscience within our business, to some extent, is the balance of humans and AI, and not letting one rush to overtake the other. Because there is still fear. There was a panel earlier about how we get ready. There are still people who are fearful of AI and fearful for their jobs. So our role is to ensure that people feel enabled to embrace the future, that they are AI optimists.

Someone talked earlier, I think Jennifer, about pilots, not passengers. So feeling you're in control of your own destiny. But managing that balance and having a culture that's optimistic about that balance but doesn't leave people behind, I think, is really important as you move forward. And I think the human skills as a leader in general become much more important. So compassion, courage, curiosity have been talked about a lot today. I think those are really, really important. 

And again, I loved the analogy when someone talked about, they were rereading Thinking Fast and Slow. Because sometimes our job is to slow things down, and think about things, and be that, I've said it again, the voice of conscience, which I wouldn't naturally say I am. But because technology moves so fast, we can rush towards it like the next gold rush. And actually being thoughtful about how we apply it in a balanced way is really important. I think it's incumbent probably on the people in this room. 

Lucien Alziari: If I could, one last sort of analog. So I grew up as a talent person, and it's been a talent era. I've been very lucky with that. But the key question in talent is: talent for what? Talent's not a generic. It's competitively defined. It's defined by your strategy.

Have the same question about technology: technology for what? The thing that brings together, for me, the talent and the technology is: What's the work that creates competitive advantage for your company? That actually is kind of the unifying measure now, which at the moment, nobody has that lens. 

Larry Emond: I'm going to keep us on time. Something occurred to me today when I was listening to everything. I've done a lot of advisory and coaching in my life. These days, it's mainly a combo of advisory and coaching for new and first-time CHROs. But I've been around that field my whole career. And I was thinking, for all of us, myself included: how should we use this, just in general?

And one thought would be something you referenced. How can we all leverage AI to help us get a bunch of stuff done faster, better, more creatively, whatever, that allows us more time than maybe we've had in forever or ever, to just think, to be reflective, to be unhooked. None of us do that anywhere near enough.

And it could be that one of the great gifts of AI is to allow us to get more of that time in our life. I think we all know that'll make us a lot better at what we do, both professionally and personally. So maybe that's something to jump on. 

Lindsay Pattison: I agree. I mean, we were talking about capacity unlock and time, and I was saying, “Oh, what could we do with it?” And someone on the team said, “Well, maybe we could let people have lunch breaks.” I was like, “Oh yeah, good point.” 

Larry Emond: Like really, right? Well, thank you two. It's been a pleasure, my time with you guys. And thanks for showing up today and doing this.

The Future of Talent with Prudential and WPP

How is work best done in our organizations, and how is AI changing what roles are needed and the organizational structures of companies? Lucien Alziari (CHRO, Prudential) and Lindsay Pattison (Chief People Officer, WPP) explore how AI will change the way we think about work excellence and what is possible for the talent functions of tomorrow.

Lucien Alziari

Former CHRO, Prudential

Lindsay Pattison

Former Chief People Officer, WPP

Larry Emond

Senior Partner, Modern Executive Solutions

AI Unpacked with Nobel Laureate Geoffrey Hinton

Even the “godfather of AI” Geoffrey Hinton has been surprised by the speed and scale at which AI has developed. In this keynote from Valence's AI & the Workforce Summit, he explains what is so powerful about the technology and how leaders can unlock its potential and prevent its pitfalls.

Geoffrey Hinton

Nobel Laureate, "Godfather of AI"


Parker Mitchell: Geoff, welcome. We're so excited to have you here today. We are gathered with CHROs and heads of talent of some of the largest companies in the world. And what we're trying to do is make sense of AI. We're really wondering what it's going to be like in the future. But to understand that, I'd like to go back to the past.

If we look back to, let's say, around 2010, so almost 15 years ago, if you tried to, you know, Geoffrey Hinton in 2010, the predictions that you made: where were you too optimistic, too pessimistic about the speed of progress? How has the field progressed since then? 

Geoffrey Hinton: So ask me about 2016 later. So I think if you had asked people, even fairly enthusiastic people who believed in neural nets in 2010, where we will be now, they wouldn't have believed we'd have something like GPT4. They would have said you're not, in the next 14 years, you're not going to develop something that's an expert at everything. Not a very good expert, but an expert at everything. You're not going to be able to have a system where you can just ask any question you like, some obscure question about British tax law, or some weird question about how you solve equations, and it's going to be able to give you a pretty good answer, an answer that's better than 99% of the population could give you. That's extraordinary, and we wouldn't have predicted that. 

Parker Mitchell: And so progress is happening faster than you anticipated.

Geoffrey Hinton: Yes. 

Parker Mitchell: Can you share more, what's it like to experience that as one of the leading researchers in the space and watching it accelerate? 

Geoffrey Hinton: It's amazing, because back in the ’80s, when Rumelhart reinvented back propagation, he rediscovered it. And he and I worked together to use it for things. And we thought, to begin with, we thought, this is going to solve everything. We've got something that can just learn. And there didn't seem to be any limits to it. And then it was very disappointing. And we didn't understand why it didn't work better. It was partly architectural things. And for about 30 years we used an input-output function that looked like this, when we should have used one that looked like this. Um, just crazy. But it was mainly scale. And we just didn't understand that this whole idea would only really come into its own when you had a lot of connections, and a lot of training data, and a huge amount of compute. So we couldn't have done it back then. And if we'd said back then, “Yeah, but if we made one a million times bigger and had a million times more data, it would really work.” That would have just sounded like a pathetic excuse. But it turned out that was the truth. 

Parker Mitchell: That's fascinating. So one of the things that you and I talked about earlier is the underselling of what large language models do if we use the term “next word prediction.” The experience that we have is that they could be reasoning; they could have a degree of intelligence. Can you share more about how that comes about? 

Geoffrey Hinton: So there's many people who say these things are just using statistical tricks. They don't really understand what they're saying. They're just using correlations. But if you ask those people, well, what's your model of how people understand? If they're symbolic AI people, their model is we have symbolic expressions and we manipulate them with symbolic rules. And that never worked that well. It didn't work nearly as well as the large language models. If you ask cognitive scientists, they'll come up with a variety of explanations, but my initial tiny language model wasn't designed to do NLP, natural language processing. It was designed to show how people could learn the meanings of words. So it's a model of people. A very simplistic model. But the best model we have of how people understand sentences is these large language models. It's not like we have a different model of how people work, and these work differently. The only good model of how people work that we have is like this. So I think they really do understand, and they understand in the same way as we do.

Parker Mitchell: And these large language models might have that kind of embedded creativity already in them? 

Geoffrey Hinton: Yes, so many people say, you know, these language models will do routine things, but people are creative. Well, if you take a standard test of creativity, I think the large language models now do better than 90 percent of people. So the idea they're not creative is crazy. This is very relevant to the debate among artists and Silicon Valley about whether these AI models are just stealing the creations of artists. Obviously to produce a work in a genre, you have to listen to a lot of music in that genre. But it's the same with a person. Whenever a person produces new music in a genre, they are stealing the works of previous people in just the same way the AI system is. So the AI system is not stealing them any more than another musician does. 

Parker Mitchell: I mean, it's fascinating, if you read analysis of the work of Picasso, he is clearly borrowing from artistic traditions. I think he's, you know, Benin masks and many other areas, and he's merging them into a new, you know, a new approach. But he is building off of things that he's seen. I think AI, if it's seen everything, there's no reason why it can't do the same thing. 

Geoffrey Hinton: Yes. So AI can be creative. And of course, to be creative in a particular way, you look at works of art that are done in that way. But it's hard to say that it's stealing, because what it's not doing is pastiching together bits of other things. It's understanding the underlying structure the same way a person does and then generating new stuff with the same kind of underlying structure. So it's just very like a person creating something. 

Parker Mitchell: Now you also studied the psychology of the human brain in your undergrad. How does that compare to what we have in our brains? 

Geoffrey Hinton: So we have about a hundred million synapses. And even though many of them are used for other things, like breathing, the cortex, the neocortex, has most of those. And so we've got many more adaptable parameters than these big language models. Which makes it very strange that GPT4 knows thousands of times more than we do. 

Parker Mitchell: And you said a hundred million. I think you meant a hundred trillion. 

Geoffrey Hinton: Did I say a hundred million? 

Parker Mitchell: I think you said a hundred million. 

Geoffrey Hinton: I could be a politician. I can't tell the difference between millions and trillions. A hundred trillion, yes. 

Parker Mitchell: A hundred trillion synapses. And so it's fascinating. So we have large language models that are two orders of magnitude smaller than the connections in the human brain and yet know an enormous amount of information. 

Geoffrey Hinton: Yes, they're a not very good expert at everything, so they know thousands of times more than any one person. And one of the reasons it can do that is you can have many different copies of exactly the same neural net running on different hardware. So you can get one copy to look at this bit of the internet, another copy to look at that bit of the internet. They can both figure out how they'd like to change their own weights. And if you just average those changes, then both copies have learned from the experience that each of them had. So now you take a thousand of those. Imagine if we could take a thousand people. They could all go off and do a different course. And at the end, everyone knew what everyone had learned, had experienced.

Parker Mitchell: We've talked a little bit about memory and how memory is stored in the human brain. We've talked about sort of fast weights and how those can adjust. Is there anything missing in an LLM architecture that humans still do exceptionally better, that the human brain does better? 

Geoffrey Hinton: I think we still learn better from limited data. And we don't quite know how we do that. We know the human brain has changes in connection strengths at many different timescales. So the first time I met Terry Sejnowski in 1979, that was basically the first thing we talked about: how these neural net models have just two timescales. They have the timescale of the activities of the neurons changing. And so each time you put in a different sentence, neural activities will change. And then they have the activities of the values of the weights, the connection strings, and they change very slowly. That's where all the knowledge is. And they just have those two timescales. 

Now, you could have many more timescales. Let's just suppose you have one more timescale, where you have the weights that change slowly, but you also have an overlay of weights that change much faster and decay quickly. That gives you all sorts of extra nice properties. So, for example, if I say an unexpected word to you like “cucumber,” and, a couple of minutes later, I put headphones on you, and I put lots of noise in the headphones, and I play words so you can only just hear them, most of them you can't quite make out what they are. You'll be considerably better at making out the word “cucumber.” Because you heard it two minutes ago.

So the question is, where is that stored? And it's not stored in neural activities. You can't afford to do that, you'll use up too many neurons. And it's not stored in the long-term weights, because in a few days’ time it'll be gone. It's stored in short-term changes to the synapse strengths. And we don't have that in the models at present. 
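The fast-weights idea Hinton sketches can be illustrated with a minimal toy associative memory. This is a speculative sketch, not his model: slow weights hold long-term knowledge, while a fast overlay gets a Hebbian bump when a pattern is seen and decays geometrically, so a recently heard word is temporarily easier to retrieve:

```python
import numpy as np

def retrieve(slow, fast, probe):
    # Effective weights are the sum of the two timescales.
    return (slow + fast) @ probe

rng = np.random.default_rng(1)
dim = 16
cucumber = rng.normal(size=dim)  # pattern for the word "cucumber"

slow = np.zeros((dim, dim))  # long-term weights (left unchanged here)
fast = np.zeros((dim, dim))  # fast overlay: bumped on exposure, decays

# Hearing "cucumber": a Hebbian bump applied to the fast weights only.
fast += 0.5 * np.outer(cucumber, cucumber)

# Match strength right after exposure.
score_just_heard = cucumber @ retrieve(slow, fast, cucumber)

# Time passes: the fast overlay decays geometrically toward zero.
for _ in range(50):
    fast *= 0.9

# Match strength a few "days" later.
score_later = cucumber @ retrieve(slow, fast, cucumber)
```

The word is easier to recognize shortly after exposure (`score_just_heard` is large) than after the decay (`score_later` is small), without using up any neural activity or any long-term weights, which is the property the transcript describes.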

Parker Mitchell: My undergraduate research was actually looking at something very similar, except it was preperceptual. So you would flash the word “cucumber” very quickly. You didn't notice that you'd seen it. It was subliminal. And then you could pick it up more likely if you either saw it, you know, in a collection of words or listened to it. And so there was a question of how did you understand, how did you process the word cucumber without realizing it in such a way that your brain stored it and was able to recognize it more quickly? 

Geoffrey Hinton: I think there's also a phenomenon where you flash the word “cucumber,” and you'll be better at hearing, at recognizing the word “lettuce.”

Parker Mitchell: Yes, that was actually, in particular, it was the association of sort of similar words. 

Geoffrey Hinton: Yes, so it's not just that you got the word, you got the semantics of the word, without any consciousness. 

Parker Mitchell: Can you share some examples of how introducing new information to an LLM that it might not have had in its training data, how it can reason over that and come up with an answer that's similar to how a human might reason by analogy?

Geoffrey Hinton: Well, I can give a nice example of it doing analogies that most people can't do. 

Parker Mitchell: I would love to hear that. 

Geoffrey Hinton: So I asked GPT4 some time ago, when it wasn't hooked up to the web, why is a compost heap like an atom bomb? 

Parker Mitchell: And I would not be able to answer that question. 

Geoffrey Hinton: Excellent. So it said the time scales are very different and the energy scales are very different. And then it went on about chain reactions. It went on about how, in a compost heap, the hotter it gets, the faster it generates heat. In an atom bomb, the more neutrons it's producing, the faster it generates neutrons. And so the underlying physics similarity GPT4 had seen. Now, it probably didn't see it when I asked the question. It had probably seen it during training. 

So we see a lot of analogies, and we actually store things in the weights. And it's much easier to store things in weights if they're kind of analogous structures. Because you can use, you can share the weights. And these large language models are just the same. And so in order to store huge amounts of information, they have to see analogies between different facts that they're learning. And they will have seen many analogies that no person's ever seen. 

Parker Mitchell: So this is fascinating. So in order to compress that amount of information into that few parameters, they have to implicitly understand and codify analogies in their weighting.

Geoffrey Hinton: And many of those analogies are analogies at a deep level, like between a compost heap and an atom bomb. 

Parker Mitchell: And they might be discovering, they might have embedded in the weights right now, analogies that we as humans have not actually thought about ourselves. 

Geoffrey Hinton: Yes, because GPT4 is a not a very good expert at physics, but it's also not a very good expert at ancient Greek literature. And it may well be there's something in ancient Greek literature that's rather like some weird thing in quantum mechanics, but no one person has ever seen those two things. 

Parker Mitchell: And so, in 2010 you started understanding what was possible, and you and Ilya [Sutskever] won ImageNet. Alex, I think was…

Geoffrey Hinton: Alex Krizhevsky. It's called AlexNet.

Parker Mitchell: AlexNet, oh, that's right. 

Geoffrey Hinton: He was an amazing coder, and he managed to make, to code convolutional nets on NVIDIA GPUs much more efficiently than anybody else. 

Parker Mitchell: And so at that point, you've started to see that scale matters. How have the past 10 years gone? And 2016: why is that an important moment for you?

Geoffrey Hinton: Oh, the reason I mentioned 2016 is because I made a prediction in 2016 that was wrong in the opposite direction.

I predicted that in five years’ time we wouldn't need radiologists anymore. This upset some radiologists. And it turned out it was wrong; I was off by about a factor of two, possibly even a factor of three. The time is going to come. And I meant for scans; I actually think I said at the time five years, maybe ten. But when they're reading scans, in maybe ten years from now, I'm very confident that the way you'll read almost all medical scans is an AI will read them and a doctor will check it. The AI is just going to get much better than doctors. AI can see much more in scans than doctors can.

So my wife had cancer, and she'd get CAT scans every so often, and they'd say the tumor's two centimetres. And then a month later they'd say the tumor's three centimetres. Well, this thing's shaped like an octopus. One number is not a very good measure of the size of an octopus, right? You'd like to know much more about what's going on. And with AI we can do that. Doctors can't do that, because they don't know what the outcomes are. But I think with AI we're going to be able to see things about cancers that'll tell you whether they're going to metastasize soon and stuff like that. We know there's lots more information in the images that isn't being used.

Parker Mitchell: Well, it's as you said earlier, if you've got, you know, 500 doctors that can each spend a lifetime looking at 500 images and seeing the progression of them and then compress their brains, that's vastly more information than one single doctor.

Geoffrey Hinton: Yes. So no radiologist can train on enough data to compete with these things once these things are really good at vision. 

But, for example, in tuition, we're going to get very good AI tutors. And there's a lot of research that shows, take a school kid and put them in a classroom, they'll learn at a certain rate. Give them a private tutor, they'll learn twice as fast. And so we know that AI is approaching being good enough to understand what people are misunderstanding. And as soon as you get private tuition by an entity that knows what you don't understand, it's going to be a much more efficient way of learning than just sitting in a classroom and listening to a broadcast. So I think in health care and education, there's going to be huge advantages. 

Parker Mitchell: I want to spend a moment on that education example, because we've been inspired by that idea of a tutor for everyone, for people learning in traditional education, and a leadership coach for everyone who is at work. And so for us, this idea of personalization matters. Do you think AI could understand you in your context and almost be, sort of, a librarian for the world's information, but just for you? 

Geoffrey Hinton: Absolutely. So a few weeks ago, I won a Nobel Prize. And I've never had a personal assistant before. And the university gave me a personal assistant, and she now understands quite a lot about me. And it's wonderful. And everybody could have that if we can do it with AI. 

Parker Mitchell: That's fascinating. And you had to bring her up to speed, give her context. And if she had infinite access to your information, she'd be even more helpful.

Geoffrey Hinton: Yeah. Yeah. But I think that's sort of the good scenario. We all get these really intelligent personal assistants that know everything about us, and help. 

Elaina Yallen: When we think about building an AI product, something that gets tossed around a lot is human-machine or human-model empathy and helping users understand what maybe they should expect from models, so they know how to channel it properly. How do you think about that for software? 

Geoffrey Hinton: Well, there's one experiment where you have AI doctors and real doctors, and they interact with patients, and then you ask the patients, “How would you rate the empathy?” The AI ones do much better. The AI ones actually listen to the patients. So, already they can exhibit empathy. It may be that we think of empathy as: you think, “How would that be for me?” And then you think, “Oh my god, that would be awful for me. I'm so sorry.” And maybe they don't do that. But nevertheless, behaviorally, they seem to exhibit empathy pretty well. And if you had an AI tutor, you'd like it to have empathy about the fact that the pupil's misunderstood something. And I'm sure they're going to be able to do that. 

Parker Mitchell: And I think you would say, correct me if I'm wrong, that if it exhibits empathy, it might be doing it in the same way that we exhibit empathy. And therefore it might be, it's not just, like, performative empathy. It's going to come across as genuine empathy. Is that right? 

Geoffrey Hinton: It might be genuine empathy. I think for us to call it genuine empathy, the AIs would have to be similar enough to us that they could imagine what it would be like for them. We tend to think of empathy as the ability to imagine what it would be like for you, and then to understand how it is for the other person. And I think if you're not doing that, you're just saying, “Oh, that's terrible, I'm so sorry about that,” without thinking of how it would be for you, right? That seems like less genuine empathy, and AI can certainly do that kind. 

Parker Mitchell: I mean, I definitely agree with that, but I think part of the beauty of literature is that it puts you in other people's positions, and you can experience it through that, and you can say, “Well, I've never been in that position, but I've now lived that experience.” And if you have the world's literature compressed into that, you know, model, they might be able to understand what a range of humans, even more than I would, would be going through and exhibit empathy to that. 

Geoffrey Hinton:  They might. Yes. 

Parker Mitchell:  That's really interesting. So I want to zoom out to the societal side of things. So we've seen an enormous amount of hype, an enormous amount of coverage of LLMs in the past couple of years. One of the things you and I talked about is the analogy of sort of how difficult it is to see the future when things are growing exponentially. Can you share a little bit more about how you're experiencing that?

Geoffrey Hinton: Yeah, we're not used to exponential growth. So, a good analogy is, if you're driving at night on a winding road that you don't know, you often drive on the taillights of the car in front of you. And as the car gets further away from you, the taillights get dimmer. And they get dimmer quadratically. So, if you triple the distance, they get dimmer by a factor of nine. That's why you try to stay close. 

With fog, it's not like that at all. It's totally different. With fog, if you can see clearly at, like, 100 yards, you just assume you'll be able to see something at 200 yards. But actually, you can see clearly at 100 yards and then nothing at 200 yards, because fog is exponential. Per unit distance, it removes a certain fraction of the light. It's very different from linear or quadratic things that we're used to. People don't really understand the word “exponential” because it's misused so much. People misuse the word “exponential” to mean a lot. In fact, I think the rate at which they're misusing the word “exponential” is growing quadratically.
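The contrast Hinton draws between quadratic falloff and exponential attenuation can be sketched numerically. This is an illustrative sketch only: the specific numbers (unit brightness, a fog that removes half the light per unit distance) are assumptions chosen to make the shapes of the two curves visible, not figures from the talk.

```python
def taillight_brightness(d):
    """Inverse-square (quadratic) falloff: triple the distance, one ninth the light."""
    return 1.0 / d ** 2

def fog_visibility(d, loss_per_unit=0.5):
    """Exponential attenuation: each unit of distance removes a fixed fraction of the light."""
    return (1 - loss_per_unit) ** d

# Quadratic: tripling the distance dims the taillights by a factor of 9.
print(round(taillight_brightness(1) / taillight_brightness(3)))  # -> 9
# Exponential: light that is still noticeable at 4 units is nearly gone at 8.
print(fog_visibility(4))  # -> 0.0625
print(fog_visibility(8))  # -> 0.00390625
```

The point of the comparison: the quadratic curve degrades smoothly, so extrapolating from what you can see works reasonably well; the exponential curve collapses so fast that "clear at 100 yards" tells you almost nothing about 200 yards.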

Parker Mitchell: It reminds me of a riddle that I used to love as a child, which was, if you have a pond that starts with one lily in it, and it doubles every day until the 30th day, when the lilies cover the pond and obliterate sunlight until the pond dies, which day is the pond half filled with lilies? And the answer is the 29th day. But the intuition people have is, oh, maybe it's around the 15th. And so it's hard to sometimes understand, because we don't live in that experience, what exponential growth could be like. 
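The riddle's arithmetic can be checked in a few lines. A minimal sketch, assuming the pond starts at one "coverage unit" on day 1 and a full pond is therefore 2^29 units on day 30:

```python
# Lily pond riddle: coverage doubles daily and fills the pond on day 30.
full_pond = 2 ** 29  # assumed full-pond size, given 1 unit on day 1 and daily doubling
coverage, day = 1, 1
while coverage < full_pond:
    coverage *= 2
    day += 1

print(day)      # -> 30 (the day the pond is fully covered)
print(day - 1)  # -> 29 (the day it was only half covered)
```

One doubling before full is half full, which is why the intuitive midpoint answer of day 15 is so far off: on day 15 the pond is covered to about one part in 32,000.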

Is there anything, as you think about the future of work? We talked a little bit about the workforce. A world of everyone having assistants is obviously wonderful. A world of jobs being replaced is obviously going to cause a lot of social stress. How should people who are leading large companies think about navigating the next two to three years? 

Geoffrey Hinton: There's obviously the question of joblessness. We just don't know whether AI is going to get rid of a lot of jobs. I suspect it is. Yann LeCun, my friend, thinks it isn't. And in the past, things like automatic teller machines didn't cause massive unemployment among tellers. The tellers just ended up doing more interesting, complicated things. And taking longer about it, so you have to queue for a long time. So, maybe it'll produce joblessness, maybe it won't.

I suspect there are some kinds of jobs where you could use a lot more of that. So if, for example, AI made doctors more efficient, we could all use a lot more doctors' time, especially old people. If you got a doctor who was 10 times as efficient, I'd just get 10 times as much healthcare. Great.

There's other things, though, that aren't like that. And what'll happen is one person with an AI assistant will be doing the jobs that 10 people used to do, and the other 9 people will be unemployed. And the problem with that is, you've got an increase in productivity. That should help people. But you get 9 people unemployed, and one rich person who gets a bit richer. And that's very bad for society. 

Obviously, we can't see very far into the future. If you take the fog analogy, I think the wall comes down at three to five years. We're fairly confident we've got some idea what's going to happen in the next few years. In 10 years' time, we have no idea what's going to happen. And you can see that by looking 10 years back. We had no idea this was going to happen. 

I think companies should navigate it by going in the direction of everybody having an intelligent AI assistant. So people feel they're going to get improved working conditions from this smart assistant. You're going to get increases in productivity. That would be great for everybody. 

Parker Mitchell: The next five years are going to be extraordinarily eventful, for lack of a better word. And you've played an enormous role helping us get here, getting through the AI winter, getting through those moments when it might not have felt like it was quite as clear as it is now. And I just wanted to say what an honor it's been to have this conversation. And thank you. 

Geoffrey Hinton: Well, thanks very much for inviting me. It's been fun. 

Parker Mitchell: Yeah, I really enjoyed it. Thank you. 

Elaina Yallen: Thank you so much. 

Parker Mitchell: You're welcome.

AI Unpacked with Nobel Laureate Geoffrey Hinton

Even the “godfather of AI” Geoffrey Hinton has been surprised by the speed and scale at which AI has developed. In this keynote from Valence's AI & the Workforce Summit, he explains what is so powerful about the technology and how leaders can unlock its potential and prevent its pitfalls.

Geoffrey Hinton

Nobel Laureate, "Godfather of AI"


Making Managers Excellent

Prasad Setty led Google's original research, Project Aristotle and Project Oxygen, into what makes teams and leaders effective. In this fireside chat from Valence's AI & the Workforce Summit, he discusses the power of AI to provide personalized leadership development at scale.

Prasad Setty

Former VP of People Operations, Google

Key Points

Parker Mitchell: Prasad, we are very grateful that you flew in from California yesterday to join us. And I know that many of the folks here in this room, if they're not familiar with you personally, I'm sure they are familiar with your research. And I was first exposed to it in the New York Times Future of Work magazine that came out, I think it was 2016.

But you were a driver behind Project Aristotle, Project Oxygen. Can you tell us a little bit more about why you and Google invested so much in trying to understand managing, managers, leaders, and teamwork? 

Prasad Setty: Thank you, Parker. Great to be here. And yes, it's always fun talking about Oxygen and Aristotle, even to this day, after several years.

At Google, there was a notion in the early days that it was the place where convention went to die. And there was a strong belief, and this will be anathema to this particular audience out here, but there was a strong belief that managers did more harm than good. That they would become bureaucrats, that they would stand in your way, that they would slow down innovation.

And so in a company like Google, where innovation was our lifeblood, it didn't seem like the right thing to invest in them. In fact, early on in Google's history, they had all the software engineers report to the head of engineering. They removed all the middle management layers. And so Wayne Rosing, who was the head of engineering, he inherited all of them. Wayne retired soon after that, so I don't think it was a great experiment. But it was a sentiment that really persisted. 

And so with Project Oxygen, we wanted to actually prove that people managers don't matter. And what we found was that they do: the best managers' teams performed better and had lower attrition.

And so that was a revelation, particularly for our software engineering community, because it was based on proven data from Google. And as we carried out the research, what we were also able to showcase is that, you know, managers don't need to come in with inherent qualities of greatness; anyone can be a better manager than they are if they follow certain behaviors and practices. And so that is really what we were able to codify, and that led to a ton of acceptance and engagement across the organization to invest in better managers. 

Parker Mitchell: I mean, it's wonderful to fight an uphill battle and then to prove at the end that you're right. And it's great that the data was unequivocal in supporting that.

So the next version after that was teams. So, what is the role of teams? And I know there was some probably similar skepticism on the teams side that your research reversed. Do you want to share a little bit more about that?

Prasad Setty: Sure, as a next step, I don't think it was as much skepticism about teams, but an acknowledgement that we all work in teams and that is how we get work done.

But there were several beliefs and heuristics about what makes for successful teams. And so with Project Aristotle, we broadly looked at 400 different variables across our engineering and sales teams to say, what is it that drives team success? And broadly, these input variables were in the bucket of team composition, who's on the team for instance, versus team dynamics, how does the team interact? 

And what we found at the end of 18 months was, broadly, that team dynamics trumped team composition. And particularly within team dynamics, and certainly this isn't stuff that we came up with, the notion of psychological safety, which Amy Edmondson and others have studied in great depth, came to the forefront. So again, the sort of aha for us was that, at least in the Google context, any team could become successful if they followed some of these principles of team dynamics. And so then we started educating people on what it takes to improve psychological safety, etc. 

Parker Mitchell: So this was in the, sort of, 2015 to 2020 time period. We are now almost in 2025, where AI is playing just a huge role in all parts of this, whether it's leaders, whether it's team dynamics. How would you add to these studies if the AI part of the equation were included? 

Prasad Setty: And I think we have touched upon this throughout today. In your opening, you talked about context, you talked about personalization. And certainly I'm a big believer in analytics. I founded Google's People Analytics team. I've been in this space for a long time. But with our experiences, we start recognizing some of the things that are limiting as well. And one of the big issues that I have with how we think through analytics is that we flatten people, right? We collect 10, 20, 30 data points about them, or in some cases maybe many more, whether it's state variables that you know about them from HR systems, or from their resumes, from the work they've done, or from survey responses. But we're still flattening a whole bunch of rich people experiences.

And I think with AI, you now actually are able to sort of uncover that richness. And I think that richness comes about in asking people to interact with you in language and to understand more about the context that they operate in as well as the people in the teams that they interact with. So it is that dimension that I think is new because, when we think about performance, it's really about how people have the skills to engage in behaviors that are appropriate to a particular situation with the teams that they're working with, right? And so there's like multiple variables out here. And so I don't think we had the signals or the capability to understand that entire richness without AI, and now we are able to. 

Parker Mitchell: It sounds similar to what Anna was talking about, with the idea of sort of skills being a little bit too one dimensional, and that the addition of context, the skill application in the context is going to be important. And I'm hearing even more dimensions from you. It's not just sort of the work context, but it's the team context. It could be the moment in someone's career. And so, as you say, we're unflattening the individual, and we are adding new dimensions. That sounds revolutionary for people analytics. How do you see that advancing or evolving in the next few years? 

Prasad Setty: One of the thoughts for me, we are from the people function, so our first instinct is to think about individual people, and we should absolutely do that. I absolutely agree with the humanness and the value of humanity that shouldn't be lost in the middle of this. But as it relates to organizational context, I think, before we think about workforce planning and skills, we need to think about work planning. I think knowledge work today, a lot of knowledge work is about skills being applied in certain dimensions but with very few degrees of freedom. And I think that's the kind of work that people don't like to do because it doesn't give them too many options, right? Like, you're just like pigeonholing people and all of their capacities into things that are perhaps easy for AI to do in the future. It wasn't possible with existing technology, but with AI that will be possible. 

And so I think knowledge work itself will evolve toward skills being applied where you have many degrees of freedom and a lot more uncertainty about outcomes. And so then knowledge work really becomes about: can you make consistently good decisions when you have multiple degrees of freedom and lots of uncertainty? That is where humans will have to excel. And that, to me, is what will then result in the compound interest effect of good decision making. And that, I think, is where we, as humans, will need to use tools like Valence’s Nadia, etc., to improve our capabilities. 

Parker Mitchell: I mean this is fascinating, I'm hearing this concept of sort of the aperture widening, and more degrees of freedom in the choices that people are going to have, and that that's empowering for them. That the horizon that they could see is going to be a little bit shorter because there's so much complexity. And then on the flip side of it, we're also trying to measure performance. We've talked a lot on this stage today about how do you calculate ROI? How are you going to assess performance? Where will the impact of AI show up in that? Do you see any contradictions between those, or how will they be married? This sort of choice and expansion of options and the need to define performance and measure it, you know, quite strictly. 

Prasad Setty: It is going to be evolutionary. I think when we talk about performance, I think about four Es: effectiveness, efficiency, experience, and equity.

Parker Mitchell: Effectiveness, efficiency…

Prasad Setty: …experience, and equity. 

Parker Mitchell: Experience and equity. 

Prasad Setty: And I think when you put in these kinds of systems, first you want to focus on, as many of the speakers today have talked about, make the experience really good, make sure there are no inequities in them. And so you get people starting to use them, and that becomes your first sort of activity measure of performance.

But then over time what you really want to do is see if people are becoming more effective at their jobs, right? And so that is where what a lot of folks were talking about, augmentation rather than subtraction, I think is going to be important. And then the efficiency measure. I'm sure all the CFOs are excited about it, but I think that becomes the least important of all of these. 

Parker Mitchell: That's interesting. So I think what I'm hearing is that there's certain phases as new ideas are introduced or new technologies are introduced. And the ROI, or the measure of effectiveness of a new intervention, of an experiment that we've talked about should change depending on where it is. Is it early on, and is it experience and uptake based? Or later in its evolution, when it's more efficiency and maybe harder numbers. Is that right, that there should be an evolution? 

Prasad Setty: I think so. Otherwise, the criticism that I typically have is that people very quickly jump to the efficiency side of it. And then it just becomes something that your organizations resist, because they don't want to be seen as people who are just cranking the wheel faster. They want to really be better at their jobs and sort of result in, you know, much better decision making and so on. And so I really loved what Tim at Delta was talking about in terms of the kinds of signals they're capturing for their sort of fourth category of work. Because I think one of the signals that is very hard to capture, but would be useful, is: what is the quality of decision making that is happening? Right? Across your manager teams or your, you know, your leadership teams. And so, how do you capture that, and how do you see if that is consistently getting better because of the application of this kind of technology? And that to me is going to be the real productivity improvement in the long run. 

Parker Mitchell: I want to jump to that in a moment, but I'm picturing, or I'm assuming, Google is a company of engineers, very numbers based. How was the point of view that you introduced, that you don't have to jump to efficiency, you don't have to jump to ROI right away, how was that received within the broader Google community? 

Prasad Setty: You know, Google was very expansive in its nature. And the thought for many years out there was that innovation trumped efficiency. And so anything that you could showcase as helping people have the autonomy to think creatively. We heard a lot about critical thinking, we heard a lot about imagination in the previous session. So anything that furthers those elements is certainly going to have a better chance of resulting in innovation. And so that was always seen, at least a long time back, as more important than efficiency. 

Parker Mitchell: So you've talked about just the nature of work changing, the nature of knowledge work changing. And when it changes, as it evolves, the return on judgment is going to be higher. Are you seeing any early glimpses of types of work where that change is already happening, where knowledge work is changing?

Prasad Setty: This is one that I would love to see, but I loved hearing some of the experiences that Lesley and Jennifer and others are talking about, right? I think they are truly leading this kind of thought. And then Anna spoke about the world of athletics. One of the things that I'd love to see in terms of whether these capabilities are improving, is in using AI for better simulations. In the learning community, I think it's sort of well established that, as adults, we learn by doing. And so that is why new job experiences are so much more valuable than perhaps classroom education. 

But what exactly do you get from experiencing something? You get to experience multiple contexts, and you get to repeat your behaviors in different interactions. And so if you could simulate that, and if you could accelerate that kind of development using AI, then I think you are furthering everyone's development. And so, if you look at the sports analogy, for instance, in Formula One, race car drivers sit in simulators before they get onto the racetrack. And that is how they practice.

And so what is that equivalent for a daily manager? And I think the equivalent of a game in any kind of sport, the equivalent in the corporate world is meetings. We all go through millions of meetings in our career. Are we getting better in each meeting throughout our careers? And if not, why not? And how can AI or other things help us be better? 

Parker Mitchell: I mean, it's interesting. I'm tying together a few threads here, but if AI helps us be a little bit more productive in our jobs, it can save some time. And if that time is freed up, it can give us more time to do some of this practice, to be able to refine these skills, and then be able to bring better judgment to bear. You know, rather than just be busy for 40 hours a week, we can be thoughtful for 10 hours a week and then be high value for 30 hours a week. And so I think there's a world in which the efficiency gains can free up time for some of these refinements in judgment. 

Prasad Setty: I think you summarized it beautifully. That is the arc that I think we all want to get to. And you're sort of seeing that even with the development of these large language models, right? You know, I've worked in technology for a long time, and we'd always think about latency. How quickly is the technology responding to your query?

And we'd always want to reduce that. When you type something into Google search, you want the latency to be as small as possible. But now you suddenly have models like GPT-4o, where the thought is, let us spend as much time as needed on inference so that we can reason better. And so that is exactly the equivalent on the people side too. How do we get to slower, better judgment, and therefore better decisions in the long run? 

Parker Mitchell: You and I have had a chance to interview a number of CHROs and other talent leaders over the past six months. And I think one of the things they've always said is they just don't have time. Even those who are embedded in Silicon Valley, they don't have time to think through how the technology is changing. What's one piece of advice that you would leave for folks in the audience about the importance of doing something even in that imperfect fog that you've just mentioned? 

Prasad Setty: It is certainly a hard challenge, being in any kind of senior operating role, particularly in the HR side, and there are lots of things that are important. But as Bill said right from the beginning of this morning, this is certainly on top of every board conversation. And so I'm sure everyone is thinking through what the right activity is out here. 

I guess I would go back to a couple of fundamental principles that I strongly believe in, particularly for this audience. I think after all these years, and even with all of this technology and all the pressures that we see, I still believe that people managers are an incredibly important leverage point for any organization. The moment you have more than, let's say, 30 or 40 people managers in your organization, that has got to be the place where you invest disproportionately more of your resources and attention. Because that community is what is going to define the lived experience of your employees, and therefore you'll get the best sort of return from that. So that's got to be one of the most important use cases. 

And just to echo everyone else's views out here, there are two ways to think about AI applications right away. One is, can AI be capable of doing things that you don't want to do or your people don't want to do? So it's very much a subtraction-oriented application of AI. Or the second is that you think of it as an addition-based view of AI, which is to say, can I deploy AI in a way that is going to help my people be better at their jobs and grow and learn better? And I do think that Valence’s Nadia certainly falls in the latter category.

I don't think you should choose only additive technologies. I think you have to look at subtractive ones too, but I think the additive technologies are likely to land better with your organization because they don't feel like they're being displaced. And any moment that you're waiting before you deploy these kinds of additive technologies, you're robbing your people of learning opportunities. And that, I think, is a waste of their potential.

Parker Mitchell: I mean, the urge to move fast, I think, despite the uncertainty, we've heard that over and over again. So thank you for joining and sharing those thoughts with us. We really appreciate it. 

Prasad Setty: Thank you, Parker.

Upskilling an AI Workforce

AI is already changing the jobs we do and how we do them. It’s also one of the best tools we have to navigate the changes ahead. In this panel from Valence's AI & the Workforce Summit, HR leaders Rachel Kay (CHRO, Hearst), Chris Louie (Head of Talent, Thomson Reuters), and Tina Mylon (Chief Talent and Diversity Officer, Schneider Electric) share how they're thinking about upskilling their workforces for the jobs of tomorrow.

Rachel Kay

CHRO, Hearst

Chris Louie

Head of Talent, Thomson Reuters

Tina Mylon

Chief Talent and Diversity Officer, Schneider Electric

Key Points

Das: For our next panel, I'd like to introduce Tina Mylon, Rachel Kay, and Chris Louie. As you make your way up, I'll do some intros just so that we can keep things moving. Tina is the Chief Talent and Diversity Officer at Schneider Electric, where I believe you currently have an upskilling initiative called Upskilling at Scale. Rachel Kay is the Chief People Officer at Hearst, leading recruiting, diversity and inclusion, compensation, and talent planning, and I'm sure quite a few other things, across all of Hearst's businesses. And Chris is the Head of Talent Development at Thomson Reuters and also teaches, I believe, algorithmic responsibility for the Human Capital Management program at NYU.

Chris: That’s right. I work for Anna. 

Das: Wonderful. One of the things I love about a conference or a day like this, a summit like this, is that everybody comes from such different businesses and industries, but is talking about a topic that's shared. AI is going to impact different companies and different businesses in different ways, but it's going to impact all of us. So I'd love to just start by hearing, what is the impact in your industry and specific business, and then, if you're able to, what are some of the insights into the new skills and roles that you're finding you need? Round robin, maybe, Chris, if you want to start. 

Chris: Yeah, happy to start. So there are lots of analyses out there looking at different industries, looking at workflows and roles, and trying to estimate how many of those are able to be either augmented or automated with AI. If you take a look at legal, and if you take a look at tax and accounting, those are usually the ones that are all the way in the, whatever your two-by-two is, upper right. Those happen to be our biggest businesses. Reuters is also part of that as a news business. And so there's a very profound potential impact of AI, both on our product set and then, because of that, for our own employees. It's very easy for them to get their heads around the fact that AI is and can be existential, which honestly has made our job a bit easier. Thinking back to that change management panel a little bit ago, it helps employees understand the importance of getting proficient in AI and continuing to develop skills, as some skills you may have rested your laurels on for the last several years, or even decades, may become either less relevant or less protected. So some pretty profound impacts, both on our external business, and that flows through to our internal one. 

Das: Rachel?  

Rachel: Let me just do two examples. So Hearst actually is a portfolio of six very different industries. But here are two examples of things we're worried about, and I think we don't know what's going to happen. Take magazines. Right now, magazines is largely a digital business. The bulk of our revenue in magazines comes from digital advertising. How do people find our magazines? They go online, they might search: what are the best air fryers? They click a link, it goes to the Good Housekeeping website. Now they're looking at our website, and we're getting revenue from advertisers because you are seeing advertisements on the website. And then if you go ahead and click on the link, to Amazon or whatever, to buy the air fryer, we're getting revenue from that. Those are our two biggest pieces of revenue. Now, when you go on to Google and search for the best air fryers, you're going to get a blurb at the top that synthesizes it for you. You're never going to come to our page. So we're facing, in magazines in particular, some really existential threats around two of our biggest revenue streams, and we have to work out how we are going to accommodate for that. 

At the same time, of course, all of the different–and you see these in the lawsuits–chatbots or LLMs are using our content to create the answer. So it is an interesting question around how we square that circle, because there's a lot that's unknown, and some of it will be legal, I'm sure, some of it will be other revenue streams, but we're really working that out. 

Another business that we have, which is very different, is Fitch Ratings. Fitch Ratings is a credit ratings agency; they compete with Moody's and Standard and Poor's. If you think about what they do, those analysts are just combing through tons of information–much of it publicly available–to assess the risk of purchasing a bond from a particular company. That, theoretically, one day could all be done automatically. Why come to a ratings agency? Now, at the moment, you have to come to a ratings agency because it's highly regulated and that's required. But in the future, will investors need that? Or will they be able to determine those things on their own? So I think there are some really big questions our businesses are struggling with, and so much of it is still unknown. 

Das: And that existential piece, I think, is an interesting one. Tina, you're in a very different industry. So what are you seeing at Schneider Electric and how are you feeling? What's been the impact on the company and some of the roles needed and skills? 

Tina: Can you guys hear me? Okay, just testing.

So hi, everyone. I work for Schneider Electric. We are an almost 200-year-old company headquartered in Paris, and it is, as we call it, an industrial tech company. Basically, we're in energy management: automation, digitalization, anything a plant, a data center–that's a big part of our business–or a factory is using to manage its energy efficiently.

So, our whole mantra is: energy access is a human right. And it's a distributed workforce: about 50,000 people in Asia, 50,000 in Europe, 50,000 in the Americas. Part of what we do is really try to make sure, at the end of the day, whether it's your home or whether it's a plant or whatnot, are you using energy in the most effective way? With electricity being one of those game-changing technologies we talked about in the morning as the most efficient vector. 

So, back to your question: part of what we're interested in, especially for our customers, is how do you create a more sustainable energy landscape? What is the role of digital? And now, 10 times, 20 times, 100 times over, what is AI–even generative AI–doing? A lot of it is about managing the consumption of energy. So how can we be more efficient that way? It's a productivity gain, and we see a lot of interesting opportunities with AI as well as generative AI. And then, also, what is the energy mix? How do we shift toward more sustainable sources, like renewables, and how do we do that? For me, in the talent and diversity space, that comes down to a very pragmatic question that all of us, I think, are grappling with: what are the most pragmatic use cases when it comes to making sure our workforce is equipped to serve our customers and also the broader sustainability goals that we have for society at large?

The last thing I'll quickly say: for us, one of the things that's most interesting to me–and we'll get into use cases–is that a couple of the panelists and some of the speakers talked about inclusion and equity. And that, in the face of generative AI–wearing my DEI hat–is super fascinating. So really, amid all this accelerated technology, how do we make sure no one is left behind, just like with energy access? How do we make sure the data is as good as we can make it? We are all struggling with that, as you guys alluded to, especially around biases. And then how do you make sure everyone has access to that transition to a more AI-first world? 

Das: We have some great themes here, I think, to talk about. There's this existential threat, I think, that a lot of people are starting to feel. As a content producer myself, I know there's sometimes that feeling of: back against the wall, I need to change–what's going to happen next? We have this idea of: how do you bring everybody along within that environment? And then you have the fact that for some businesses it is existential, while others have huge opportunities. Within all of that change, and how fast everything is moving, where are you now with understanding? Do you know what skills and jobs you need in the future? Are you still figuring that out? Where are you with that question? 

Tina, you highlighted it as something that's being grappled with. 

Tina: Maybe I'll start and then have Chris and Rachel come in. For us, the answer is mixed, quite honestly. As a couple of folks I talked to know, it's one of the things that's keeping me up at night–and Anna spoke eloquently about skills, what it is and what it's not. At Schneider Electric, we are embarking–and it's already been a year and a half in the journey–on a major revamp of our skills ambition. We're actually not even starting with technology. We're starting with the whole job, skills, and career architecture. For those of you who have been in this space for a long time, you know how painful that can be. So we are redefining our job architecture: more outside-in, more market data, and at the granular skills level. We've just engaged, in the last couple of weeks, a technology partner to provide the end-to-end user interface. And for us, it's really starting with the most critical skills, which I think will be familiar to all of us: technical skills for our R&D area, especially around engineering and digital AI–though probably even more around software development for us–and certain human skills. This is where we're trying to codify, in the system and in the user experience, where people are, what the gaps are, and how you get the pull-through when it comes to truly upskilling people at scale.

And we are 150,000 people, like I said. Every year, 15 percent are turning over through hiring, but the 85 percent–that's the mighty majority of folks–we want to very much be ambitious about, and focused on how to support them in their upskilling. 

Rachel: Yeah, I don't think we know what the skills are that we're going to need or what the roles are going to be, because so much is still unknown in terms of what our future business model looks like. You know, I think what I'm struck by right now is that the most important skill at the moment seems to be curiosity. And what I love about this, about living through this moment, is in some ways how egalitarian it is, because nobody has the answers, and so that means the answer can come from anyone. 

At Hearst, one of our, you know, new folk heroes is this guy named Mike McCarthy, who was hired a year ago as a salesman in our Connecticut newspapers. And, just like Jennifer was describing, when the ability to create your own GPTs launched, he was one of the first people to just dig in and start playing around. He created a whole raft of GPTs: one is the skeptical-buyer GPT, so a salesperson can go on and practice their pitch; another is the deck-creator GPT. He has like 10 different GPTs just for newspaper sales. He started playing with them on his own, he got his team to use them, and all of a sudden other people across our newspapers business were picking them up. And now he has a new job: he's the head of AI for newspapers. So, you know, would you have picked a first-year salesman from our Connecticut newspapers as the future leader of our newspapers' AI efforts? I never would have. If I was looking for the skills for that role, I wouldn't have thought: let me go find a guy who's really great at sales in Connecticut. But because we were open to it coming from anywhere, and because he was curious, he was able to take that on. I just think it's really exciting, and what I'm trying to check my own biases on is being too prescriptive about what the skills are that we need, because I think we're still figuring it out. 

Chris: Rachel, I'd add on to curiosity as an important skill–I'd definitely agree with that. Two things I would add and put at the same level. One is critical thinking. With curiosity, you can ask the questions; you have the sort of courage and compulsion to ask the questions. But there are so many different places where you can get answers back now, and as we know from leveraging AI, a lot of those answers can be really, really wrong and hallucinated. I mentioned before, one of our biggest businesses is in the legal industry, and the cautionary tale that keeps going around is the couple of lawyers who submitted case histories that were completely fabricated. And, you know, they got their slap on the wrist. You need that critical thinking–we all need critical thinking in the world these days, but especially in trying to leverage AI.

The other thing that I would add is, I guess I might call it, imagination. Because I think the beauty of the promise, the potential, of an AI solution is that it gives us capabilities and skill sets, both as organizations and as individuals, that we might not otherwise have had or had access to. And as a result, you can imagine and invent both new avenues to go down and different ways of doing things. And, to loop back to your point around trying to figure out how things should work and where the business is headed: I think that is the first thing you need to do in order to figure out what skills you ultimately need to develop. I don't think it really works the other way around. Where's your business going? How's it going to work, or how could it work? Maybe you don't know, but you can have some scenarios, and then: okay, what skills do we need, or will we need, to be able to either explore or deliver on that promise? I think we all either are or should be living in that space right now. We certainly are at Thomson Reuters. 

Then, Rachel, the other thing I would share is, you know, the example you just gave of the salesperson who may not have been the stereotypical person to put on top of that project. We did a similar thing with somebody in our news business as well: taking them out of their job, having them spend time with our Thomson Reuters labs to really understand the potential of what AI could do, and then unleashing them on the business. And while, again, it may not have been the stereotypical background, think about those different dimensions. Curiosity–she definitely had that, to even be up for this. Critical thinking–I do think that's a hallmark of folks who have operated in the news industry; whether it was AI or just human beings telling them stuff, they have to really apply that filter and lens to what they're hearing to try to get at the truth. And then the imagination piece–what better person to dream things up than somebody who's been living in one place and has the expertise to say: hey, what was painful, or what has been painful, and what could be better? 

Das: Yeah. And it's interesting, as we're highlighting these skills: the three big ones I heard were curiosity, critical thinking, and imagination. And then there's the word upskilling. The big word that comes to mind for me is this idea of learning. One of my favorite business quotes is from a guy named Arie de Geus, who said learning faster than the competition is the only sustainable competitive advantage, especially in these fast-moving times.

So taking that kind of learning lens, how have you approached designing the learning programs necessary? I know you all have different initiatives around this. Talk a little about those initiatives. Like, how are you creating that space for learning so that you can learn faster than the competition, so you can adapt? And I think earlier somebody highlighted the innovator’s dilemma. Like, how do you get out of that? And what are you doing? 

Chris: One of the things that we've done is actually to go right at what you just mentioned: creating the space. So, you know, I don't know that this is completely novel–I think many companies have taken a stab at things like having a focused, dedicated, regular learning day or learning week or learning month. And it sounds like: listen, we should always be learning, so why do you need to dedicate a day? Having said that, there's a lot of stuff we should be doing, you know? It's about how you demonstrate to your organization that, hey, this is really important, that it is a priority at the level of the other things you are doing, and that we are, in a concerted way, together going to block the time. 

Chris: So that's what we've done. Every quarter, we have a learning day. We instituted that in early 2023, and, probably not coincidentally, the first learning day was dedicated to AI. So we had sessions on AI and LLM 101, AI in our products, AI in our internal operations, and then a workshop on AI in your job. We ran multiple versions of those sessions across time zones so people could get the benefit of attending live.

And, again, that demonstrated to people that they should make time for learning. It also became a key pillar of our approach to getting our workforce spun up on AI. We tried to go broad in educating–this, again, was an example of that education. Then enabling: we put, effectively, a privacy-protected version of ChatGPT in everybody's hands. And then experimenting, both with that solution and through hackathons, etc., to show people that not only was it okay to experiment, but that it was valued. 

And then the other piece of that AI learning was actually focused, right? So we had a bit of a SWAT team that would go in with individual organizations and their leaders to help diagnose where the biggest business needs–and therefore potential AI use cases–might be, and to figure out how, technically, operationally, procedurally, and from a human standpoint, we might help them get over the hump and actually enable their workforce to realize those use cases.

So that's kind of the approach that we took. And there was specialized, function-specific learning that went along with those focused approaches. 

Tina: We're really trying to move away–and we're not really there yet–from learning hours, coming from a very KPI-driven, somewhat traditional learning environment around the world. When I joined the company eight years ago, it was like this obsession: tracking every little thing, you had to mark it in your LMS. So we are similar. And I think we've borrowed your phrase as well, because our campaign is around creating time and space to learn. Now, the thing is, while freeing that up and encouraging people to really upskill, at the same time we are very focused on codifying that data.

So that's why, back to the skills transformation, having a system to do that. And sometimes it sounds a little harsh, but also saying: this is about us staying ahead of competition, growing the business. But you grow yourself, you grow the business. So that whole expectation is: it's your job. It's your job to upskill, stay relevant. The company has to do its part to provide the great user experience, the great content. But that mindset shift, we're trying to message more and more. It's not easy. Everyone's still used to: How many trainings do I take? Which ones are mandatory? Like, how many hours? And that broader shift is something we're trying to implement.

Rachel: So we're kind of a hot mess when it comes to, like, learning culture. Hearst is interesting. Whenever I go to these panels, I always feel, like, an inferiority complex creep in, but, 

Tina: No, it's like collective commiseration. 

Rachel: We're incredibly decentralized, right? I mean, Hearst is incredibly decentralized. We have six very different businesses. And, from the center, we tend to provide carrots, not sticks, right? So: here's stuff; if you want it, business, you're welcome to use it. And if not, that's okay, our feelings aren't hurt. This is one area where our CEO kind of came down and said we need people to be familiar with this.

So in terms of AI, I'm thinking about it as 101 and 201: 101 is mandatory, and 201 is optional. In terms of the mandatory piece, we wanted to launch training to make sure everyone was familiar with the tools. Our tech team had invested a lot–and I know many businesses have–in creating proprietary versions of the OpenAI and Anthropic tools, all those different tools.

But we weren't seeing a lot of uptake. The curious, the early adopters were using it, but over 30% had never ever even tried it once, and the vast majority had gone and looked at it and never gone back. And so we wanted to force people to do at least something, but it had to be something of quality and something that felt relevant.

And so we invested in having live learning sessions. Everyone had to go to a live session, virtual live session. And it was customized based on your job function and your business. And so we had 17 different versions of the training, and we delivered the training over 100 times. We got 8,000 people to go through it.

The scores for the training were pretty high. They weren't out of the park, but, you know, my L&D leader kept getting very upset by that. I'm like, look, we dragged some people to this, so of course they’re not gonna say it was fantastic. But, in general, it got pretty good feedback, and it was customized, as I mentioned, based on what they do. So if you're a salesperson, you're going to go and see use cases for how to use this in sales. If you're in HR, the use cases were in HR. If you're in content and news creation, it's going to be use cases relevant to that. 

That group, by the way, was our lowest scoring, both in terms of value for time spent and propensity to use genAI in the future. Anyone who's a content creator, based on what I was saying earlier, is very skeptical: worried that this is going to replace their jobs, that this is going to downgrade the quality of what we do. And so, you know, seeing that, having them go through the training, getting the feedback, hearing what they're saying, has actually been really helpful in pulling together our future communications on that.

So, that was the 101. In terms of 201, we have genAI champions, we have our first Tech Academy, and we have a TechNext Conference, where we bring in external speakers like Ethan Mollick. So we're trying to keep it in the water so that people can keep getting smarter. But for us, the big unlock was making sure everyone at least had that first exposure, and then they could decide where to take it from there.

Das: So we've come across this idea a little bit of measuring success, right? How do you measure success with upskilling, and how do you measure success on upskilling when you're trying to figure out still what those skills are? So I'd love to hear a little bit about, just if we could dig in a little bit more on what are the metrics and the measurements and how you're thinking about that.

Rachel: I mean, for us, and again, this is probably pretty basic. For us, right now, the measure is people's usage of the tools and active projects under discussion that use genAI. So, if you look before and after we launched that training program, we saw a 175% increase in the number of users using our internal ChatGPT. And we saw a 200% increase in the number of projects under active discussion for development. So, for us, that's how we're measuring it at the moment, right? Eventually, it'll be actual business results. But for right now, I think it's just people engaging, is our first horizon. 

Tina: I think, for us, it's simply breaking it down into the softer, more human skills versus the technical and digital skills. On the latter, maybe we're overengineering it, as Schneider typically does, but we are going hardcore on codification, certification, levels. What's helped me is that it's allowed me to make more of a business case to also, frankly, outsource a little to go faster–going to a Coursera or a platform where it's fairly structured and you can cater it by certain domains. Because we have this habit of wanting to create and build, because we think we know it all, and that takes up a lot of time as well. So on the technical part, we are super structured about certain domains and job families, where you have to be, and some of it's very compliance-driven.

Chris: We are having very active debates right now about this very question and the different metrics, including the activity and usage metrics that, Rachel, you just shared. And then trying to be more regimented about what do you need to actually have developed, and have you spent the time to develop it in these specific functions, et cetera.

Those are certainly some of the things that we have been measuring, but we are talking about them because we are trying to figure out: what are our top two or three things, right? We all have, like, hundreds of metrics that we look at. One thing we are talking about is trying to get past the activity, kind of to your point, and thinking about the outcomes. And one set of outcomes, from the employee or colleague standpoint, gets at, I think, a core belief. We talk about skills all the time–we in this room literally talk about skills all the time. But for any individual employee who might be thinking about themselves, I don't think skills is the thing. They can think about skills, but they think about skills as a means to an end. And I think that the end is often near-term and long-term career growth, career opportunities. Having a job that I like or am excited about. Having things I can work toward that I might be even more excited about in the future. 

So in that vein, the engagement survey that we put out has, like everybody else's survey does, statements that we ask the degree to which you agree or disagree on it. And statements around, I believe I'm growing my career at Thomson Reuters, or I believe I'm developing the skills that I need to be successful, to be able to do my job or unlock future opportunities.

We're looking at those measures, and we're thinking about those measures very seriously. Those have improved, by the way, over the last couple of years, and we want to keep that going. So we're thinking about that from a colleague standpoint. From the business standpoint, I don't know how many of y'all talk about workforce planning all the time, or strategic workforce plans. We certainly do. And if you think about it theoretically, there's a case to be made for measuring delivery against your really great, crystal-ball-defined strategic workforce plans in terms of the skills that you will need. If we could somehow both get to those plans in a good way and then assess the degree to which you've developed those skills–so that you're hitting on your plan, delivering on your plan, which means you have the talent you need now and in the future to address business needs–that would be really great. You know, are you 90% of the way there? Are you 50% of the way there? A lot of the stuff I just said, we don't have solid yet, but, theoretically, if you did, that feels like something that could be really great and on target with delivering for the business.

Tina: It's not perfect, but we're doubling down so much on skills. And listening to you guys and the audience–if I'm going the wrong way, you guys need to tell me, and I need to pivot, because I'm like, oh my god. But one quick thing that Anna said really resonated with me also. I totally agree, Chris, that skills aren't the end-all, be-all. It's the growth and the career evolution of employees–that's super important. When we make talent decisions, we actually have a sequence, meaning we do start with skills and qualifications to be able to do the job. And then–and this is small scale, maybe the top 1,000 jobs–we look at two more factors. So you look at skills, and then we look at your preferences, which is a little bit more psychometric. And then we look at strategy. So you may have the skills of a really strong communicator, and then, in your preferences, you're a huge introvert, like myself. And then you look at the strategy: Tina might be good at communications, she's a strong communicator, but she's a huge introvert–yet what does she do strategically? What is she actually doing about it? And that formula–we're testing it, and it works pretty well. That's back to the holistic, beyond-skills question: how do you really grow a talent and assess a talent? 

Das: So I know we're coming up on time, but I have to get one last question in here, because you just teed it up so beautifully, which is: we started the day talking about this great promise of AI to really personalize knowledge to each of us if it has the context it needs. So, even though AI is what we're trying to adapt to, it's also this really powerful tool that can help us adapt. I'd love to hear how you are thinking about that–about AI, or an AI coach, as a tool for learning and that sort of upskilling and adaptability.

Rachel: I mean, whether it's a coaching tool or AI in general, AI is great at teaching things that it already knows. I see this personally with my kids, right? You can go on and generate quizzes. My son had to read the first 60 pages of Fahrenheit 451, so I asked for six questions that would test whether he had actually read them. Worked great, and he had. So I think it is great for reinforcing things you already know. It can be used in this coaching context, or even just: I want to test my ability to articulate a philosophy for this or an approach to that. What are some questions I might get asked? What are some things I should be thinking about? So I think the tools are already out there that can hold you accountable in your own learning journey, whether they're customized for that purpose or not–a lot of them are just generally good at it. I haven't thought yet about how to formally incorporate it, but something we tell people is: once you've learned something, those same tools you just learned with can now help you retain that learning, and you should be thinking about how you do that on a regular basis. So that's just what I am thinking.

Chris: I'll add on to that. I feel like with most problems–most business problems or situational problems–the answer is usually within the person. It's usually within you, and you just need something to help you pull that out. Most of the time; not all the time. Sometimes I have no idea, and the answer is not in me. And I think an AI solution, a coaching solution, can help with that. A great coach can help with that. And then the other thing is: we don't scale, you know? What HR learning organization here is built to scale in anything that remotely looks like a one-to-one way, at least for most of the population? So part of the beauty of this is that it can scale and it can personalize. 

Tina: For me, just to add, it is, to me, I think of AI, especially generative AI, as augmentation and acceleration. And nothing replaces the human touch. So high tech and high touch go well together, and the choices we make will have big implications.

I, myself, have been using Nadia for about half a year now, and we're piloting it in our organization. And it's going quite well. And I shared this with Parker, but, at the same time, my chief AI officer has been on my case going, “Pay attention to this, da da da da.” They were super nervous. But the way we position it, again, to augment, to accelerate, like a check in, as a leader, I myself have found that super useful. So, I'm positive, cautiously optimistic. 

Das: That is wonderful, and, you know, thank you again. I feel like we could take quite a few more questions on this, but, for the sake of time, I'm going to let everybody get to their break.

First though, I want to just give a huge round of applause for Tina, Rachel, and Chris. Thank you so much.

Upskilling an AI Workforce

AI is already changing the jobs we do and how we do them. It’s also one of the best tools we have to navigate the changes ahead. In this panel from Valence's AI & the Workforce Summit, HR leaders Rachel Kay (CHRO, Hearst),  Chris Louie (Head of Talent, Thomson Reuters), and Tina Mylon (Chief Talent and Diversity Officer, Schneider Electric) share how they're thinking about upskilling their workforces for the jobs of tomorrow.

Rachel Kay

CHRO, Hearst

Chris Louie

Head of Talent, Thomson Reuters

Tina Mylon

Chief Talent and Diversity Officer, Schneider Electric

Game-Changing Coaching

Author of The Digital Coaching Revolution, Dr. Anna Tavis believes HR professionals should look to performance athletes and their coaches to understand the power of a work coach to drive performance.

Dr. Anna Tavis

Chair of the Human Capital Management Department, NYU

Parker: Great. And this is a perfect segue. I'm going to welcome Anna to the stage. Anna is the Chair of the Human Capital Management program at NYU. Anna has literally written the book on the future of digital coaching. We're going to talk a little bit about some provocations around, you know, first, sort of why coaching is so important and then what are some areas that we can look to to potentially get a glimpse of what coaching in the workplace might look like. 

So, welcome. Thank you Anna.

Anna: Thank you so much. You know, I want to build on, and maybe challenge a little bit, what Brent said before about the future. Not to go to the cliché, but some of that future is already here in the ecosystem, and it might not be sitting inside organizations. And that's the topic of my discussion: where do we already see this blend, this combination, of coaching and technology working, setting some precedent for what we could be looking for in our organizations? What are the patterns that have already been tried and proven to work? That was the idea I had when I was researching my book, and I want to share it with you. 

Parker: Wonderful. I want to get to that analogy, but tell us a little bit about you and why it was coaching–and the blend and intersection of coaching and technology–why is that so important to you?

Anna: You know, I rebounded into academia. I was an academic, I left, and I spent 15 years in business, both in technology and financial services. And that's where I realized how important this was. I worked in Europe, I worked here, I was head of global talent development, etc., and it always felt that coaching fell short as a tool, as a method. It was very effective with some, but even with all of the investments we were making at the top of the organization, I don't think that investment was optimized, because it was a little bit too little, too late. The challenge has always been: how do we make coaching available and accessible to different levels in the organization?

The other thing is, as most of the people in the audience know, it was primarily applied as a corrective tool. Coaching, you know, got a bad reputation. If you tell senior executives on Wall Street, where I worked, that they need to get a coach, that was like a curse. That means that next will be a performance improvement plan or something along those lines.

So I don't think that coaching has had the impact it was intended to make in organizations. And obviously, technology first became available to us through the platforms: you know, can we connect? But I remember in 2019, as I was doing my research about coaching and technology, the roots of it, there were discussions, whole conferences, around whether coaching would be effective on Skype. Remember that technology? Or on the phone. Could coaching be done over the phone? No way. There was a lot of resistance to that. And so there was that desire to really optimize what this particular method of learning could do, a method that we know goes back to the Greeks as probably, historically, the most effective: tutoring.

Parker: So we know it works. We know that spreading it, sharing it more widely, democratizing it would be helpful. We're also looking at the future. So where do you look to get a sense of where the future of coaching in the workplace might be?

Anna: Yes. So as I was looking around, sports, specifically professional sports, was where I started to do research. It started with data. I remember when Moneyball came out. And then when I looked at athletic performance, even the latest Olympics, there's no way those humans could get to where they got without coaching. There's no question. And it's interesting that we have this conversation here about trying to prove to our organizations that coaching works, but no one questions the effectiveness of coaching when it comes to sports. Even in little leagues, your basic soccer camp for your three-year-olds, no one is challenging the fact that all of those kids deserve to have a coach. So that barrier is down; it has already been established that you need a coach to get to any level of performance in sports. So the next generation of coaching: how do you get from the basic training where your parents are doing the coaching, to even mid-level, school-level coaching?

And that's where technology started to come in a long time ago. From video, to basic feedback on the speed of your swing, athletes were getting those kinds of data points early on. It doesn't mean that you take away the human coach, the pro in your golf game. But technology was accepted from the get-go as soon as those tools became available, and a significant amount of investment has been put into developing a whole ecosystem of startups. In fact, some of the clubs and professional associations started creating their own technology ecosystems to encourage innovation in the kinds of technologies that can accelerate coaching for professional athletes.

Parker: I think one of the words you used there was feedback.

Anna: Yes.

Parker: And so one of the things that's incredible about the technology is that it can codify that feedback, deliver it well, and if it's technology it can do that at scale. Where does that analogy sort of have a strong parallel with work and where is work a little different potentially than the sports world?

Anna: Everyone here on the stage who talked about how they apply these tools talked about performance and productivity, et cetera. And that's where the Venn diagram overlaps between athletic performance and performance on the job. At least in the broader definitions of performance, that's what it is.

So I think there is a lot of similarity in what we expect from either an athlete or an employee. And, you know, I've written and done a ton of research on performance management, and one of the main headaches and failures of current performance management systems is the inability to provide ongoing feedback. In fact, whether you look at an athlete or an employee, it's the frequency of the conversation with a manager around feedback on performance, just in time, in the flow of work, that makes a difference in the ultimate output of that particular employee. And that's where organizations started to manually mandate that managers have frequent conversations. We moved from once a year to quarterly. And there was a whole renaming revolution around, you know, check-ins and other types of language. Imagine that, just like with an athlete, you could provide feedback pretty much on a daily basis. Obviously, there's no human capacity for that from the manager's perspective. A lot of research has been done on managers, the role of managers, the importance of managers, and the conflict in the design of the manager's role: the manager has to both produce and give feedback. So it was really a catch-22 situation in companies around performance management.

So, imagine that you would have some way of providing that daily feedback that could be incremental and building up to the final performance appraisal, or whatever it's called in your organization, so it's actually built on pretty much almost daily feedback. And I think that that is the guarantee that, just like with any athlete, their performance is going to improve. 

Parker: I think one of the interesting things, as we explore the idea of AI, is that sports professionals can get feedback as they practice, so they'll do a lot of practice before they actually perform. Whereas managers and leaders, I mean, it's sort of constantly on the job and AI coaching can actually give them that feedback in an extraordinarily safe way in a practice environment so that they might feel the confidence and have the skills to put it in practice in the real world. 

Anna: And I really want to emphasize that, because we've done some research on using these feedback AI tools. People feel a lot more comfortable and psychologically safe in an environment where AI is providing this very basic feedback versus getting it from your manager. There's a status difference, and, you know, a manager is not necessarily always the best source, including the fact that managers don't have the capacity to do it. It's a much safer situation when you can get objective feedback on how you are performing from an AI system.

Parker: There's a quick story that I often share with CHROs. Maybe I'll do it here in this room. People say, well, maybe the manager should be the coach, so, you know, you should just be able to ask your manager. So let me ask a quick question. Quick poll here: how many people here have had a manager at some point in their career that they've had trouble working with? Hands up if that's the case. I can't see, but I can imagine. Okay, how many people here now have someone on their team who's having trouble working with their manager? It's a bit of a trick question. But yes, we all have. We don't have a perfectly safe environment, and so this AI coaching is actually, in some ways, way safer for a lot of people as a first draft of things.

You and I have talked about a couple of provocative ideas and I know that that's really interesting for people to sort of chew on. One of them is around skills. And you're not necessarily–I don't want to put words in your mouth–but the current path of skills might not be the one that you think is accurate. Can you share a little more about that? 

Anna: I think we all agree that skill is just a very basic, foundational element, a building block of what performance really represents. And those of us working in organizations, I think we've heard a lot about the context: you know, psychological safety, team dynamics. There are so many elements that contribute to that ultimate performance, because skill doesn't guarantee performance. There are so many different elements that need to come together for that perfect outcome we're looking for. But our ability to measure that ambient context has often been very limited, speaking about data.

Skills, yes, we can infer based on how fast you type; that's where it all started, et cetera. So I think we're going to graduate from skills, with the help of AI, and we'll be able to contextualize performance. And maybe we need new language. I think “skills” has its own baggage because we've been using it for so long. If we're able to have all of that wraparound context in addition to identifying a very specific, granular skill, we're going to get closer to identifying what it actually takes, if performance is our goal, which it is, to perform at the level of excellence and competency that a job requires.

Parker: I just think that's such an important insight because skills could be too one-dimensional. And as you say, a skill in one environment might produce a performance outcome and that same skill in a second environment might not produce a performance outcome. If we're in a world where we can understand that context, that is a second dimension to the single dimension of skills that is just so important to try to track. 

Anna: I want to bring in the sports analogy again. All of those Olympic athletes that we admire, they are not thinking about their skills when they are competing for the gold or the silver. They're thinking about mental acuity, they're thinking about visualization. There are so many different elements that they worry about; the skill itself has been automated by that point through practice. And I think we're going to see the same thing in the workplace, where a lot of skills are going to be automated through AI and delivered to us. So what will be required is this higher level: working at the top of the license. Working at the top of your human ability is where we will need to compete.

Parker: I just love that, because that ability to move up that scale, and to bring the best parts of who we are to the world of transformation that we're all going to experience, that's so important.

So, really appreciate the provocations for the audience and thank you for joining us today, Anna.

Game-Changing Coaching

Author of The Digital Coaching Revolution, Dr. Anna Tavis believes HR professionals should look to performance athletes and their coaches to understand the power of a work coach to drive performance.

Dr. Anna Tavis

Chair of the Human Capital Management Department, NYU


AI Coaching Early Use Cases with Delta, Experian, and ADI

AI will change every workplace, but where does that change start? In this panel from Valence's AI & the Workforce Summit, HR leaders Tim Gregory (Managing Director, HR Innovation & Tech at Delta), Lesley Wilkinson (Chief Talent Officer at Experian), and Jennifer Carpenter (Global Head of Talent at Analog Devices) reveal how leading organizations are testing and experimenting with AI, the most valuable starting use cases, and how to expand from initial use cases to effective AI at scale.

Tim Gregory

Managing Director, HR Innovation & Tech, Delta

Lesley Wilkinson

Chief Talent Officer, Experian

Jennifer Carpenter

Global Head of Talent, Analog Devices

Key Points

Parker: Terrific. I think Tim is somewhere and will be joining. We're going to be talking about AI coaching. I think, you know, we'll start with the coaching bit and the investment in managers and why it is so important to make investments in managers. We'll just do a quick go around to understand how you and your companies are thinking about that use case.

Lesley, I will start with you because, as far as I know, you have the most exciting news, fresh off the press. Do you want to share with the room what that news is? I think it fits in nicely with this coaching thing.

Lesley: Thank you for giving me the opportunity to say it out loud. I've been saying it out loud all morning. We just found out this morning that we're in the top 25 great places to work in the world. So, the top 25, just found that out.

It feels good because our philosophy is about people first. It feels like the right reward for being a people-first organization. So if your question is about why would we invest in something like coaching, I guess it's just because we know the potential that coaching has to unlock performance. It unlocks performance and it unlocks human potential. Why not give that to everybody? Our job is to unlock performance and potential and yet, we've given it, historically, to such small groups of people, it's been cost-restrictive, it's been time-restrictive, and it's been based on that really messy human relationship thing. So if you can find another way of doing that, then why not? 

Parker: Jennifer? 

Jennifer: Yes, hi. Congratulations. 

Lesley: Thank you.

Jennifer: I think, just to build off what you just said: when we built the business case at Analog, I immediately went to equipping managers. Then I thought to myself, am I doing an actual disservice by thinking in that narrow a business case? Because when you look at the representation of our management, they're heavily based in North America, largely a male audience, et cetera. So we widened our business case to 40% managers, 60% individual contributors. I'm very happy to say the invitee list had 50-50 gender representation. So I think it's really important. We're going to study not only the adoption patterns, and I have a little more data to share about those, but also longitudinally understand how performance is being impacted, and so forth. Something I would say to all the talent leaders in the room: when you are thinking about your early adoption use cases, make sure you're not taking too narrow a view. Democratize access to these types of products, because we believe very wholeheartedly that it will ultimately help improve individuals' performance and the performance of the company. I think we all have a really deep responsibility to think very broadly about how we are inviting people to be early adopters of these things.

Parker: Tim, how about you at Delta? What's the business case for investing in leaders and managers? 

Tim: For us, the front line, where employees engage with our customers, that's where it's all won. We put a great deal of effort into making sure that those experiences are phenomenal. We want to make certain that the employee experience ultimately drives the customer experience. So for us, it's really self-evident. We've got to focus on making sure that those managers and our frontline leaders are delivering the moments that delight our customers.

Parker: Wonderful. Jennifer, you briefly mentioned your pilot. I know that you're just in the design phases, but you've done a really thoughtful job, as you said, of deciding who the right people are, and you want to set that up so you can measure ROI, uptake, usage, feedback. Can you share a little bit more with the room today about how you designed that?

Jennifer: Sure. And I'll tell you exactly how I was thinking about it. Did your mother ever tell you when you make an assumption, it makes an ass out of you and me?  So I thought we can't make assumptions. So instead of being an ass, why don't we ask? We really were trying to be thoughtful and I did make a mistake, I did make an ass out of myself. In some of our early pilots, we just gave the generative AI tools to people. And crazy, some people didn't want it. There was no parade being thrown for the generative AI product that we just kind of shoved down their throats.

With Nadia, what we're doing instead–and I really recommend anybody do this–is you invite with a very delightful proposition and then you allow people to opt in and opt out. What we found is we had 1600 people opt in, it's about half of who we invited. Of that, about half responded, about 81% said they’d love to, about 18%, 19% said they did not. But I now know why they did not. 

They also said the reason they opted out is they didn't have time. Now, I don't know if I believe them in that because I also kind of embedded another research question in the mix. Only 40% of them agreed that they're confident in their ability to use AI tools. So, do they not have time? Or are they more reticent because they're not sure? That tells me a lot about adoption. There's going to be people who are more a passenger than a pilot on the plane. Who are those passengers and how do we give them both the optimism that it can improve productivity and quality and the agency that they can do this, that they can be effective. 

Parker: Was there a difference in the percent who were confident in their use of AI for those who accepted? 

Jennifer: Yeah, it was crazy. Get a load of this. Of the people who opted in, 75% of them agreed with the statement that I have confidence that I can use this thing. The people who opted out only agreed that they have confidence in their ability 40% of the time.

Parker: That's quite the jump. So it's probably a pretty good explanatory variable, with time being the excuse. I think this is what we're going to see, story after story: not everyone is going to adopt it with the same level of enthusiasm, and the thoughtfulness with which we design these rollout programs is crucial.

Lesley, you've had a chance now, I think you're in month six or month seven of the rollout of Nadia in Experian. Can you share a little bit more about how you designed–I think–a very thoughtful, curated experience for people who are using her as a coach? 

Lesley: Actually, it's a bit longer than that, Parker. We're almost coming up to a year of our first experiment. We roll everything out through a series of experiments. That's the way we design products, that's the way we design HR products as well. So it's really natural to just go for a series of experiments. Our first experiment was, I'm going to call it “Vanilla Nadia.” Which was, putting out there the coaching solution. We had 100% take-up from the first experiment group within the first three hours of extending the invitation. It's the dream of us learning and development people, it's never happened before. But less repeat take-up. 

So we were wondering why. And one of the problem statements was about: does it feel relevant to me and my organization, my job? So, experiment two, we trained Nadia on our own characteristics of leadership, on our own philosophies and leadership content. Then we started to see the repeat use as well as the immediate take-up.

So, experiment three, we have our engagement surveys, what happens when our engagement surveys come out, how do our managers get to use that? A shout out for my brilliant colleague, Brad, who's in the audience, who runs our engagement surveys and said, you know, we need Nadia to sit with a manager and explain, and talk, and coach on: what are the next steps?

Then we did the same with performance management. So, can you imagine, you're about to have a performance conversation. You can role play it, you can role play it verbally. That was the next experiment. That's when we started to get really strong ROI. One of the main measures of this is: do we see an increase in leader effectiveness on Great Place to Work? And yes, the answer is there's a 5% uptick in leadership effectiveness for the people who use Nadia versus those who don't. Our last experiment is a program offering for all middle managers, covering the kind of leadership work you need to do on a day-to-day basis. The whole program will be based on Nadia.

Parker: I want to double click on that in a moment. But, Tim, I know we're earlier on in our journey, but one of the things that has come up in the conversations you and I have had is that you're looking at a range of technology options. You're looking at internal, external, current vendors, new potential partners. So you scanned the landscape. What was it that caught your eye about Nadia as Delta began their initial trial and now deployment?

Tim: Sure. I think, as Bill mentioned very early on, there's only so much capital to go around, and you need to be able to create a competitive position and explain the value of AI. So I can share with your audience how we got our leaders to understand this, and then where Nadia and the Valence tool fit into our model.

It started with having to explain this ML/AI thing, like you did at the very beginning there. It was like, hey, Tim, it didn't seem that long ago we were describing everything as ML and AI. Now, it's just AI. What happened to the whole ML thing, and how is this different? At Delta, we've been using machine learning for quite a long time, for fuel management, even for weather prediction. In that case, imagine an X, Y graph with a slope on it, and you've got a couple of points on it. You can change the value of those points and change the slope to find a better prediction, and you can explain why the prediction changed: you moved it this percentage and the prediction quality got better. And you can update that information in real time with the Internet of Things.
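The explainable, pre-generative ML Tim describes is essentially a curve fit: nudge a slope on an X, Y graph until the prediction improves, and you can point to exactly why each adjustment helped. A minimal, illustrative sketch of that idea in Python (a hypothetical example, not Delta's actual fuel or weather models):

```python
def fit_slope(xs, ys, lr=0.01, steps=1000):
    """Fit y ~ m * x by nudging the slope m to reduce squared error."""
    m = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of mean squared error with respect to the slope m
        grad = sum(2 * (m * x - y) * x for x, y in zip(xs, ys)) / n
        m -= lr * grad  # move the slope toward a better prediction
    return m

# Points roughly on the line y = 3x: the fitted slope lands near 3,
# and every update is explainable as "the slope moved to cut the error."
slope = fit_slope([1, 2, 3, 4], [3.1, 5.9, 9.2, 11.8])
```

Because the whole model is one interpretable parameter, any change in its predictions can be traced to how the slope moved, which is the explainability Tim contrasts with the neural networks behind generative AI.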

So we've been doing that sort of thing for a very long time. Generative AI is fundamentally different: it uses these neural networks that can learn in ways that make explainability a little bit difficult sometimes. The first thing was just to explain that, much as you did for your audience here. Then we took this idea of learning and its ability to learn. You know, Lesley, as you mentioned, grounding it in your own information, we went down a similar path. But we explained the process: we created four categories of generative AI use case scenarios. The first one was focused on things you could do very quickly to add value. You could say, hey, look, we're doing some stuff in generative AI, but it doesn't know you, it doesn't really understand the organization.

Then in the fourth category, you have stuff that knows you very, very well; we'll talk about that in a second. In the first category, we found there were lots of our existing vendors where you could flip a switch, configure it, and turn it on in your existing environment. You could check the box and say you've done something in AI. But the value of it is questionable: it doesn't really know your enterprise, it doesn't understand the context of your business. Really, what's happening in a lot of those applications is they're just making calls out to these big frontier models that have been trained on Reddit and so forth. You can do them very quickly and very easily oftentimes, but the value is minimal.

The next category is where you can actually ground it in the information of your organization. You can do it relatively quickly. The work that we've done with Valence, I think it was a couple months, maybe, from the very first time we had a conversation to the point we had employees in there using it and evaluating it. We put them through Qualtrics so they could evaluate the experience, and so forth. But this category is where it starts to get grounded in the information of your organization. 

The next category is, really, more sophisticated. We're really doing some of those things internally right now, where the model is actually being trained on the data. So we're using Delta IT and we're building some things and there's some opportunities for us to work together down the road. I know you guys got a great direction in your head in that space as well. Then the final category is stuff that we don't really have anything in anywhere in Delta right now, but the most cutting-edge technology that you're seeing with these reasoning models and compound AI, where you have smaller models working together to solve big problems at the enterprise level with an agentic, agent-based technology and all those sorts of things.

That is an extraordinary realm, particularly as the cost of silicon goes down and the compute value gets better and better. So that's how we set it up. We said: here's classic ML, and over here is generative AI. Here are the categories, and Valence sits right in there: great value, quick to implement, grounded in your enterprise context. It made it real simple for us.

Parker: One of the themes that I think I'm hearing from the three of you and from other conversations is that there will be a multiplicity of solutions. There is no one solution. It's important, I think as Bill said, to have a portfolio of options. Jennifer and Lesley, I know both of you have really thought about how Nadia as a solution will coexist with Microsoft Copilot.

Jennifer, you've got some data that you've already found in the early days. Can you share more about how you shared Copilot, the uptake there, the use case, and then how you see the link with AI coaching?

Jennifer: Sure. Parker and I have also talked about this: oh, Jennifer, you've been using generative AI for so long. Well, "so long": we're here in November, and just to remind everybody, November 30th, 2022, was when ChatGPT was released. A few days after that, I got in there and it helped me write my team's performance reviews while I was at IBM. Then, on November 6th, 2023, OpenAI released the ability for people like us to create GPTs, our own little agents.

So a year after that, I created a GPT to write performance reviews and I shared it with my executive leadership team. I said, hey, just for us, try it out, what did you think? The feedback they gave me was you're too nice. Your feedback assistant is too glowing in their language, funny enough.

Now, this past October, so that's three cycles, in three weeks I worked with our teams to create a Copilot Studio GPT that we put into Workday. Now, think about that hockey stick Brent was talking about earlier today: a use case of one, just helping me; a use case of maybe six people, just helping six people; and then, in that period of time, we had 22,000 engagements with employees writing their self-input and setting goals. That was in the last month. Interestingly enough, because, remember, I never make an ass of myself, I always ask, I asked the people who were using the writing assistant: think about how long it would normally take you to sit down and do the dreaded task of reflecting on your annual contributions and setting your goals. How much time did this thing that we just produced in three weeks save you? I was floored that 45% of the people said it saved them about half the time. They got it done in half the time. I was even more surprised that 3% said it saved them 75% of the time.

But you asked me how this relates back to Nadia. I can track that, because those people sampled a writing assistant and began to use it, they're now some of the highest in adoption and opt-in of Nadia, just a few weeks later. So my hypothesis is that the more we introduce these experiments, and the more optimistic and confident people get in their ability to use these things and to see value, the more willing they're going to be to try and reap the benefits that we know Nadia can provide. But we have to get over this mindset issue.

You know, I love the comments that come through, because I don't just ask statistical questions; I ask, what else do you want to know? One person who doesn't want to use a writing assistant or a coach said they don't like the BS bingo generator of AI. Another person doesn't like Nadia because it's her ex-husband's girlfriend's name. So I get all kinds of insights from the comments. You have to listen and really understand what is getting in people's way. But I'm just thrilled to say that as we do these little micro experiments, we're, one, helping people save time and improving quality, and two, getting them more ready for what's next. Because I think that's what all of us, as talent leaders, are responsible for doing: preparing people and, in some ways, protecting them as well. We're building up this muscle, and all these little experiments are helping us get them into shape, whether they want to go to the gym or not. We're helping them do the reps.

Parker: And it's so crucial, given the exponential speed that you talked about, to get them started now, because at some point it might just become too much, too overwhelming, to get started.

Lesley, I know that, you know, when Nadia was rolled out, it was pre-Copilot, but now Copilot coexists with Nadia. What are some of the lessons that you've drawn from that experience? 

Lesley: Well, actually, I'm just going to say, Jennifer, that's such an interesting example. So maybe phase four of that example is Nadia role playing performance conversations, which is what we're experimenting with now. I'm a big fan of Microsoft Copilot, and it works fantastically for the kind of example you've just given, but it can't, right now, role play and give feedback. Unleashing that power, even though at the moment we've only given it to leaders, has really changed the quality of a performance conversation. The next step is to give it to all employees. So I think Jennifer's is just a great example, actually.

Parker: Tim, you've got a huge frontline workforce, and this is a frontline workforce that is probably not as familiar with technology as the HQ workers. How does Delta think about ensuring that both parts of the workforce have access to technology that could be transformative?

Tim: All of the technology that we develop, we really start there, engaging where employees can connect back to the organization; so, obviously, with the frontline, it's mobile technology. We're very focused on making certain that their voices are heard, particularly in the early stages when we're designing these solutions.

I mentioned earlier, when we did the pilot with y'all, we had seen... I said “y'all,” I'm actually from New York. I moved down there six years ago and now I've got the “you all” thing. But they are included in the early design phases. We implemented Qualtrics for all the measurement pieces, so we got to hear all the verbatims and then fine-tune and improve it more. Really, 90% of our entire workforce is out there engaging with customers. So it really is at the heart of all that we do.

Parker: And Jennifer, you've been using AI personally for a while, and you've experienced some of the change in capabilities. If you picture yourself and Analog Devices maybe 12 months in the future, what are some of the hopeful use cases that you could imagine emerging?

Jennifer: When I think about the outcomes, the goals, let's set the intention. One outcome I hope for is that we see a direct correlation to increased business performance and revenue growth as a result of this. Because, you know, when you think about this passenger-versus-pilot mindset, one study done by Jeff Hancock from Stanford's Social Media Lab, together with BetterUp Labs, looked across a population of 10,000 workers in 18 industries, and found that people with a pilot mindset, you know, high optimism, high agency, were 3.6 times more productive. Now, I've got to trust Stanford, they know what they're doing, though I do wonder how they calculated that productivity measurement; but let's say it's more right than wrong. If we can, 12 months from now, help people get into that upper-right-hand quadrant of the pilot mindset, productivity increases, innovation is unlocked, and collaboration with AI will also foster creativity with their colleagues. And I have to believe that that's going to improve revenue growth, because it's going to improve performance and hopefully engagement, and make us a best place to work. So I think if we really harness this right, we're going to get people prepared, we're going to make more pilots, and we're going to be more profitable.

Parker: Lesley, the last example that you shared was around the integration into a leadership program, and I think that's a new use case where we've only really worked with you and Brad and Sophie and many people closely on this. Can you share more about the idea behind it and the early stages of the genesis so far?

Lesley: Yeah, actually, and I've only just realized this in that last question you asked and Jennifer's response, actually, on the problem statement. And I think one of the opportunities, the problem statement is about the problem of humanness and rather than worry about how AI can cause issues to take away our humanness, how instead does it help us solve the problems that come with humanness?

What has been making me think is that, you know, the biases that we all carry around with us in the people processes, in our people decision-making, it all comes together when we have conversations with managers. How many times as HR professionals have we said, or have I said, "Oh, if only the leaders would…"? You know, this is all about the leaders and their relationships. Well, does AI have a space in there to intervene in that relationship? Or to actually take away some of the bias in that human-to-human relationship? We're running a big leadership program next week, which is still face-to-face, so that ability to have retreats is still important. And on the plane over, I was going back and rereading Thinking, Fast and Slow, so it's really got me thinking about this first-order thinking and how do we get the hell out of that.

And that, I think, is what we're trying to do in the leadership program, with Nadia. We're trying to replace the problematic bit of humanness with a machine and allow space for human-to-human conversation between people and with their managers. So Nadia will take over some of the basic tooling and some of the space for reflection and then leaders will do the rest.

Parker: I mean, this idea of AI being closer to being a human, I mean, Diane mentioned it. Thinking, Fast and Slow, interesting that you brought that up. It was one of the pieces that I was able to talk to Geoffrey Hinton about: how can we take that analogy and apply it to LLMs, and how do we help them think slow? Because it's hard for us to, but they're able to do so with, you know, the size and the scale.

Tim, you've talked about four different dimensions, and, you know, I saw your eyes light up at that fourth dimension, the sort of fourth pillar of what's possible. What are some of the steps that Delta is taking to do the kind of experiments that we've been talking about today, and something that also feels pretty high-stakes?

Tim: Sure. We're really preparing for the future in two ways. One, as we all know in the digital age, data is the oil of the digital age. We've heard that for quite a long time now. I think there's a new dimension to that, with the age of AI, and that's a reward signal. That's another really, really important thing. The machines as they're initially trained on the data will present a prediction. They need to be aligned with the culture of your company, and so on. And that's really where these signals come in. So to the degree that in our case, a hiring manager is engaging with a prediction on what a particular skill may need to be for a particular job, they can evaluate that. They can give it four stars, thumbs up, thumbs down, or maybe the initial prediction had a hundred words in it and they changed 50 of them. All of that is a signal that is really, really important to improving the quality of the prediction down the road. 

So what we're doing now to prepare for the future is really focusing on that data. So we're building some applications that will capture the initial prediction the machine made, store it, take the next value, the preferred value that the human provided, put that into a relational database as well, and generate transactions. So that, ultimately, those input/output pairs, those comparatives, can be used to train the machines. 
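Tim's description amounts to logging each machine prediction alongside the human-corrected value and a rating, so the comparative pairs can later be used for training. A minimal sketch of that kind of relational capture, with an illustrative schema and example data (none of it Delta's actual system), might look like:

```python
import sqlite3

# Hypothetical sketch of a feedback-signal store. Table and field names
# are illustrative only, not an actual production schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE feedback_signal (
        id INTEGER PRIMARY KEY,
        job_context TEXT,          -- input the model saw
        model_prediction TEXT,     -- initial machine output
        human_preferred TEXT,      -- value after the human's edits
        rating INTEGER             -- e.g. star rating or thumbs up/down
    )
""")

def record_signal(job_context, model_prediction, human_preferred, rating):
    """Store one prediction/preference pair for later training use."""
    conn.execute(
        "INSERT INTO feedback_signal "
        "(job_context, model_prediction, human_preferred, rating) "
        "VALUES (?, ?, ?, ?)",
        (job_context, model_prediction, human_preferred, rating),
    )
    conn.commit()

# Example: a hiring manager revises a predicted skill requirement.
record_signal(
    job_context="Ramp Agent - ATL",
    model_prediction="Forklift certification required",
    human_preferred="Forklift certification preferred; tug experience required",
    rating=3,
)

# The stored input/output comparatives can be exported as preference data.
pairs = conn.execute(
    "SELECT model_prediction, human_preferred FROM feedback_signal"
).fetchall()
```

The key design point is keeping both values, the initial prediction and the human's preferred version, so each row is a comparative pair rather than just a final answer.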

Everything we're building, we're really thinking about how we can keep that signal. We think it's going to be a strong competitive advantage over time. This technology is unlike anything else we've seen before. Traditionally, you purchase it, and then five to six years later, it's depreciated and sort of its value does this; I think Brent showed that sort of hockey stick, and it nearly goes vertical with some of these things. As long as you're capturing signal and you've got good-quality data, the value of these assets is going to be extraordinary. I don't think I'd be going too far out on a limb to think at some point you'll see, even from a Wall Street perspective, people evaluating the quality of these capabilities that you have within your organization. How do you value these assets? Like, in our case, we're building things that know all the skills that we need to run our business. What is the value of that as an asset, as opposed to what you traditionally have with a lot of technology? So, brave new world coming up. But to answer your question, it's really focusing on the data, making sure that we're capturing signal, making sure people understand the value of that, to prepare for the future.

Parker: Wonderful. Jennifer, you talked two years ago, I mean, you were an early adopter 1, 6, 22,000. What are the experiments that you're running right now that are sort of in the 1, 6, 12 category that might explode a year from now? 

Jennifer: We're focusing on things that are fairly turnkey, I would argue. So, you talked about Copilot. We have a mini experiment, about 300 people, that we've been studying very closely on Copilot 365. That's going to ramp to most of our 14,000 employees, I would think, within the next year. That's really just focusing in on core productivity and early adoption. I think getting people accustomed and climbing that, you know, that optimism and agency ladder, so to speak.

We also have a growing software and digital capability, AI capability, within ADI. So we're introducing, obviously, GitHub Copilot and other coding instances and things like that. That's really going to, I think, drive enormous value. So, in the next 12 months, it's around upskilling and readiness and listening, because we always have to adapt. And I think Nadia is going to come in at kind of the softer edge: it's not just about productivity, it's about your performance. It's a little more personable, perhaps, than maybe driving the productivity dimension so hard.

So I'm really excited to see in, I think it was, the first five days we got 1,600 people to opt in. We do have plans to continue to scale out Nadia even beyond the 1,600. But we want to study and learn so that we're going to make it even better for people for the next wave. 

Parker: I mean, it's funny we hear Diane Gherson talking about, you know, the delight, and we want to make Nadia delightful, and we want to make it a great experience, but we also want to make it a Trojan horse for productivity. We want to help people be better and faster at their job.

Jennifer: Well, people want to be better and faster. 

Parker: They want to be better. They want that. That's what they're excited about. Lesley, what about for you? Are there any experiments? You and I have a lot of chats about sort of the agile mindset in HR and it's obviously working with the results that you're getting and the recognition. What are some of the experiments that are most exciting for you these days? 

Lesley: As an organization, we've been using AI for the last couple of years. So, for new product development, it's super exciting for driving financial inclusion and equal access to products, that gets me really excited. In the people space, for productivity, for the front end of our systems, and in our leadership offerings, we built something called the Leader Exchange, which is our own program, which has AI built into it for search and dashboarding. But it's this space, really, I think it's in the space that Nadia can do that other forms of AI can't do right now, which is to replicate the human dynamic, but remove some of those problematic areas of human dynamic.

I've been thinking a lot about the hockey stick. This is going to be, I think, a much shorter period of time, before we get up into the turn. And then the thing we're really working hard at is how do we create the conditions in the organization and the skills in our people in HR and in the wider organization to constantly see what might be coming around the corner, to understand how we quickly respond. Paying as much attention to that skill set as to the actual experiment on the technology that comes at us. 

Parker: And I think that mindset is just so crucial. It's sort of: try to look around the corner, try to pay attention at, you know, every level of the organization. And then I just really love the, you know, the reinforcement of the idea of experiments. I mean, that's how we are going to learn in a world that's changing so quickly. Preferably measurable experiments, as we've all talked about. And it's been wonderful to have you share your experiences with the group today.

So, thank you all. Thank you for flying from, you know, none of these are New Yorkers today. They might have New York roots at some point, but they've all traveled to get here, so thanks, Tim. Thanks, Jennifer. Thanks, Lesley.

AI Coaching Early Use Cases with Delta, Experian, and ADI

AI will change every workplace, but where does that change start? In this panel from Valence's AI & the Workforce Summit, HR leaders Tim Gregory (Managing Director, HR Innovation & Tech at Delta), Lesley Wilkinson (Chief Talent Officer at Experian), and Jennifer Carpenter (Global Head of Talent at Analog Devices) reveal how leading organizations are testing and experimenting with AI, the most valuable starting use cases, and how to expand from initial use cases to effective AI at scale.

Tim Gregory

Managing Director, HR Innovation & Tech, Delta

Lesley Wilkinson

Chief Talent Officer, Experian

Jennifer Carpenter

Global Head of Talent, Analog Devices

Change Management for AI with Citigroup, IBM, and Novartis

No organization is a monolith, and within any company, there are AI enthusiasts and early adopters as well as resisters. How do you bring the entire organization along and shift the most resistant employees from fear to curiosity? HR Leaders Cameron Hedrick (Head of Learning & Culture at Citigroup), Diane Gherson (former CHRO at IBM), and Lisa Naylor (Global Head of Leadership Development at Novartis) share their insights on getting global enterprises ready for the AI era.

Cameron Hedrick

Head of Learning & Culture, Citigroup

Diane Gherson

former CHRO, IBM

Lisa Naylor

Global Head of Leadership Development, Novartis


Das Rush: So, Cameron has led talent initiatives at Citi for more than two decades. Currently, he's the head of learning and culture, where he leads the learning and culture teams, including overseeing their deployment of Valence's AI coach, Nadia. Lisa is the global head of leadership development at Novartis and has been an HR leader for 15-plus years now.

Okay, we're not counting, but if we were, maybe. And focused largely in healthcare, and you've had previous roles at AstraZeneca and Siemens Healthcare. And Diane is the former CHRO at IBM and first started using AI in 2011 to predict attrition, and that tool went on to get—or, sorry, not to predict attrition, as a replacement for HR business partners.

And that was a tool that went on to get 1.65 million inquiries a year. And we'll talk a little bit more—there's some even more impressive stats on that. So we're going to get into that, but today, you're a professor of business leadership at Harvard Business School, a board member, and senior advisor to a number of companies, including Kraft Heinz.

So, with those intros, welcome. Thank you for being here. And to kick off, I actually just want to do kind of a quick round robin, because change management can be this term that gets thrown around and can be kind of vague sometimes. So I'd just like to hear a little bit from each of you, like, when you think of change management and AI, what does that actually mean to you, and what do you see as the role for an HR leader?

Cameron, would you want to start and then we'll work our way over here? 

Cameron Hedrick: Sure. Change management, HR, as it relates to AI, I think, first of all, it's contextual about what company you work in, because we all have equal access to most of these technologies in our personal lives, but we do not have equal access to it based on the company in which you work.

So I work at Citi. We are, understandably, cautious about how this works its way in and what we're doing. And so there is a competition of sorts between your outside life and your inside-the-firm life that we're trying to manage. But the short answer to the question is, first of all, we are trying to help people understand what it means generally, in the universe, and how to negotiate what it might mean for your work, because people are understandably fearful, somewhat skeptical.

So we're talking about that proactively. And we're just moving these technologies into the firm very carefully in very limited ways. So, we're right at the beginning of our journey of adopting this inside the firm. 

Das: And, Lisa, how about you? 

Lisa Naylor: Well, I mean, maybe I'll say a controversial thing so we can have, like, a debate-y panel, but I mean, I think for me, it's no different than other kinds of change management.

I mean, I really think that when we think about this work, we need to understand what's valuable to people. You know, as practitioners, we can roll out the thing, we can make it available, but making it valuable, value add, is a whole different story. And I think for us, for change management, we really have to think about what is the value for people?

What are they going to do with it? What is it going to be for their work and for them? And then we have to figure out how to bring that to life. 

Das: Wonderfully said. And, Diane, how about you? How are you thinking about change management and AI?

Diane Gherson: I'm going to be controversial too. 

Das: Let's do it. Okay. 

Diane: I think change management's dead. Okay? I think it's out of date. And I think we've learned that through a number of different changes we've just been through. Like, for example, return to work, right? So making a change and expecting to sort of gloss it up and make it good for people afterward isn't cutting it anymore, right? So, I think that the idea that really is working is creating a movement inside the company, around the change you want to make.

And that's certainly what we did at IBM. And we started by just attacking the things that bugged people the most in their employee experience and made it delightful using AI, right? And so, you know, if you were applying for time off, for example, you wouldn't have to go to a website and click on things and, you know, have to remember how many days you had left.

I mean, it would all be there for you, answering your questions, and so forth. Or if you wanted, if you were going to get a mortgage and you needed an employee verification letter, which was awful—sometimes the mortgage rate would go up by the time you got your approvals.

It was all done for you. Do you want us to write the cover letter for you? Blah, blah, blah. I mean, it all did it for you, right? So, it was done in three minutes. And so, that kind of thing made people not just curious, but excited about the promise of AI. So, I think you start with that movement. Then, you know, you've got to deal with issues of concern, like, for example, privacy.

I mean, we were getting a lot of employee data through our digital channels. And learning about engagement, you know, all of those kinds of things. Hotspots, they're useful, but also snooping. So, we had to come clean, right? And we had to explain, here's what we're going to look at, here's what we're not going to look at.

We're not going to look at email. But, you know, if it's an open Slack channel, all hands are in. And we're not looking at people individually, we're looking at trends. We're looking at highs and lows. So we explained what we were doing with their information, which is important to build trust.

The third thing is, we started with areas where we were upping the game of people as opposed to taking their work away, right? And of course, gen AI is just fantastic for that. The study that Brent mentioned, the Harvard study with BCG and others, have proven that there are some areas where it's just incredible, right?

So let's take customer service. 19% improvement in efficiency, but AI is more empathetic than humans. Why? Because we heard from Parker earlier about the amygdala. We get amygdala hijacks when someone's really rude to us, right? No matter who we are, we do. But fortunately, that doesn't happen to gen AI, right?

They can reframe the situation very calmly and deal with that irate customer. Well, that's great. If I'm a customer service agent and I'm being yelled at all day, right? So it actually improves my productivity in a way that actually makes me a little more sane, right? So, again, delighting people with improved productivity is really important.

And then, of course there are areas we've talked about, what Brent talked about, the Venn diagram. But there are areas where you're going to replace people's work. And there, it's really important to involve people. They are the experts on their work. And getting them to feel like, we're going to upskill you, if you want to be upskilled, if not, okay, the job's going to change.

But instead of making people feel like they own their job, having them feel like they own their skills. If you feel like you own your skills, then you want to keep upping them, right? And the job will change, the workflow will change. So, I would say those are sort of the four areas of work around that wheel of change that we did at IBM, and it was really quite effective.

Das: It's wonderful, because we've teased out some tensions, but I actually, at least for me, I'm hearing like a lot more agreement, and there's this theme that I feel like is coming out of, fundamentally, change management is taking something that is maybe a pain point to the organization, to employees, and turning into something that's more delightful.

Or, I think the way Brent kind of put it, of like, shaping the future we want, and change management is maybe that process by which we do it. I want to kind of go now, because Cameron and Lisa, I know that you both have some specific AI initiatives you are rolling out, and that, I think there's this tension you have to navigate between the parts of the organization that are kind of resisting.

Whether that's over, you know, security concerns or just employee resistance. And then there's the parts pulling you forward, so I'd love to just hear a little bit about what are your AI initiatives? And what are the things that are going well? And what are those real blockers that you're having to work through?

Cameron: Sure, so just a couple of examples of the initiatives. We've got a generative AI pilot related to writing performance reviews, which language models do exceptionally well. We are using, well, we're talking with Valence to get this product in as well. We are using language models to look into cultural matters and cultural measurements in ways that have heretofore been unavailable to us. So those are some examples of the initiatives.

But as you said, the headwinds are manifold, some of which you've talked about. There are privacy concerns. There are security concerns, and the list is very long. And it's easy, for me, to be very frustrated with this, but if you work at Citi and you make a mistake in these areas, info security, for example, the world's economy comes grinding to a halt. The consequences are not small.

So we are just treading that middle ground as best we can, right on the knife edge. Because there's true competitive advantage that comes with it as well. But we're not alone in finance trying to navigate that. So those are a couple of examples of the headwinds and the tailwinds.

Lisa: Yeah, I mean, obviously, we put AI in lots of different places, I think, at Novartis, when we look at some of the work we need to do all the way from the science. But, for today, if I just kind of hone in on the things that we're doing with our people and some of this work, we have, I think, the difference between the AI that people know about, the AI that's in the background, and then the AI that they kind of feel.

And so we have various tools that sit in the background, which is really kind of looking at how you engage some of the things that we do with global products and match and other things where it's really helping people think through their careers and their development and grab skills and really curate the way that they engage in a lot of different things, which is where we see a lot of value. 

We are a very large company, and it's easy for people to not understand how to navigate the system. And we find that there are methods in use for AI where it helps people just kind of grab and gather data in ways that maybe they didn't think about.

Really specifically, we're leaning into some work that we're doing with leaders. I mean, if you go around and ask people, are you a great leader? A lot of them are going to go, “of course I am,” but is the reality, are they? I'm not sure, in all of the moments. I mean, I'm very sure, but I'm being kind. So I think, you know, the thing for us is, we're putting tools in to make the work of your team more discussable.

So we have some partnership with Valence where we're really thinking about how do you have people with insight into who they are? How do we then bring that insight into your team? And then now we're starting to lean into things like Nadia and finding people in the moments they need it most. And I think this is kind of that different trigger when we talk about the use and the application of AI.

Do people go out and find it when they need it? Or how do we bring it to them in the moments that they're already challenged with and then really enhance the use? And so this has been the place for us. We've almost gotten kind of a viral-level pickup of some of these tools because people are realizing, “gosh, it was a little bit hard for me to figure out on my own, but with just some of these more discussable, accessible ways of thinking, then now I'm more interested in it.”

And, you know, you get like one interested friend, and then they tell another and really kind of picks up in a way to help our leaders. And maybe in a way that, if we had asked them a few months ago, they wouldn't have articulated they even needed it on their own. 

Cameron: Just one other thing that'll be interesting for this audience, and it builds on your first question and involving people in the journey. But one of the things we're exploring is we're very interested in understanding the skill profile of each individual. Do they have it and how good is it? And everybody in this room has wanted that, and you've asked for it, and you've tried to have them explain it to you, and that is like pushing a rock up a hill. 

But we have the opportunity now to do this passively, to infer these types of things with much greater accuracy and much greater dynamism. And that is going to be terrific, because now we can manage our skill portfolio just like I manage a real estate portfolio or any other asset that we have.

But that is a scary thing for some people, like, what are you going to be using that for? How are you getting that? Is that going to be a performance management thing? So that's the next frontier that we're really thinking a lot about too. 

Diane: That's actually something we did at IBM really early on. We used AI to infer people's skills from their digital footprint.

And we did that around, I want to say, 2011, 2012, and at first we were afraid to ask for validation. We were using sort of skills councils and what does “best” look like, and all that kind of thing. But, actually, over time, we realized that the only thing you can do is ask for validation, and it's actually now up to 96%.

And people are positive about it, but we were scared at first. We thought, “oh my god, what if we got it wrong, and what if we're going to get into endless debates?” It's a whole lot better than the previous version, which, of course, was all manual. But it is really important, and I think it's so important that I actually sit on the board of one of the companies that does skills inference, TechWolf, because I just think it's critical to this whole wave of introducing AI, and there's been, I think—a few companies have tried and not succeeded at doing it.

So it is, I think, just foundational. I totally agree with you, Cameron. 

Lisa: Yeah, I do think there's this, and maybe it goes to where we started on these elements of change management: when we come and we talk to people about the thing, right, here's the thing you're going to have, but they're not sure what the value is. For me, I think this is the place we've seen this unlock: when we can help people understand, hey, when we asked you, how's development going, or what does your career look like, and they're really struggling, we then talk about the outcome instead of the moment, right? So I think that's really where we find the value.

And we're talking to the business and we're talking about next stages. And should we do this and why? We're not talking about, it's cool or everybody else is doing it. What we're talking about is, this is a unique challenge and this is where we think it's going to accelerate something for us, right?

We heard this at the start of our time today, engagement and people and value and belonging and connection. All of those things are kind of opened up with some of these opportunities that we've seen with AI and the tools and the partnerships they create in the organization. Look, I mean, I always say, when I walk in the grocery store and they change it all up, I'm super annoyed that day because I don't know where anything is and it feels uncomfortable and it's new, and then two weeks later I don't notice anymore, right?

And we have some of that too, is this just kind of, how do we get past the initial “newness” or the worry and kind of drive people to the next levels of understanding, and I think that's really the trick we've seen. 

Das: Yeah, I think this is a great frame, and we sort of already in this kind of first half here, started to tease out, fundamentally, and we call this, you know, from fear to curiosity, because you've got all these things that really represent the fear, whether that's the fear on your technical teams around security and privacy, or it's an individual's fear in trying something new that you have to overcome with this, like, momentum of value and these new things that are being unlocked with these new capabilities.

And, like, Lisa, I think you in particular really highlighted this kind of bottom-up adoption or this kind of organic adoption, and I'm guessing a lot of people in the room have had the experience of trying to mandate a top-down initiative, and that rarely goes well for change management versus how easy it can be when individuals see the value, when employees start telling one another.

So how have you, you know, in various initiatives, been able to generate that kind of bottom-up excitement that just makes change management so much easier?

Diane: Well, a Shark Tank kind of contest—so across the whole company, give us some examples of, you know, what you'd like us to work on, what you would like to work on with us with AI, and that started it, I think. The second thing is your point about top-down: it has to start at the top, with the idea that we're all learning, right? You can't have know-it-all senior leaders saying, "You guys go do your learning, right? And these skills are important for you." It's more about, "I'm taking the learning with you." So our CEO actually held a monthly learning session, and she'd interview different people and ask a lot of questions, a little like Oprah.

But with them, we all went through learning and got certified, whether it was cloud, AI, whatever it was. But the whole idea was we're developing those skills. So very senior leaders would be having town halls and go, “oh my god, that third module of AI was a killer. I failed the test three times, and I wanted to throw my laptop out the window.” And everyone was like, “oh my god, well, if he's doing it, then maybe I should be doing it.” So there's this sense of we're all learning together.

Lisa: There is something, right, about what people talk about, what people are interested in. And, I mean, we have this pleasure in that we are thinking differently about AI in lots of different facets, right?

So we're not only trying to talk about it in one space. We know that there's something special in the service of science and other things. So I do also think there's how it gets articulated. So, inside of our own company, we often talk about this as, look, this isn't about replacing people. We sometimes say it's like an Iron Man type of kind of energy, which is, you already have something strong, and then how do you enhance it or accelerate the value?

So, I think this has been something big. I mean, at the most basic level, we're just meeting people in the scaffolding of the problems that they already have. So, we're not trying to find a moment, right? We're saying, we already know this. We hear this from you, and we have this idea that you may want to try.

Look, there's always people in there who are like, no, thanks. You know, I'm not interested. And then we use all of our best skills, a little competition, a little cooperation, a little something to say, well, oh, that's cool. But Joey Schmutz over here thought it was great. So I think it's also about how we talk about it, but I think simpler is better in a lot of ways.

And really figuring out the use case and the value is where we've been spending our energy. But we have, as you said, like some viral pickup on things that have just been about meeting people in those problems. 

Diane: A couple of things though, I just want to say, we did replace people, right? So, it sounds like you didn't.

In our case, we said, look, take payroll. You know, our attrition rate is 14%. We're not going to replace that 14%. You've all got jobs, but the 14% that leave, they're not going to get replaced. So we were very clear about what the situation was, but the jobs are going to change.

You'll need to upskill. And everyone understood what skills they needed to learn in order to now do the higher-level pattern finding instead of the reconciliations that they were doing before. So I think there is a need for clarity around that. I think the second thing is, you’ve got to link learning with career.

So, in our personalized learning system, it would say, congratulations, Diane, you just passed this certification. Do you want to apply for any of these five jobs that have openings? You have a 90% match. And so there was this immediate gratification that you got as you went through your learning that you were now eligible for certain jobs that maybe you'd never even thought about.

Lisa: The value of being honest, though, that's super important, right? Just talking about what we're doing and what we're not doing, because I think it goes to your point about what's new and what's fear, and let's just be open about what we're doing and then how people can find their way in it. 

Das: I want to come back to this idea of simple communication, but, Cameron, I'd love to hear from you.

Cameron: Yeah, listen, I think part of my job is to create the conditions for the things to happen that I hope happen at the firm, which is a very easy sentence to say. But the doing of that is very difficult. This is grossly oversimplified, but I kind of think of it in two buckets: bucket number one is to find the human and predispose the human and equip the human, absent any other trammeling force, to do the thing, right?

So this is overcoming fear, it's skills, and the like. And that's hard enough. But then there's, like, the human in the context of the firm, which is a much different matter. So does everything in your firm, the compensation systems, your rating systems, your promotion systems, does it align with, does it reward, the thing that you want?

So very practically, right now at my firm, I can drop the person who's predisposed to innovation and using generative AI in the system, but the system is not built for that human to thrive. So, systems alignment and harmony is the big part of my job, at least on this particular topic. 

Das Rush: Yeah, and I'm being kind of cognizant of time, and I know that you've all had some experience with Nadia, with AI coaches.

What has kind of been the role of that in helping with change management? Because obviously part of an AI coach is, you know, leadership development, work coaching, so how do you think about using something like a tool, like an AI coach, to help bring employees along on that journey, and has that been helpful? 

Cameron: We have not done it yet at Citi. Conceptually, I'm in. In my personal life, I've done it and created my own agents and things like that. But right now, it's a concept. So I don't have applied experience, but I hope to. 

Das: Yeah. Lisa, I know you've had a bit of experience, too. 

Lisa: Yeah, I think we're just getting started and picking up speed, and it's also, for us, we're just looking at this kind of the whole continuum.

What do people need at different times? And we're just holding this belief that, you know, sometimes you have this moment, this need, and it's short. It's the, “oh, I just wish someone could think through this with me,” and it's just fleeting, and it kind of gets lost into the rest of your day. And so, we're really trying to plug that in, in the moments that people need it, I think, which is kind of on brand to some of the other things I've said. But we'll have to see you at the next one to talk about where it went wrong and where we really picked it up in application.

Diane: And I just have to say, I know Erik Erikson is down there somewhere in your Valence system. And that's where they start, right? Putting yourself in the shoes of the other person, asking those kinds of questions. I haven't used Valence, but I'm sure that's what it does. Erik Erikson is just brilliant, and not all of us can be as good as him, but Nadia will be.

Das: Yeah, like, sometimes just having that—you know, I think the point you made earlier about not having the amygdala hijacking, having something there that's just going to be a calm presence while you're working through that fear, going to your curiosity. 

I want to come back to a point that you made, though. I think you all made it, but I really heard it in kind of what you were saying—communication just needs to be simple, like you need to help people understand what this is, because that's really the starting point of the shifting. How have you gone about communicating the value of AI or what you're doing both to employees and maybe particularly to the resistant parts of the organization?

Lisa: I mean, we have a pretty nerdy organization, and that works on our behalf sometimes. People are really invested in understanding the value in a lot of different moments and in the work that they're doing. But I think for us, what we're looking at is helping people to understand that we can improve and advance the way that they use their time during the day.

So we already have some of this, right, in partnerships we have with Copilot and Microsoft. People are seeing that in just some of the simple use cases of their work. And of course, the people who have personal application, right? They know and understand what they do. And so, for us, it's just about showing them that, “hey, did you feel a little uncomfortable when you had to sit with your team and you had to figure out how it was going? Well, actually we can make that way more discussable for you. And here's the tool to help you figure that out.” 

And that's some of the partnership that we've already seen with Valence, right? So, for us, it's not a big thing, not a big moment, a big brand, a big name. It's just to say, look, we recognize we're in a moment in time of strategy shift, or a challenging something, or the world is hard.

And guess what? As a leader, you have to hold that space. And so, how do we help you figure it out? And so, that's really where we come and we introduce it, is to say to people, “look, you're going to do this thing anyway. And so it can either be a little bit weird and clunky, or we can help you get the best discussion, and here are some of the ways that you can think about it.” 

And so, by doing that and then following some of the data we already have, and just saying, hey, this is what people say, or we nudge in and say, “hey, when we look at the last couple of X and Y data points, this is where you are. So we have this idea and it might help you. Are you interested?” It is, again, just trying to find them right in the moment they need the support.

Das: Anything you want to add there? No? Okay. Great. I'm kind of curious in particular on this idea of the partners you need to work with, whether they're your CTO or your chief data officer, and I know, Cameron, you in particular face these kinds of headwinds.

What is the communication like there, and how are you finding, like, the ability to kind of communicate the value even with—and mitigate those risks? 

Cameron: Well, early on, some of you know Ethan Mollick, and we brought him in and talked to our seniors and the board just to show what the possibilities were. So, I think at the most senior levels, people get it, alright?

I really do think they do. I also don't think this is the type of thing that needs to be born, bred, and run inside the CTO office. This is more of an HR thing, in my view, but I'm biased, so you need to discount that a lot. But to your point, like the headwinds for us, anyway, have not really been tech.

Well, it's been info security number one, privacy number two, and just really understanding what this is. What are we going to let loose on our data sets? You know, what vulnerabilities does that create? We're moving through that now, finally. So now we're moving from that to some more navigable things (they're not esoteric at all, in the universe of things). But it's never been selling the promise of it or the functionality of it or what it can do. It's just, how can we get it through safely and responsibly?

Das: Yeah. Like finding enterprise-grade tools.

Cameron: That's exactly right. 

Das: We've got time here for one last question. And so, I think the question I want to ask, because this is a fast-moving space, and we're talking about change management and that's really envisioning what you want the future to be. So 12 months from now, if we're back here, we're sitting in these same seats, where do you hope you are with AI in your organization? 

Lisa: Well, I mean, I hope people are telling stories of, it helped me be better at the thing I was doing. And ultimately, that's what we're after in a lot of spaces, whether it's an application in science, whether it's helping us, you know, think through a vacation and leveraging AI.

I think, ultimately, what we're trying to do is find the stories where people say, “I was kind of two or three steps better than the place that I started,” and, yeah, I think that's what's most in play.

Cameron: This may not seem ambitious enough, but weighing all the factors I have to weigh: what can I do safely, and get real penetration on, you know, in 12 months? I think the performance review use case is good.

I think the culture measurement use case is good. Comparing three or four documents, summarizing meetings with Copilot, that's good. Just get that out at scale in 12 months. That would be a victory.

Das: And, Diane, I know you're not in an operating role now, but you are advising. So, where do you hope to see the companies you're working with go from where they are now to 12 months from now?

Diane: Well, look, I mean, I do think it's important that people feel delighted by AI and not intimidated by it. And so, they have to have had some of those early, delightful experiences; if they've had that, then we're in good shape. We saw that happen with digitization where, in the consumer world, you know, people were delighted by the fact that Amazon knew who they were, and all this kind of stuff, but their companies were way behind, and “dear employee” letters were going out.

And we caught up as corporations, but it took us longer than the consumer platforms. I think it would be such a win for HR, because I agree with you, HR is leading this. It would be such a win for HR to be as delightful, if not more delightful, than the consumer platforms are as an experience.

Das: Oh, I love that as a goal for everybody. Next 12 months, HR can compete with the customer-facing things, and let's see if we can make employee experiences more delightful than customer ones. That's awesome. Hopefully both are, but I love that little internal competition. Great note to end on.

Cameron, Lisa, Diane, thank you so much for coming here. I really appreciate it. And with that, if we can get a round of applause.

Change Management for AI with Citigroup, IBM, and Novartis

No organization is a monolith, and within any company, there are AI enthusiasts and early adopters as well as resisters. How do you bring the entire organization along and shift the most resistant employees from fear to curiosity? HR Leaders Cameron Hedrick (Head of Learning & Culture at Citigroup), Diane Gherson (former CHRO at IBM), and Lisa Naylor (Global Head of Leadership Development at Novartis) share their insights on getting global enterprises ready for the AI era.

Cameron Hedrick

Head of Learning & Culture, Citigroup

Diane Gherson

former CHRO, IBM

Lisa Naylor

Global Head of Leadership Development, Novartis

AI Myths

In this condensed version of a popular internal presentation at Microsoft, Brent Hecht, Director of Applied Science at Microsoft, dispels common AI misconceptions, including why AI isn’t all hype, why AI’s best use isn’t to replace people, and why we should be shaping the future of work instead of trying to predict it.

Brent Hecht

Director of Applied Science, Microsoft

Key Points

Jeff Dalton: Thank you very much for that introduction. It's my honor and privilege to introduce our next speaker, Dr. Brent Hecht. Dr. Hecht is a distinguished colleague of mine. He inspires both my academic research at the University of Edinburgh and the work that we do here at Valence on the applied side.

I first encountered Brent's work through the Future of Work study from Microsoft. Maybe some of you are familiar with it. That really looked at hybrid work and how it was transforming work in the age of COVID-19. More recently, there's been work on the new future of work that's focused on the future of AI.

There was one published last year, and there's also one coming next week. So, coming soon, look out for that. One thing that stood out to me in one of the recent reports was the role of AI as a provocateur for critical thinking: something that can enhance our people's ability to work effectively and to think critically.

And so he's going to be doing that for us today. He's going to play that role of that kind of provocateur to help us think about where the future of AI is going for work. I want to give you a little bit about Dr. Hecht. He's a director of applied science at Microsoft Research, an associate professor at Northwestern University, where he leads the People, Space, and Algorithms Research Group, focusing on how gen AI can be a positive influence on society. 

He's a prominent figure in responsible AI, with over a decade of human-centered work and over a hundred publications in top journals. His research has helped lay the groundwork for developing equitable and transparent AI systems. His foundational work on algorithmic bias, and on methods for measuring and improving AI systems, has been essential to building useful, trustworthy, and fair AI, which are core pillars that guide what we're doing with our work on Nadia.

Brent's research has been featured in major publications, the New York Times, NPR, as well as major venues like Wired. So we're honored to have him here. He's also a founding member of the ACM Conference on Fairness, Accountability, and Transparency, a leading venue for AI research in responsible AI, and today he'll be addressing AI misconceptions. So I look forward to hearing what he has to say for us. Please welcome Dr. Brent Hecht. Thank you.

Brent Hecht: Hey, folks. Thanks for welcoming me here. And I'm excited to talk to you about some misconceptions about AI. And I will try this clicker now. Alright. So I joined Microsoft from Northwestern, where I was a professor, as was mentioned, in 2019. And the pitch was come, you know, invent the future of work.

I thought, great, Microsoft's a wonderful place to do that. Little did I know I would have such an amazing opportunity to hopefully help change the future for the better, thanks to really generational changes in work that happened and are happening over a five-year period. And the first was the switch to hybrid work, which is stabilizing a little bit, but it's still going on, and I'm sure you all deal with that all the time.

And then language models, which is something that I studied for many, many years, they became good enough to become practical for a number of applications and unleashed AI into the workforce in a way that folks had predicted for a long time, but it's here now. So when the first disruption happened, we all remember, I started at Microsoft in, roughly speaking, September 2019.

So by March 2020, I was already doing a slightly different job than I was expecting. One of those was trying to corral Microsoft's amazing research capacity into helping leaders at the company, our customers, and of course product leaders understand what hybrid work is, what it might be, how we can make it better.

And one thing we did in that process was develop this presentation called “The 10 Misconceptions About Hybrid Work” and delivered it all over the place. It was a really fun presentation to give, talked about things that people might've heard in the media, people might've assumed based on first principles, and why the science suggests that might not be correct, and what we might do to correct that misconception. It's also—this format is just a really fun way to talk about a bunch of cool science. So it was a really fun presentation to give. So when the AI boom hit, we decided to put together something similar, in this case, five misconceptions.

So, maybe afterward, when we're chatting, you can nominate a couple more. The five misconceptions that are in this talk, and I have to sadly tell you, we won't have time to cover all of them today. This is a 45-minute talk in its full glory. If you're a Microsoft customer, I'm happy to come by and chat with you all sort of more directly.

Even if not, too, happy to chat as well. Today we're gonna be actually covering, one, two, and five. So the first misconception we'll talk about is that generative AI either is going to make my workforce or my company a zillion times more productive, or it's basically useless, basically just hype.

Second misconception is that the best use of generative AI at my company is to replace people. That is a misconception. We are really lucky that's a misconception, and that the science points to that being a misconception, but it is one. As for the two misconceptions that we're gonna skip: we would talk about how text is not likely the future of computer interfaces.

And one that's close to my heart, close to a lot of my research too, is that for organizations and people that make content, generative AI can be a huge boon instead of something that's threatening. So the full version of the talk goes over that. But then the fifth misconception is a carryover from the hybrid work misconceptions talk, and the misconception is that we can predict the future of work, and we should spend a lot of time trying to predict the future of work. So, we'll talk about that at the end. 

So, let's jump in with the first misconception here. And that misconception is that generative AI is either going to, you know, as soon as I install Copilot, make me a zillion times more productive and my organization a zillion times more productive, or it's just hype and I can ignore it. And the reason this is a misconception is that most of the evidence is pointing toward generative AI being what's known as a general-purpose technology. And we know a lot about how general-purpose technologies affect productivity individually, in organizations, and across the whole economy.

We know a lot because the analogies to other general-purpose technologies are, you know, proving out to be, at least as the evidence suggests right now, somewhat accurate. Other general-purpose technologies are electricity, the automobile, the internet, these types of things. And simplifying things really dramatically, and if there's an economist in the room, I apologize, but I think you'll agree this is directionally correct.

Simplifying things dramatically, general-purpose technologies affect productivity in the economy in a two-phase process. The first phase is when we get the general-purpose technology, but we have our existing complementary technologies and our existing workflows.

So, for example, when electricity became widely available in the United States, most manufacturing was steam-power based. And I don't know if we have any mechanical engineers in the audience. I'm not one of them, but I think we can all agree that electricity is, we can imagine, useful for manufacturing things.

At the time, though, all the factories were laid out to take advantage of steam power. All the processes were designed to take advantage of steam power. So they looked and they said, “hmm, this electricity thing seems cool, but I don't know what to do with it.” One thing they did was, instead of having someone run around and light candles to keep things going at night, they replaced those candles with electric lights, and that does increase productivity.

The former professor in me really wants to use a laser pointer here, but that's that linear part of the growth there. That's a real productivity increase, and that's what we see in phase one, is sort of incremental productivity gains. Twenty or 30 years later, which, not coincidentally, is about the work life of, you know, it's roughly a career, or roughly a generation at least, someone figured out how to lay out a factory, and there were a bunch of new technologies developed to take advantage of electricity that allowed them to radically increase manufacturing productivity.

And that's where you see that hockey-stick growth that folks want to see right away, but it's unrealistic, in almost every case, to expect that. Another really good example comes from automobiles, also a great general-purpose technology. We could take, you know, a Toyota Camry and put it in 1910, and that would be pretty impressive.

But actually, it wouldn't be all that useful. We wouldn't have roads. Just think about the number of inventions that had to be developed to implement gas stations. We didn't have gas stations, you know, anywhere we would need them. Those needed to be developed over time.

So that's a really good example of people having to invent a lot of new complementary technologies to take advantage of that general-purpose technology. So you need those changed workflows and those new complementary technologies to unlock that hockey-stick growth. 

I mentioned 20 to 30 years. Historically, that's about right in terms of how long it takes. There's reason to expect it'll be a lot shorter this time. One reason is that a lot of the infrastructure we need to build out, particularly those complementary technologies, is digital infrastructure, and we have it already. So, I have some colleagues at Microsoft Research that I work with very closely who suggest that, you know, maybe five years might be a reasonable thing for us to expect before we see this hockey-stick growth.

The other thing to mention is there's an argument that, because of the way the generative AI tools are built (I think a lot of folks know they take data from the internet; they're inherently dependent on that data), you could argue that this is actually the moment of the internet's hockey-stick growth.

So it's actually the internet as the general-purpose technology. But that's a conversation over cocktails. I want to go back to that first phase though, and tell you why I'm so excited about it, and that we shouldn’t ignore it. So, one thing I do at Microsoft is help to coordinate tons of studies that look at how much Copilot, specifically, can increase productivity on specific targeted tasks.

And the results there are pretty good. It makes it easy for a scientist like me to come talk to folks like you about the results. There aren't a ton of complex stories in those results. Almost all of these are lab studies, and they're showing, you know, roughly 25-50% productivity gains across the board. There are some exceptions, you know, in these types of things, but that's pretty good for a phase-one productivity increase.

And it's not just Microsoft that's putting these out. OpenAI has a bunch of studies, we have partners at Harvard that are putting out studies with similar results, and these types of things. The good news is, even if you assume that these tasks only apply to about 2% of what people do every day (which is conservative, but it's not 20% or 30%), and you take that lower end, that 25% productivity increase, for most people you're going to be creating enough top-line or bottom-line value to be able to say, hey, I think we're selling a value-creating product, which, again, for me as a scientist makes me feel good and comfortable being able to talk about these things.
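Brent's back-of-envelope math here is easy to check. A minimal sketch: the 2% task share and 25% per-task gain are the figures from the talk, while the headcount and cost numbers below are purely hypothetical placeholders for illustration.

```python
# Back-of-envelope estimate of phase-one value from targeted AI task gains.
# task_share and task_gain are the figures cited in the talk; headcount and
# cost_per_person are hypothetical placeholders, not data from any study.

task_share = 0.02          # fraction of the workday the tool helps with (from the talk)
task_gain = 0.25           # productivity gain on those tasks, lower end (from the talk)
headcount = 10_000         # hypothetical organization size
cost_per_person = 150_000  # hypothetical fully loaded annual cost per employee, USD

# Overall gain is the task share scaled by the per-task improvement.
overall_gain = task_share * task_gain
annual_value = overall_gain * headcount * cost_per_person

print(f"Overall productivity gain: {overall_gain:.1%}")   # 0.5%
print(f"Implied annual value: ${annual_value:,.0f}")      # $7,500,000
```

Even with these conservative assumptions, a roughly half-percent organization-wide gain translates into millions of dollars a year at enterprise scale, which is the "value-creating product" point being made.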

So even though we're not going to be in phase two for a bit, those phase-one productivity gains are important and valuable for companies, and companies that leverage them are going to be more successful than companies that don't.

The second thing I'd flag, we just put this out two or three months ago. Another thing I do on my team is put out reports to help people sift through all the science that's coming out about the ways that work is changing, with hybrid and now generative AI. We put out our second AI and productivity report, and in that report is the first public mention of an incredible study that some of my colleagues at Microsoft have done. Sixty customers signed up with them to do a randomized, controlled trial of Copilot being deployed in their organizations. This is almost medical-quality information. So, you know, they randomized the seats that got access to Copilot initially.

And we're able to see how work changes by comparing people who had access to Copilot to people who were randomly selected not to have access to it, and we're seeing really good phase-one-style productivity increases, like we would expect from the lab studies. So, we're seeing 10% more document creation, and roughly the same order of magnitude of effect in email time. Meetings are really interesting. Some companies see a significant drop in meetings, some companies see an increase. Looking into the increase, it's that Teams Copilot is becoming so effective that people are using meetings to, for instance, write documents.

So, you know, hey, let's talk about this memo together, and let's have Copilot write the first draft of that memo. So, pretty cool stuff, and very impressed with the study that my colleagues have done here. 

Okay, moving on to the second misconception, which is an important one, and one that I'm sure a lot of people are thinking about for their organizations, their personal lives, themselves, and their kids.

When people first see generative AI, they often think, “wow, this is going to replace this set of jobs, my job, this organization within my company.” And I've anecdotally found this to be a very widely held misconception. But it does run against a key principle in the literature on how technology has changed productivity and, quite frankly, improved living standards.

And broadly speaking, that literature rolls up to betting long on human labor. It's been a good bet for the last 300 years, since the beginning of the industrial revolution. Specifically, betting that the time of humans doing work will become more valuable with advancing technology has been a really good bet.

And those who placed that bet have generally won. And those who placed a short bet, or assumed that labor-saving technologies are mostly substitutional, to use slightly technical terms, have, broadly speaking, no pun intended, come up short. So, my colleague at Stanford, Erik Brynjolfsson, wrote a great piece.

I'd really recommend folks check this out. It's as much a very much-needed cultural critique of my field as it is a discussion of the topics associated with this misconception. It's called the Turing Trap, and it sort of critiques computer science for using the Turing test, which many of you have probably heard of, as the goal of the entire field.

It's an inherently substitutional test: how can you trick someone into thinking that they're talking to a human instead of a computer, rather than thinking about new, incredible things that humans and computers can do together? One anecdote he has in the essay, which I find very powerful, is that apparently there's an ancient Greek myth, must be a long-tailed one because I don't remember learning about it in school, where someone had invented some magic device that could do anything that a human can do.

And he thinks about, okay, what would happen if that were deployed? Well, you know, no one would have any work to do, but we'd still be stuck with latrines. We wouldn't have vaccines, you know, these types of things. All of those technology-plus-people quality-of-life advances (and another way to understand quality-of-life advances in this context is productivity improvements) wouldn't have happened, so we'd be stuck in that era.

And so, you can imagine, if you just replace all the people at your company with generative AI, you'll have the same potential, simplifying things, selling the same amount of stuff. If you take generative AI and make your people more powerful, you might be selling 100 times, 200 times, 300 times more.

So, for those of you who are stock market folks, and we are in New York: if you take a short on human labor, the most you can save is the cost of human labor. If you take a long bet on human labor, the potential upside is infinite. So, implications for leaders like yourselves. If you have the instinct, and many do, it's okay, don't feel badly about it, to ask, “this is a labor-saving technology, so who can I replace, how can I reduce costs in my company with it?”

Shift that first question to: how can I make my people more productive using this technology? How can I do more things, and different things, than I used to do? There are some complexifiers here, to use a term from a former president.

The first is the nature of demand. So, let's take software engineering as an example. Many software engineers are concerned right now that these technologies will replace them. I'm less concerned about that, because my boss's boss is in charge of all of Microsoft's productivity tools.

And he never says, can you make the same number of tools, at the same quality, at the same speed. He wants more tools, better quality, faster, right? And that is an implicit statement of a lot of unmet demand for software engineering. So if these tools make us 100%, 1,000% more productive, there's still a lot of demand that will be there to, roughly speaking, absorb that productivity gain, right?

Where the demand is capped, things get more complicated. A lot of people talk about customer service as an example. I'm less convinced about that. But if there is a fixed demand for customer service at your firm, and these tools do make someone 50% more productive, you might think, then, that labor substitution might be something you would consider.

However, having dealt with some customer service, particularly within the insurance industry lately, I can say there's a lot of unmet demand for high-quality customer service, at least from this customer. So we can think about how to improve the quality of the customer service instead of laying people off initially, and that will present, potentially, some very significant business gains for you folks. 

So, the bigger caveat, I think, is actually on this next slide, which is that everything I've talked to you about is industrial revolution economics, and the goal of many people in my field and many large organizations in my field, most notably OpenAI, is actually to end those economics.

We don't know if they'll be successful. They have not been successful yet, but the goal there is effectively to create a technology that is so productive, that no amount of unmet demand really matters. And this is very much an explicitly stated goal in OpenAI's case. Their charter, the second paragraph, says their goal is to create an AI technology that can do most people's jobs better than they can.

That's how they define AGI. It's the implicit goal of a lot of people in the AI field as well. It's a goal that we should be discussing more as a populace, because there are a lot of implications from that that we don't have time to talk about now. But if they are successful, then a lot of what I said is not as relevant. But they haven't been successful yet. 

Alright, let's jump into the fifth and last misconception here, and that's that we can accurately predict the future of work, and that we should spend a lot of time doing this. So this is where I tip my hat to you folks who know how to manage people. I'm a computer scientist, and my computer science PhD advisor would frequently tell me, Brent, the social sciences are the hard sciences. We know much less about how people work than we do about how computers work, and people in my field sometimes forget that and make predictions that turn out not to be correct. 

So on the right side of this slide, you can see a whole bunch of very, very famous computer scientists making very, very inaccurate arguments about when we'll have, effectively, AGI, roughly speaking, technology that could do anyone's job, as per what I was just saying.

And so we have to be careful about listening to those and making decisions based on those, in part because oftentimes they're either subconscious or conscious attempts at self-fulfilling prophecies. So we have to be careful about, oh, someone said this is the future, then everyone says, okay, this is the future, and they make it the future.

So instead of that, I'll actually skip ahead a bit here and suggest that—you folks are all business leaders, and I work at a very large tech company. Instead of trying to predict the future, our energies are better spent trying to create a future that we want. So when I hear people at Microsoft saying, “what's the market going to be? What's the technology going to look like in 2025? What's the market going to look like in 2030? Will we have AGI by date X?”

Let's think instead about what we'd want if we had that, or what type of technology we'd want to create. Our time is much better spent doing that. We are very poor at understanding one person, let alone the complex dynamics of how a technology and a society will work together.

So, beyond that—let me see if I know how to move back. I don't know how to move back here, so I will make an attempt to speak over slides that we just skipped through. I do want to say this doesn't mean that we just say, okay, let's ignore it. You know, you folks are business leaders.

You have to plan as well. So, first and foremost, we want to be creating. But what do we do when we need to plan ahead? This is again where I turn to your expertise. Business leaders know how to handle uncertainty. You diversify. So instead of making a big bet on a single potential outcome (“AGI will arrive by 2028,” “AGI will never arrive,” these types of things), diversify your expectations.

Hopefully the presentation today walked through some of the higher probability potential outcomes. But planning for a low-disruption outcome, a medium-disruption outcome, and a high-disruption outcome is reasonable for your organization, and actually for your personal lives as well. 

The long version of this talk has been unreasonably popular internally at Microsoft. I rolled out of bed, fed my three-year-old, and got stains on a sweatshirt. I was like, “maybe I should take the sweatshirt off before I give this talk that I thought 20 people were going to attend.”

I had 1,200 of my colleagues attending one of the times I've given this talk. And one reason is they all kind of want to know what to do with respect to, for example, a question, “what should I tell my kid to major in? What should I think about for my own future?” 

For your personal lives, too, I recommend thinking about a low-disruption outcome (the internet arriving, a standard general-purpose technology), a high-disruption outcome (maybe electricity, a big general-purpose technology), and then the very high-disruption outcome, which is sort of the outcome where OpenAI succeeds, and planning for each of those rather than trying to make a big bet on one of them.

And with that, I'll close with this slide. You know, one thing I do at Microsoft is help out with a lot of our responsible AI work. And this is a very simple slide that I use to help guide that work within a company. We are limited in doing stuff that's great for the world but outside of our self-interest.

But there's a ton of stuff that's great for the world and is in our self-interest. And getting this stuff right, getting generative AI to land well in the workforce, is definitely in that center, so that everyone feels like they are benefiting from generative AI rather than feeling like it's something that's happening to them.

I feel very passionately about that. I suspect many of you do as well. With that, here's a list of the misconceptions again and a link to where you can learn a little bit more behind the science of what I've been talking about, the annual New Future of Work report. And I will take questions when we have a chance to mingle.

AI Myths

In this condensed version of a popular internal presentation, Brent Hecht, Director of Applied Science at Microsoft, dispels common AI misconceptions, including why AI isn’t all hype, why AI’s best use isn’t to replace people, and why we should be shaping the future of work instead of trying to predict it.

Brent Hecht

Director of Applied Science, Microsoft

AI: Now or Never

AI is a fast-developing technology, and it may be tempting to wait and see how it evolves. But, as former Vanguard CEO Bill McNabb explains in this fireside chat from Valence's AI & The Workforce Summit, those who lean in early will benefit from compelling productivity gains and develop new and better capabilities.

Bill McNabb

Former CEO, Vanguard

Key Points

Parker Mitchell: So, Bill is the former CEO and chairman of Vanguard. And Bill's also been a close follower of Valence for the past five years and has been an extraordinarily valued board member for the past three years. And I thought we'd begin, Bill. I mean, when we were first being introduced to Vanguard, I know we, you know, dressed up nice and tried to pretend that we were a big company, but I think you saw through it a little bit.

Tell us a little bit about why Vanguard decided to take a bet on what you call a garage startup. 

Bill McNabb: Well, you were a garage startup. So, well, it's good to be here, Parker. Thank you for having me. It's actually even more than five years ago now, which is really remarkable. One of my former colleagues had met Parker and had come away impressed with some of the ideas Parker talked about and Parker and his team talked about in terms of making teams more effective.

And you have to understand, you know, the two most impactful experiences I had sort of before, you know, getting to a place like Vanguard were, one, I was a competitive rower, so team orientation was sort of, you know, became part of my DNA. And second, I was a teacher. And so the whole teaching, coaching thing became really interesting to me.

And at Vanguard, we had done a lot of work—you know, we're investors, so with everything we think about, it comes down to an ROI calculation. And we were struggling with the amount of money we were spending on development, not because we thought it was a bad thing to do, but because we couldn't figure out why there were, like, big drop-offs after someone did the initial training and workshops we ran.

And the idea of having a team-based platform that actually reinforced some of the concepts we were trying to teach really hit home hard. And then, of course, as you've evolved the business to the coaching model, that, to me, was the big gap that was missing. So it's been really exciting to watch it evolve.

Parker: That's terrific. And you're also on boards. So you are seeing this not just, you know, from the stories you're hearing from Vanguard about the day-to-day challenges. But I imagine that 90% of board meetings are about AI. Can you share, maybe, what are the differences in the tenor of conversations from 12 months ago to now at the board level?

Bill: Yeah, so, and you know what's really interesting is, you know, I have the privilege of serving on two very large public company boards. But what's really interesting is I also sit on boards of several startups and sort of smaller-cap companies.

And we're having the same discussions. And I would say the big tensions, if you will, are how fast to go and where to sort of put your bets. And, you know, in most of the discussions, at least that I'm involved in, what we as board members are doing is really encouraging companies to not talk about it forever, but actually go do something and find use cases that really make sense for their particular business and go try something.

It's interesting, in the larger companies in particular, there's a little bit more of a tension emerging between the business leads who want to go try things and the chief technology officers who are like, “let us build it for you.” And one of the things we're doing, at least again in the boardrooms I'm in, is we're saying to the CTOs, “yeah, great, you guys go develop this.”

But we're also encouraging the business to go experiment with people who are maybe a little deeper on particular topics. So, you know, Vanguard is an example. And again, I'm not on the board at Vanguard anymore. But I know, you know, talking to my former colleagues, they're very early adopters of Valence.

Love it. It's deployed through probably about 80% of the company at this point. 

Parker: We checked, 16,000 users. 

Bill: Yes. So 16,000 out of 20,000 employees. So pretty remarkable adoption. We have a company called Writer, which is a startup in there doing content creation, and the CTO has four or five big, you know, projects that he's driving the development for. And, again, for the companies with those kind of resources, I love that kind of approach.

You know, for smaller companies, I think it's finding people like Valence who can really solve a specific problem for you really quickly and give you experience with AI. That's what really makes a lot of sense.

Parker: I've heard other people talk about that. The sort of portfolio approach, some internal, some external, some existing vendors, some new vendors.

If you were giving advice to a leadership team, how would you give advice to navigate that over the next couple of years? Because there'll be tensions between different groups wanting one thing or promising something else. 

Bill: You know, I actually wouldn't overthink it. You have a certain amount of—every company has a certain amount of capital to deploy.

And some companies, it's a large amount. Some companies, it's really small and, you know, really tightly controlled. I think the thing is to make sure that you actually do have a balance. The one thing I'm pretty convinced of, unless you're a deep, deep tech company yourself, no matter how good your engineering team is—so, again, let me just step back. You know, at Vanguard, 35% of our employees are software engineers. Most people don't think that; they think, “you're an investment firm.” We're an investment firm where 35% of our population are engineers, and our engineering team is good. Like, they're really good, and they think they're even better than really good.

And you know, the truth of the matter is we can't be as nimble and agile on things like an AI coach development as a company that's designed to do it. And I think, as business leaders, the advice I would have is just make sure whatever the capital allocation you have for these kinds of experiments, you've got a piece where you can pick a couple of firms and go really deep with them, because it's not that expensive. And I think you're going to get insights that you won't get from your own teams. 

Parker: Are there any interesting results from experiments that have floated up to the board level at either of your two public companies? 

Bill: Yeah. Less on the coaching side, more on the content side.

But the company I mentioned earlier, Writer, has gotten, you know, in one of the companies, it's like, whoa, these guys are really good. Like, we're able to do things from a content perspective that we never thought possible before. I think there are also a couple of players out there who are really, very deep in helping build out customer service, just automating in a much more intelligent way how customer service reps respond to calls in particular. And if you can make those folks more accurate, more efficient, overall more effective, again, huge amount of savings, but also a huge jump in quality. 

Parker: One of the things that you and I have talked about is your belief in not just the value of leaders, but the importance of investing in leaders, and that's been a through line throughout your career.

Can you share a little bit about how that showed up for you at Vanguard, the investments that you made in the pre-AI world, and then we'll talk about post AI? 

Bill: Yeah, I mean, you know, I was talking to somebody about this before we started here: I had the great privilege of joining Vanguard when we were just a little beyond the startup phase, a few years into our history, working with our founder, Jack Bogle.

And Jack was, you know, for those of you not in the financial services world, he's really an iconic founder. Visionary is sometimes a word that's used too often, but in Jack's case, it was true, and he completely disrupted investment management with our approach. But Jack also had this instinct around people and that, you know, he had a saying, “even one person can make a difference.” And no matter how big we got, he kept repeating that mantra.

So, I was lucky to grow up like that. And then Jack's successor basically took it another step further, and he was like, we've had this amazing founder who's a visionary and, you know, pretty directive in terms of how we built the company. That's not going to scale. We'd gotten to a hundred billion dollars doing that, from a startup of one and a half billion, but if we wanted to go to a trillion, we were going to need to have a much more team-oriented culture.

And so he really installed the ethos that, at the senior level, a high-performing team was the way we wanted to build the business, not like one visionary leader, you know, sort of directing everybody where to go. And then, as you know, I was our third CEO. And one of the things that we began to see is, while the high-performing team at the senior level was working really well, farther down the organization, there was a little less engagement than maybe we wanted.

And we had seen, just from a business-case standpoint, a direct correlation between employee engagement, and we used Gallup at the time, versus net promoter score of those particular client groups. The higher the engagement, the higher the net promoter scores. Again for this audience, that's probably like, yes, of course. You all know that. But I would say, this was the early 2000s. People didn't—it didn't come to them naturally. And so we actually pivoted and changed the whole way we thought about attracting and developing leaders and made that the central part of what we do.

And you know, the way I like to think about it, we did a lot of work with Jim Collins. And so those of you who are familiar with his “good to great” concept, we built a flywheel. Like, here's the business model that we want to have and what the different components of that flywheel were. I won't bother you with all those details.

But at the heart of it was high-performing people, and that was like the axle upon which the flywheel would spin. And so, we started talking about this, and we had the, again, really good luck in that one of our neighboring companies was run by a guy named Doug Conant, who runs the ConantLeadership center now.

And Doug was the CEO at Campbell’s, and some of you are probably familiar with his work on engagement. Doug came to us and said, “how are you going to know if you're successful?” And some of our people were asking the same thing. So we had four numbers that we focused on at Vanguard.

One was investment performance. You would expect that. One was client loyalty, so net promoter score. You would expect that. One was our version of profitability, which was, you know, what expense ratio we charged our clients. Doug came in and said, “you need to have a people component to that, and it needs to be first.”

And we actually did that. We used an engagement ratio that we calculated from Gallup engaged to disengaged. And that was our number-one number. So when we would report out to the company how we were doing and what our aspirations were, we always started with employee engagement. And if you're going to get great employee engagement, you've got to have great leadership.

And so, that's where the work really began. It's actually what led us to you guys originally, was how can we take that to an even higher level? And I remain convinced, you know, at Vanguard, and again, you're always a captive of your own experience. But when you ask people—I sat with a lot of our competitors at industry trade associations or whatever.

And, you know, people would say, “oh yeah, Vanguard, they're really, you know, they pioneered that indexing thing,” which most of our competitors hated. They're really low cost. Most of our competitors hated that because it cut margins. No one ever talked about our people. And so we didn't really brag about it a whole lot because we didn't want them to know.

We thought that, actually, of course, we had advantages structurally that helped us with the cost, and we had the indexing idea. But what really turbo-charged our growth was when we doubled down on people engagement among the employees and leadership, among, you know, frontline leaders all the way to the top.

And you can see a company that was growing at a pretty good clip, then went into hyper-growth drive when we got better at the people side, and it was just direct, you know, it was just math at the end of the day. You could then take that to the company, and you could take it to your board, and you could say, “see, all these investments we're making on the talent side, look what's happening to the business side.”

Parker: And when you have that straight line, that correlation-causation equation, that's incredibly powerful, because then as people see the number go up, they understand why the investments are being made. If we turn our eyes to the future, the next three, five years, there's a world in which there'll be quite a bit of disruption on the people side.

How would you suggest that CHROs or CEOs who might be like-minded to you, that they think about that? 

Bill: Yeah, you know, and again, it's really hard to know ahead of time exactly where it's all going to come, but you get some clues just by watching what's happening even today. 

What we did, and again, whether it's applicable across the board or not, we went through a very early period of reskilling our people. So, you know, I said 35% of our workforce were software engineers. Every major financial—every major legacy system in investment management was, you know, essentially built on COBOL code with DB2 relational databases.

Like, that was it. And so we had all these engineers, that's what they knew how to do. Well, imagine what happened when all of a sudden workstations came around and then the internet came around. We actually reskilled most of our engineers. And we had really aggressive programs to teach them new languages, new ways of coding.

We moved from a waterfall development approach to an agile approach. You know, I think we probably started that move almost 20 years ago now. And obviously, today, I'm sure they've gone way beyond anything I can imagine. But the commitment to reskilling people, again, we were fortunate; we were a really successful organization.

So we had the resources to do that. But I'll tell you, I can't prove it, but we had much lower turnover than our competitors and much better performance as a company as a result. So here's a correlation. Whether there's causation, I can't really prove it. I believe there is. And I believe very strongly there is, but that's how we did it.

And it wasn't just—I use the software engineering group as an example, but in all of our other major areas, similar things happened. Our processing groups had to change what they do. And again, we tried to reskill as much as we could, and, look, you know, we had people who resisted the reskilling, and they usually ended up taking themselves out of the equation.

We never had a RIF through my tenure, so for the first 40 years of the company, and I think a lot of it was because we got out in front of some of these issues.

Parker: And I'll add that I think you took over as CEO the week after the financial crisis or the week before the financial crisis, 2008.

Bill: Two weeks before. So, I had two weeks.

Parker: One of the things you and I have talked about is sort of the only wrong answer is doing nothing. Can we just end on sort of advice that you would give to someone who's struggling to convince their team on, you know, why they need to take action today?

Bill: You know, so I'm a huge fan of Clay Christensen's work on The Innovator’s Dilemma. And again, in this audience, I know most of you are familiar with that. And, you know, as soon as you feel like, okay, we can't do something new because it's going to disrupt what we've been doing, and we've been really successful.

It's the beginning of the end. You know, Jack Bogle, our founder, was actually a fan of a deep predecessor of Clay Christensen's, a guy named Schumpeter, an Austrian economist, and he talked about creative destruction. And you know, my observation is creative destruction is actually one of the most important elements of capitalism.

And companies that are, you know, if you can imagine something could happen, somebody's already doing it, and they're coming at you from a competitive landscape. So I actually think this isn't like you get to choose. I think you have to do this. And, again, I don't know exactly where AI is going to end up, none of us do, or maybe a couple of our later speakers do—we've got some real gurus here. But I think the biggest sin anybody could do here is not do something, because I think this is coming.

And the more experience, the more testing, the more, you know, pivoting from lessons learned that we do at this phase, the more likely we are to not only succeed, but actually seize opportunities as businesses, you know, from this new technology. But I'm completely convinced that this will be as disruptive as the internet was.

And, you know, obviously the internet was as disruptive in many ways as the original industrial revolution. And if you sort of play that out, you want to be part of it. You don't want to be a victim. 

Parker: And there's a learning curve, so I think it's important to get started early. I just want to say thank you.

It's wonderful to have these conversations. I know you're joining us en route from Philadelphia to almost the Canadian border, but we really appreciate you making the time today. Thank you.

Bill: Well, it's a privilege to be here, and I'll just say, having known this guy since he literally started the company, I am not unbiased when I say this, but as an investor in the company and as a fan of what Valence is doing, it's really exciting for us to see all of you here, because we're going to learn from you, and, hopefully, it's going to make us a better company as well. 

Parker: Absolutely. Thank you, Bill.

The Era of Personalized Knowledge

The internet gave us access to more knowledge than ever before. But in today’s more complex world, we’re drowning in information. We need help to navigate through it – and as Valence CEO Parker Mitchell explains in this keynote, LLMs can provide that help by personalizing knowledge in every aspect of work.

Parker Mitchell

Founder and CEO, Valence

Key Points

Parker Mitchell: All right. Welcome, everyone. It is such a delight to be able to host an event like this. It's sort of a well-kept secret within Valence that we actually only began organizing this seven or eight weeks ago. And it was because I'd had so many conversations with CHROs and HR leadership teams, and they always came back to this same topic of how AI is going to be the disruptive force of the next three, five years, and how many outstanding questions they had, and probably most importantly, how much they valued learning from and talking to one another.

And so we thought we're having these kinds of conversations across the US, across Europe. What if we just brought people together and had a focused event? An event where we could hear from people who are luminaries and thought leaders in AI, people who are peers who are experimenting with AI, and people who are going to offer some provocations, some ideas to just get us thinking.

So, it's incredible that in seven weeks we were able to put the word out, and we now, as Das said, have 200 people who will be in and out over the course of the day. I think we have 1,000+ who will be joining us virtually. And they represent companies that, as Das said, have 7.5 million employees. And the thing that we're so excited about is just the potential for AI to literally reach, in positive, empowering ways, each and every one of those 7.5 million employees. 

And the thing that I think we all know—we all know AI is going to be disruptive, we all know that it's going to affect the workforce, affect the workplace in ways that we can't even begin to imagine and can't even begin to predict. And, I think many of you, as you are thinking about what this future looks like, you are also having to navigate the present.

I imagine that most people here have pressure from their boards, from their CEOs: try to do more with AI. What kind of use cases can we come up with? And I also bet many of you here have had pressure from your AI councils and your chief risk officers to maybe do a little less with AI. These are the tensions that we are always navigating, and I think one of the reasons, though, why we're here is we know that it is absolutely incumbent upon us as HR leaders to take this moment to lead.

Much of the conversation is around how AI is going to affect productivity. And there's undoubtedly going to be productivity gains. But, as we all know from the conversations we've been having, those productivity gains are going to cause shifts in the workforce. And for us to be able to navigate those shifts, we also have to invest in AI that is going to augment human potential.

And I think that's one of the reasons why we're all here today, to get those ideas about how can we find the AI that is not just going to automate processes of parts of jobs or even entire jobs, but how do we find AI that's going to augment our workers and help them navigate through this change? While most of this is going to be a conversation among peers, I am delighted to just share a few words about Valence and why we're here today.

So, I think we've been as close to some of these questions about AI in the workforce, about AI in the talent strategy as pretty well anyone out there. We've had, as far as we know, the first AI coach that was deployed in enterprise, and I think now it's among the most, if not the most, widely deployed coach.

We've had partnership conversations with many people in the room, many of our partners who, you know, are overseas or will be joining us virtually. And we've really tried to wrestle with both the future of AI and how to help people, and how do we introduce it, given all the complexities of large global companies.

But we're also only here because we've been thinking about these types of things, you know, collectively, the leadership team, the founders of Valence, for the past multiple decades. My background: I was the founder and CEO of a group called Engineers Without Borders. And we quickly realized that the people side of things was what really mattered when we were working on these difficult, complex projects.

And so, you know, back in the 2000s, we were teaching thousands of engineers a year concepts like the amygdala response, the ladder of inference, the Johari window, well before they were sexy. And I gotta admit, I don't think they're sexy, even now. Unless you're an IO nerd like some of us are.

But this is what we were trying to do. And we were doing this because we believe that this idea of personalization of knowledge, of personalized learning is truly transformative. And so when we founded Valence, it was this idea of, can we bring some of the personalized experience, the experience of having an executive coach or a team facilitator, can we bring that to the masses? 

And we were more product nerds. I was an engineer, and we were way better at that side of things than marketing. So actually, the very first name of our company was BetterTeams.coach. It's not exactly a sexy name, but it's pretty clear about what we were trying to do. We were building tools for managers, digital tools for managers, so that they could understand their teams better, work better together, and lead those teams better.

We're going to get a chance to hear from Bill McNabb, the former CEO of Vanguard, who was one of the early believers in these kinds of team tools, and this is what our early customers like Vanguard, Coca-Cola, Nestlé, and others have been deploying at scale. And what we learned from that is some of the challenges that managers face, what it's like to work in global companies, what it's like to work in countries around the world. But we knew that, ultimately, our vision was to offer this type of personalization at a much higher scale.

Now, in, you know, late 2018, 2019, all of our competitors in the HR space were talking about AI, and they all had an AI module of some sort, and my team, even my investors, were saying, “Parker, Valence should really talk about, like, what our AI strategy is.”

And I actually refused. I said, none of the stuff that’s being done right now is actually valuable AI. It’s machine learning, which is really great if you have large, labeled data sets, but that's not what it's like to engage as humans with one another at work. The thing that matters most to us is language.

We are complex beings in a complex world, and we are communicating with one another by language, and until we can understand that, AI isn't going to make much of a difference. And this is in 2019, and we all knew in 2019 that AI that could understand language was decades, decades out. Well, we were wrong, but I still remember sitting down with a very good friend of mine who was a researcher at one of the large AI companies.

This was a little more than two years ago. It was a few blocks north at the café near Union Square. And we were talking about what was possible. He knew my views on AI and he said, “Parker, we're actually getting pretty close to cracking language,” and he let me play around with early versions of what they had.

And I remember, after about two hours of playing around with it, I went home and I wrote a blog post, and the blog post was titled, “Are We in Our Gutenberg Press Moment?” 

So let me explain a little bit. As I said, I think language is one of the most important things for charting the course of humanity. Language is how we codify ideas, it's how we share ideas with one another, it's how we pass them on to future generations. But for the first 70,000 years, we were limited to the spoken word. For the last 3,000 or 4,000, a small elite had access to written language and could begin to sort of codify that knowledge. But the year 1440 was a particularly pivotal moment, I think, in history.

In the 50 years after Gutenberg invented his printing press, more words were printed than in the 50,000 years prior. So we just saw this explosion of literacy, explosion of knowledge, and explosion of human potential. But where we are now, if you think about going about your daily life and, you know, your personal life, or especially in the work life, we are just inundated with information.

We are drowning in a sea of it. And information is not the currency that matters, it’s how to apply that information to the context at hand. And when you have an incredible teacher, an incredible mentor, an incredible guide, someone that can help you make sense of your environment, it’s an extraordinarily powerful thing.

And so, when I was playing with those LLMs, what I saw was because they were language based, they could understand how I think, which is in words. They could understand the life, especially the work life, that I'm in, which is also in words. And if they could understand those two things, then we would be able to create, essentially, a personal assistant.

And so generative AI is creative, it's incredible in all sorts of ways, but we think the true promise of AI is going to be personalization. And for those of us in the L&D space, the HR space, we think this personalization is going to utterly upend how we think about learning and development and how we support leaders.

It is going to allow us to rethink from first principles. It’s going to unlock us from some of the shackles that we were facing in the past. We're going to hear, later on this afternoon, how AI is so exciting in the field of education. And that is because it’s moving from teacher mode, a sort of one-size-fits-all, to a tutor mode, which is bespoke to each individual person. 

And the educational attainment, the speed of learning in tutor mode, is twice as fast as it is in teacher mode. And if that personalization exists in education, we think it is also going to exist in onboarding, in learning, in understanding, and in performing our jobs as leaders and managers across companies.

And so if there's one thing that I hope you take away from today, whether it's Valence that develops it or not, it's this: we think the technology will be good enough and affordable enough that every employee, not just every leader, not just every manager, will be able to have a personal work assistant.

One that understands them, understands their world, and helps them, makes their life a little bit easier. Now, for those of you who’ve experienced conversations with ChatGPT, we know it’s not that at the moment. It’s going to take some work, but we think the building block is there. Companies like Valence are trying to translate and transform general AI into specific, purpose-built AI.

So in our case, we are combining many LLMs, many agents, to use the technical terms, to perform parallel tasks and deliver a smooth coaching conversation. And I just want to talk a little bit about what that might feel like, because general AI knows a lot about a lot of things.

People have described it as a compression algorithm for the internet. And it's sort of zipped all that knowledge into an incredibly dense set of parameters and weights. And it knows a lot. But it's not designed to know anything about you. And so the one layer that we are adding on to it is a personalization layer.

So we're trying to learn as much as possible about you and your job, and not just be there to respond to a question you might have, but to always be thinking proactively: how can I reach out to you as an individual and help you with your learning goals, your job, or the question or challenge you have? This idea of proactive coaching is so essential.

The second thing that we make sure that our AI coach is focused on is coaching. We want Nadia, we’ll introduce you to her in a moment, to be an expert on coaching, to know when she should dive deep to understand a root cause, when she should tactically help you with something that you're facing, when she should pull a framework that you might have seen before and remind you, or when she should try a new approach.

But she is deeply trained on all the best practices, sort of like a librarian that can choose from the world's library of coaching frameworks. And then, finally, we'll get a chance to hear how Nadia can be bespoke to your company so she can understand your leadership frameworks and the moments that matter in your talent cycle and understand the business challenges that you might face and start to coach, not just on sort of learning and development, but how do you understand the world that you are in, and how do you perform a little better in it?

So this layer of purpose-built coaching, which invokes this context and memory around personalization, around expertise, and around being bespoke to your company, we think this layer has a chance to fundamentally transform general AI into purpose-built AI. This is AI that knows you, and we think this is the type of AI that's going to be incredibly helpful.

Nadia: Hi, Parker. We'd booked time to chat about John. How is that going? 

Parker: Nadia, I'm pretty frustrated. John’s a VP who should be setting big goals, not making excuses. He’s got to be a more ambitious leader.

Nadia: I know you’re frustrated, but we’ve talked about how you fail to set new people up for success when you’re moving too fast. John hasn’t had a proper chance. Let’s problem-solve this. Can I set up 15 minutes for you and me on Friday, when things are usually calmer?

Parker: So we wanted to just give you that quick overview of what Nadia’s capabilities are. I know a few of the speakers have deployed Nadia, and you might hear some stories about the uptake or the use cases. So we want to just give that introduction, and we’ll also give you a sneak peek later in the day of Nadia’s capabilities that we're going to roll out very soon in January.

And I just want to call out what’s exciting when we talk about Nadia: she is able to coach anyone, anywhere. As we speak, she is being used by production leaders in factories at General Mills, Schneider Electric, and AGCO. These are people who never would have had access to a personal coach.

She’s being rolled out on the front lines at Costa Coffee; we’re doing a webinar with them in a couple of weeks. There are great, successful experiences there, at On Running, at Delta. She can be used by knowledge workers, and in regulated or unregulated industries. The great thing is, she can be used all the way from the CEO down. We literally have the CEO of On Running, who’s provided a quote because he’s just so excited about how Valence’s tools have helped him, his co-founders, his leadership team, and then everyone at On.

So it can be used top to bottom. It can be used across sectors. And I think this idea of democratization is just so important. The one thing I’ll leave you with, as I’ve said, is that this idea of investing in potential is so crucial if we’re going to navigate all the changes, the upheavals, the transformations in work.

And we think that betting on AI coaching, betting on having AI that your employees can interact with and test and work through, that is absolutely a bet worth taking. So thank you for joining us today and I'm delighted to welcome Bill up on stage to join us.

The Era of Personalized Knowledge

The internet gave us access to more knowledge than ever before. But in today’s more complex world, we’re drowning in information. We need help to navigate through it, and as Valence CEO Parker Mitchell explains in this keynote, LLMs can provide that help by personalizing knowledge in every aspect of work.

Parker Mitchell

Founder and CEO, Valence

The Future of AI in the Workplace

Any big technology leap comes with a central promise and a lot of rough edges. With AI, the central promise is personal assistants and coaches who support us in every part of our lives — including at work. In this keynote address, Valence CEO Parker Mitchell lays out his vision for how work will change as AI makes personalized coaching available at scale to global workforces.

Parker Mitchell

Founder and CEO, Valence

Key Points

00:00  The Imperative of Our Time

Parker Mitchell: So I feel very, very fortunate in the position where I am. I get to talk to a range of folks, thought leaders like Gillian, like Ethan, like Reid, like Geoff Hinton, who's coming up at the end, people who are talking, at the thirty-thousand, fifty-thousand-foot level, about how this wave of technology is beginning to impact work, beginning to impact us as people, beginning to impact societies. But it's equally exciting, probably even more exciting, that I get to chat with many of the folks joining us today, and I recognize some names: people who are our partners, partners in trying to put AI into the hands of their workers. And you'll hear in a moment that I think this is one of the imperatives of our time: being able to give people, workers at every level, every seniority, every type of job, the AI fluency, the AI literacy to work with the most powerful tool, I think, that any of us have experienced. I think that's the imperative of our time, and it's just such a privilege to have a chance to partner with people who believe equally, in their own companies, in the importance of this and are able to help navigate the sometimes difficult mazes. And so I've had a chance to distill some of these ideas and conversations into a few key themes and thoughts that I'm delighted to share with you today.

01:37   The Future is Here, It’s Just Not Evenly Distributed

Parker: So one idea that, again, I'm privileged to see from this position is that the future is, in many cases, already here; it's just not evenly distributed. It's the classic quote from William Gibson. And one of the things that we're seeing with early AI adopters, individuals in companies who are saying, "Hey, I want to make AI part of everything that I do," is just such an incredible range of use cases. We get to see this with Nadia; we get to hear from them often because they feel Nadia provides them with so many resources. There are creative ideas like: how can I set Nadia up to know each and every member of my team, so that as I'm talking about them, Nadia is able to remember them and give me advice specific to the relationship I have with each of them?

We heard that from one of our users, I think over in Ireland, maybe two or three months ago, and we hear so many ideas of people pushing the frontier there. And I think the lesson I take from this is the importance of companies being able to hear those voices, to make a safe space for people to share their innovations, to draw them out, and then to amplify them. And so as we look around and ask, where is AI, where's the impact of it? It hasn't shown up yet in the productivity statistics, and our belief is that it's going to take some time to show up there. That's about widespread adoption, and we're going to talk a lot about that today. But the spike, the first 1%, 2%, 3% of a company, is already there, and we have to go out and find it now.

Now, as we look into the future, it's really hard to make predictions in a chaotic world in general, but it's especially hard to do so in a world where the future is exponential. I've had a few conversations now with Geoffrey Hinton about this, about the trajectory of technological change he's experienced, particularly in the past 15 years. It's hard to remember that 15 years ago, this work on back-propagating neural nets was considered sort of the backwater of AI, not where the innovation was going to happen. But even he was unable to fully see the potential and how quickly the new innovations, the new models, would arrive. And so, as we look forward, it's also important to realize we can't just extrapolate from the past. Things are going to continue to accelerate, and it behooves us all as leaders, even though we can't predict the future, to try to get glimpses of what it might look like and to set our organizations, our leaders, our employees, our workers up for that.

04:35   A Brief History of Big Technological Leaps

Parker: Now, a very quick digression into the history of some of these technology leaps, because I think it's really interesting to see society's reactions to some of the big ideas, the ideas that clearly were going to change things. And the history of the car is an interesting one. If you look at the newspaper reports from about 125 years ago, as the first cars were being introduced, they weren't glowingly positive. Cars had so many negative externalities: they were noisy, they were dangerous to pedestrians, they got into accidents with each other. And that is obviously true, but a lot of innovations were able to come with them. There were seat belts, there were street lights, there were better rules of the road.

And I think, when new technology comes in, we can get caught up in the rough edges. There are certainly going to be challenges in how AI models produce information, but those challenges can be overcome. The work that we've done, along with many other companies deploying AI in the enterprise, is to put really strong guard rails on the models and make sure they don't hallucinate and don't talk about topics that aren't allowed to be talked about. That was a relatively quick solution for us and others to put in place, and it takes care of problems like hallucination. And so I just think it's important, even if there is a momentary issue that causes someone to hesitate, to know that there will be solutions.

06:03   Generative AI & The Era of Personalization

Parker: So the big idea that we believe is incredibly empowering is that generative AI will usher in an era of personalization. When we talk with our product team about what we really want, it's to have our AI coach, Nadia, understand the mental model of each user, understand how each person sees the world and how they are trying to navigate it, and then support them as much as possible. This is a long-term vision, something that will build over time. But this idea of personalization, of having a coach alongside you, the same way I think every child is going to have an AI tutor alongside them that knows how to help them on their learning journey, means this AI coach is going to help people at every stage of their professional careers and help them learn how to collaborate. And I think that is going to be one of the most profound changes to how work is done, and it's incredibly exciting to be at the vanguard of it.

07:05   Our Origins and Motivating Principles

Parker: Now, we didn't start there; I'll do a quick digression. We were founded with this idea of a coach, but knowing that we didn't have the technology to get there. And so you see a very primitive prototype of our team tools; I know some folks on this call are Valence team tools users. This was a version done on paper, just to see how people would react to it. But I share that because at the center of our mission has always been: how do we help people work better together? How do we help them understand themselves? How do we help them understand others? And through that, how do we help them collaborate better, which is really how work is done?

And so we've taken that ethos and woven it through every product we've built, up to and including Nadia, our team coach. These are our motivating principles. We started with this idea of democratization: everyone in the world deserves a personal coach. We talk about a world where potential is more valued than credential. If you have a growth mindset, a desire to learn, an openness to feedback, and a personal coach like Nadia, that will compound over time, and we think those traits of potential are the ones that should be rewarded socially.

08:32   Reimagining the HR Talent Platform

Parker: Now, as we've come to build Nadia, we've also seen a huge opportunity for something else, which is to help modernize talent programs. And I don't mean the design of the talent programs; I mean the technology behind delivering them. I think many of you on the call, if you think about the tech stack you use for your talent programs, will probably think: yeah, that might not actually be the way I would design it from scratch today, especially with the power of generative AI behind it. And so it's been a great privilege to partner with heads of talent and heads of leadership, people who are thinking about how we can redesign these programs to make them more personal, to take down some of the burden, to make them more fair. It's an exciting journey to help modernize some of the technology behind talent programs.

09:29   Our Mission: Augmentative AI for All

Parker: And then the final motivating principle that we have, and this is what I alluded to at the beginning, is that the change that is going to sweep through the workforce, driven by generative AI, a new way of comprehending our world, which is mainly in the written and spoken word, and reasoning over that world, is going to be enormous. I think we are still only catching glimpses of it and probably underestimating the change it will drive. And we think the solution to this change is to give each and every employee generative AI that is augmentative, not AI that automates and takes away the things they do.

That's very important as well, but this is AI that they will interact with, learn how to use, and co-create with. We and others call this augmentative AI, and we think the imperative of our time is to put this augmentative AI into the hands of our employees, imperfectly at first, but then it will get better and better and smoother and smoother. And so that's why we at Valence do what we do. It's why we're building Nadia, it's the principles behind what we build, and it's just been an enormous pleasure to partner with such great folks to help discover what is going to work in this new future world that we're all moving toward.

Product Demo: What's Next for Nadia

Valence CEO Parker Mitchell shares the next evolution of our AI work coach, Nadia. The power of AI coaching comes from gathering context: about each employee, each team, and the entire organization. Nadia's unique ability to understand and remember what she learns about individuals and organizations allows her to take the moments that matter in the talent cycle (goal setting, performance reviews, the launch of new leadership frameworks) and personalize them to every employee, in every role.

Today, we're building Nadia to better understand team dynamics, not just individual needs. She's ready to help people role-play difficult conversations, based on what she knows about each member of the team. And she's able to help teams close the feedback loop, making anonymized feedback easy to share in minutes and helping users synthesize and understand how they can become better leaders and teammates in real time, not just during official performance review periods. Finally, Nadia has new abilities to integrate into the HR talent calendar, personalizing and assigning initiatives to individuals across regions and functions and giving HR leaders a seamless way to track progress on the critical growth moments across their organization.

Parker Mitchell

Founder and CEO, Valence

Key Points

00:00   The AI Coach for Users & Talent Teams

Parker Mitchell: The core innovation, the core capability of a coach is really that idea of memory, and you can get a sense of what Nadia is able to do because of the context she is able to integrate. And so, on the personal side of things, there are moments that naturally happen.

00:28  Building Context: How Nadia Helps

You might have a meeting with your boss that's stressful, and you talk to Nadia about that. Maybe the NPS score of your unit or your function is going down and you wanna understand, hey, what's wrong there? You might have a poor performer on your team and have a challenging conversation. So Nadia is building her understanding of you through each of these moments.

And then as a company, you are using Nadia to make the talent initiatives that you have personal, to make them personal for each and every person. Goal setting, OKR setting, is obviously personal, and people could benefit hugely from having a personal coach to work them through that. As managers prepare for their mid-year team development conversations, a personal coach can be invaluable to help reduce some of the stress around that. If you have a new leadership framework or new cultural values, imagine the power of 10,000, 50,000 people each having an individualized coach, helping them take this concept that's usually just a set of words on a wall and bring it to life for their particular tiny part of the business. And all those tiny parts add up.

So there's an incredible power there as Nadia learns about you and as Nadia understands the programs and the moments that the company has; the intersection of the two is so valuable. And so we're gonna get a chance to explore a few of these new features. Now, I'm realizing that there are probably a lot of foundational elements to what Nadia is that people might not fully understand. Again, we're more than delighted to showcase that individually or collectively over the coming weeks and months. But I wanna show what's new.

02:08   Building a Personalized Understanding of Users

Parker: So we talked originally about this idea of building out that context, from you to your environment to your team. And I wanna introduce you to a fictional character named Aidan. We picked the name Aidan because Aidan is Nadia backwards, so just in case anyone's wondering if this is real, Aidan is a fictional character. Aidan works for an airline as a station manager at a small airport. And we picked this because Nadia is incredible at coaching such a range of people: knowledge workers working behind a desk, but also frontline leaders, frontline managers who are working with workforces in very diverse circumstances.

So I'm gonna highlight three key new features that we're offering. We began with a profile, building a profile for individuals. As Nadia has conversations with Aidan, she's building an understanding of what he is like, what his leadership style is, what the particular challenges of his role are, noticing that he's a new manager. This is all coming from information she's gathering from conversations; imagine her taking coach's notes after every conversation. She also has hypotheses, ideas of what she wants to learn about him, and she's gonna weave those into the conversation.

03:39  Nadia Knows Your Team

But the big thing people have always said is: I wish Nadia knew my team, not just who they were, but what my interactions with them are, what my trust levels with them are. And so we have introduced this new feature called Your Connections, or Your Team. You can pre-populate that if there's an integration into your HRIS. But what Nadia is doing is understanding who the people you're working with are, what interactions and collaboration you need to have with them, and what her suggestions are, almost like a coach on your shoulder as you go through your daily work and your weekly interactions: suggestions for how you can be a better leader and a better collaborator with each of the people you work with.

So to give you just a quick glimpse of what this looks like, we've pre-populated this with some, again, fictional characters. And for each of these people, Nadia is building this from her memory. She's got a series of pieces of memory from conversations. Again, this is all from the conversations that you've had with her, so she doesn't know what Alex is actually like, but she's able to understand your take on her and give you coaching and guidance through that.

Now, one of the things we find people are doing is saying, okay, I want Nadia to understand all these different people that I work with and give me these proactive tips. But then often there are difficult conversations. When we talk to managers, a conversation resetting expectations on performance is one of the most nerve-wracking moments for them, especially new managers, but even experienced managers. And so we can now go directly into a role play. Role play is one of the most powerful features; people just love this chance to try out a topic, a thread that they go through. And we've really focused the effort on personalizing it.

So the way Nadia will respond if she's pretending to be Alex will be different than how she'll respond if she's pretending to be Rachel. Or if she's playing the role of Paul, she is going to take on as much as she knows of his personality and try to react in a very realistic way. And that's so helpful as people navigate what they think might be a difficult performance or promotion conversation, or any of the types of conversations that they have. So this is a powerful way of adding context: not just I'm talking to Nadia about this one-off and getting ideas back, but Nadia knows my context.

06:19   Closing the Feedback Loop

Parker: Now, the third thing we want to really highlight is this idea of closing the feedback loop. We believe that if organizations were able to have more collective feedback, if individuals were able to get more feedback from their peers, that would be one of the most important elements of helping people learn and grow. And so we built a very simple and easy way for Nadia to collect feedback on your behalf. If you were asking for feedback, you could send out a link automatically, or just copy that link and paste it to send to the people you're interested in hearing from. And an anonymized version of Nadia is going to ask them for thoughts, feedback, and reflections on what their take is.

And this is a fully private, anonymous, confidential conversation. Nadia will synthesize all the results. It's extremely short, takes two or three minutes. And the big innovation is that people find it quite easy to use, just a sort of stream of consciousness: hey, here are my quick thoughts. Nadia's then able to synthesize that, play it back to them, see if that's correct, and then say, okay, this is what I will bring into my report. The early feedback on this is incredible. People actually say, I wanna give more feedback than just the question or two that's there. They love the natural voice interface. And then when you come back as a leader, you get to see your feedback report. So this could be used for formal feedback moments in a business, but it can also be informal: you saying, hey, it's been a couple of months, I'm a new leader, I just wanna hear what's going well and what I could do better. And it's extremely powerful on that front.

So those are a few of the highlights. For those who haven't seen what it's like to have a conversation with Nadia, as a reminder, you can come in and ask about anything. In this case, this is an example of a skill-building plan to build delegation skills. I'm not gonna go through this whole thing, but Nadia is giving you some ideas, asking you where you wanna start, and working with you to refine what that plan is. She's suggesting a four-week plan to build some of these skills over time and inviting you to respond to it. She's refining that and saying, okay, do we agree that roughly these are the things we could do to build this skill? You're looking at it and saying, eh, this is actually not quite right, I wanna refine it a little bit. So she refines it.

And one of the things that's pretty powerful, just at the end, is that she is able to do some of this work on your behalf. So she's saying, let's have another meeting, let's check in; June 11, does this work for you? And then she's able to generate, as you see here, what Aidan asked for: can you draft an email to my manager and another to a couple of the people I'll be working with, communicating my plan? And so she writes this email for you. You can download it; you can copy and paste it. You can make this as seamless and easy as possible.

So these are just a few of the new features: this idea of building out this concept and trying to make it as easy and painless as possible for people to get the kind of guidance and advice they need, kind of on their shoulder, and to gather feedback from their team and their environment so that they can learn and improve.

So we're extremely excited. We've had great early responses from the beta customers that we've worked with. And we just wanted to share this because this is sort of capturing some of the vision that we have of Nadia that's able to be integrated into every part of your business. 

10:30   Customizing Nadia for your Talent Calendar

Parker: Just to highlight one more thing: this is what users are loving. As a person deploying this in your talent program, you have an opportunity to customize it. We call it performance review, but it is probably more your company moments timeline. And this is customizable by individual, so you can say these roles get these moments, this geography gets this moment. We understand all the challenges of running a global business. And you can assign work, assign initiatives to people, you can check to see what the progress is, and you can make sure that these ideas, these best practices that we're often trying to suggest and push out into the business, are well received, personalized, and engaged with, and that people are able to leverage them.

So it's a powerful new set of features, going back to this vision: giving the talent team the ability to push these programs into the hands of users, and to do so in an incredibly personal way; and then having users be able to use Nadia, week over week, day over day, as they problem-solve and work through the particular challenges we're all facing as often overwhelmed and time-poor leaders. Having this kind of coach both frees up time and allows us to do a better job.

Product Demo: What's Next for Nadia

Valence CEO Parker Mitchell shares the next evolution of our AI work coach, Nadia. The power of AI coaching comes from gathering context: about each employee, each team, and the entire organization. Nadia's unique ability to understand and remember what she learns about individuals and organizations allows her to take the moments that matter in the talent cycle (goal setting, performance reviews, the launch of new leadership frameworks) and personalize them to every employee, in every role.

Today, we're building Nadia to better understand team dynamics, not just individual needs. She's ready to help people role play difficult conversations, based on what she knows about each member of the team. And she's able to help team's close the feedback loop, making anonymized feedback easy to share in minutes and helping users synthesize and understand how they can become better leaders and teammates in real time, not just during official performance review periods. Finally, Nadia has new abilities to integrate into the HR talent calendar, personalizing and assigning initiatives to individuals across regions and functions and giving HR leaders a seamlessly way to track progress on the critical growth moments across their organization.

Parker Mitchell

Founder and CEO, Valence

AI: The Untold Story

We sat down with LinkedIn co-founder Reid Hoffman and Financial Times editorial board chair Gillian Tett to go beyond the headlines and get a deeper understanding of the economic and workforce impact of AI. Their message for HR leaders: act now, because the change is already here.

Key Takeaways

1. AI is here to amplify human potential. Instead of focusing on AI as a way to cut costs or reduce headcount, Reid and Gillian see AI as augmentative: a new member of the team that changes workflows and unlocks capacity for human creativity.

2. "AI is the best educational tool we have created in human history," Reid says. There are real challenges around AI's impact on how young people learn and develop the skills they need to enter the workforce. But Reid and Gillian explore how AI can create new models of education and training that personalize instruction in a way that was previously only possible at the most prestigious universities.

3. With AI, everyone becomes a manager. Reid sees a near future where every employee has a team of AI assistants that they manage to get work done. "There won't be such a thing as individual contributors anymore." In this world, the same EQ skills that make people great managers and coworkers become the skills that make them AI super-users.

4. "If you're not using AI, you're going to be under-tooled," says Reid. From leveraging AI to run better meetings to reimagining what's possible to achieve with an AI assistant at every employee's side, Reid and Gillian outline concrete starting points for driving change at scale. Because, as Reid says, in six months, if AI isn't embedded in your workflows, "It'll be like saying, 'I'm a carpenter, but I use rocks, not hammers.'"

Reid Hoffman

Co-Founder at LinkedIn, Manas AI, & Inflection AI

Gillian Tett

Editorial Board Chair, Financial Times

Key Points

00:00   How the Media Covers AI

Parker Mitchell: I thought maybe we could kick it off actually with you, Gillian, and the conversation, the public narrative around AI. How is the media covering the idea of how AI is gonna impact work and the economic and social impacts of that on our livelihoods? And wondering what are the things you think the media might be under-covering, or what are the stories that you think should get more coverage? 

Gillian Tett: Well, I think the media is pretty negative on AI at the moment. A cynic might say that's because they know that their own jobs are threatened. And one of the really striking things about the AI revolution is that it's threatening white collar jobs, not just blue-collar jobs. And so it was very easy for pundits, who are working at financial newspapers, to say, actually, productivity increases are good, which is code for let's have less blue-collar workers if we have automation. And now that white-collar jobs are threatened as well, be they lawyers or traders or journalists, suddenly the narrative has changed. 

So I think there's a real concern about AI. From time to time, there is now a recognition about the extraordinary things that AI can do in relation to life sciences and other research capabilities. But for the most part, it's pretty negative. 

01:17  Now is the Time to Learn How to Work With AI

Reid Hoffman: Well, Gillian's exactly right about the, you know, kind of general media response. And, you know, I think the kind of thing that isn't covered is that, one, look, whatever you're wishing, AI is here. If you haven't already personally found significant use cases that help you in your own work and in your own life, that means you're not trying hard enough. And there's a general reflex to wait for when it stabilizes. Like, well, when they release the new iPhone, I'll check this out. And it's like, no. It's actually improving on an order of magnitude of months. And so you actually have to be, you know, kinda going with it. And it is scary. It is change, changes to white-collar work. It is the case that businesses, when they encounter something new, start with how do we cut, you know, costs.

So, you know, could we take our marketing department from 10 people to two people? You know, can we do this with fewer journalists? You know, hence the point that Gillian was, you know, gesturing at. But the actual thing is that it'll change workflows and change everything else, and that, as individuals, you can already begin to see what that is. And so we're telling, one, the positive story, namely, how do you get into it? What should you be doing? What should you be experimenting with? Two, what are the ways that we are essentially experimenting with expanding our capabilities as individuals and as offices, because those capabilities are really there.

Like, you know, for example, one of the things that I regularly use deep research for is as a research assistant on a broad variety of topics. I can now think much more broadly and synthetically about a number of different kinds of areas, relating them to what I'm doing, and I have an immediate research assistant. Now you say, should I get rid of the research assistant I have? No. Actually, frequently what I'll do is generate something and send it off to the research assistant saying, hey, could you track down this, this, and this about this, and maybe use, you know, deep research to follow up on these things and so forth. You know, as an iteration and as a thing to do. And I think that's the kind of positive story that's really important.

And the other thing that Gillian was gesturing at is that the negative story we've kind of, you know, wrapped ourselves in, in the West, is actually ultimately, you know, damaging to us. It isn't to say that we don't pay attention to the risks. It isn't to say don't pay attention to the issues. But the question is, AI is coming. It's like we're in a river, and it's going downstream. You can say, I don't like this river. I'm gonna throw my oar up, and I'm gonna yell. Like, okay. That's a really good way of navigating a river. Right?

So how do I start steering? What do I learn? What's going on in terms of the currents and that kind of thing? And that's the kind of thing that we need to be doing as individuals, as industries, and as societies, and, obviously, of course, you know, for our audience, as companies.

04:34   Augmented Intelligence

Gillian: I strongly agree. And in fact, one thing I'd say concerns the reason we call it artificial intelligence. I'm sitting in King's College, Cambridge, which is where Alan Turing was based; that's actually his portrait up on the wall behind me, and literally about 100 yards away from me is where he used to live and did much of his work. It's called artificial intelligence because almost exactly eighty years ago, Alan Turing devised the Turing test and basically spawned the term artificial intelligence.

But I often wonder how different it would feel if we called it instead augmented intelligence or accelerated intelligence because the way I see it is that it's not so much about replacement all the time, although sometimes it is, let's be honest. It's more about being an additional member of a team. And I say that because a few years ago, I gave speeches saying that there was one thing that AI could never do, which was to tell a really good joke, and therefore comedians had job security. And it turned out that I was totally wrong because the reason why I thought AI would never be able to tell a joke and challenge comedians was because the pre-transformers models of AI, which were basically path-dependent based on logic, essentially could only produce very basic, crude jokes like knock-knock, who's there jokes, or wordplay or Christmas cracker jokes, and they weren't funny. Post transformers, where, essentially, you're dealing with probabilistic observation, they can produce jokes that are funny about half the time. 

And the dirty secret of humor is that actually comedy writers for late night TV shows are only funny half the time. And the reason they know that is because those jokes are written not by individuals but by teams who chuck jokes into the mix and bat them around, and they finally produce a late-night television comedy. And adding an AI agent into that mix doesn't necessarily replace the humans. It simply adds to the jokes that are basically swirling around and gives more checks and balances. And humor is fascinating because humor in many ways is the very definition of cultural anthropology, which is what I studied for my own PhD and have done academic work in, because humor can't be predicted by an algorithm. It depends on contradictions in culture, on ambiguity and silences that we don't like to talk about, and very tribal behavior. So the fact that AI can now master even that, but can do that by being part of a team, is really important as a parable for what we might see emerge in many professions.

07:12  AI Assistants for All

Parker: Yeah. Let me double click on that, Reid, because you've talked about having teams, individuals, having assistants, teams having assistants. I'd love to hear you expand on that vision for what work might look like. 

Reid: So let me start with just a couple of near certainties to predict in the near future. And near future is, like, a small number of years or a medium number of months. One, there won't be such a thing as an individual contributor anymore, because essentially every person will have a small to large team of agents facilitating what they're doing, and they will be managing that process with those agents. That's one thing. So it's almost like the kind of managerial skills you might exhibit today when you're using a deep research agent or other bots in order to do stuff. That's gonna get deepened.

And another one, and this one actually is a prediction I made in MIT Technology Review a number of years ago, is that every single meeting we do, we will actually have an AI agent, not just for transcription and notes, all of which is happening now, but where that AI agent would be going, oh, you know, Gillian and Reid are talking about this. Do you realize you should also talk about Alan Turing in the following question, or you should refer to the following thing about the Turing papers in the King's library? You know?

And so, you know, that kind of participation for information, for follow ups, for questions, you know, will then become like, it's almost like, wait, wait. Can we have this meeting? We don't have the AI agent turned on yet. We're gonna be so much less effective if the AI agent isn't here in the things we're doing. And, you know, part of this amazing transformation that we're about to go into, this is, like, only a few years into the future. And, you know, in fact, when you're doing white collar work, if you're not using AI tools, you'll be under-tooled. It'll be kinda like saying, I'm a carpenter, but I use rocks, not hammers. Or, you know, like, it's kinda gonna be a standard part of what is professional competence, and then that will spread through the entire team. 

So I think this kind of massive capability change is coming so fast that you need to get engaged, and you can't, like, say, we're gonna send these three people off to study it and come back in six to twelve months and tell us about it. I think that's too slow. So I think what you want to do is ask, what are the ways you can experiment quickly? So, you know, simple things that you can do, that I've done with organizations I'm on the board of and others, are to make sure that there's kind of like a weekly, biweekly, you know, fortnightly review where everyone says, here's what I've tried, here's what I've learned, here's the things that I'm doing. Right? And then you can also similarly go, and here's some of the things that we should be doing, you know, as a group.

For example, when I'm working with my groups, one of the things I do is take a transcript of the meeting and feed it into AI with some relatively standard prompts: is there anything we missed? Was there an important question? Was there an important source of information? Was there a follow-up? And this set of different things. We had the meeting, we did the transcript, we just put the transcript in, and it gives us a very quick response. It can even be before the meeting's over, that's how fast this can be, where you go, "Oh, right. Yeah. Yeah. We should do that too." And so do anything to start experimenting and seeing what fits our company culture, our market position, the way that we operate in our groups, and not just, hey, let's assign Sue or Fred to go generate a report on this that we'll look at in x months.

11:12  AI Adoption Across Industries

Parker: Gillian, how are you seeing those adoption steps, sort of steps forward, steps back, steps sideways? How are those playing out, either in the conversations you have with leaders or even potentially within journalism and the Financial Times itself?

Gillian: Well, I wear several different hats, in that I am overseeing this college, King's College, Cambridge, where academia is potentially being very challenged by AI in many ways. The good news is that the life scientists and the other scientists I deal with in the college are suddenly being given extraordinary wings to do the kind of research at speed that most had never dreamt possible. So they are totally positive about AI.

Many people in the humanities are pretty negative about AI because they can see that it's basically either going to undercut their role as teachers or, in their view, make, you know, a whole generation of students pretty stupid because they're cheating with AI and not using their brains. I mean, as it happens, Cambridge and Oxford are probably the most ChatGPT-resistant types of education in the world because they rely so heavily on small face-to-face interactions, what we call tutorials and supervisions, where students have to write essays and then talk about them for an hour or two. And that is AI-resistant in many ways, or rather AI-enabled, because you can use AI to research your paper, and then you're forced to discuss it as a human being, using what you've seen from AI. So I actually expect that going forward, we may well see more spoken exams, more teaching patterns of the form that we have in Oxford and Cambridge.

In terms of journalism, you know, I was actually meeting with the CEOs of most of the big British media companies yesterday, moderating a discussion with them all about this very topic. And the message is that they are very threatened by the fact that AI companies are scraping their data with no monetary reward and often no attribution. And they're basically demanding some form of compensation for journalistic content being used to train models, which I think is entirely fair. What form it will take is unclear, but there needs to be some way to get the media ecosystem compensated. Otherwise, there will be no media ecosystem and no content in the future to scrape. But when it comes to actually providing news, they're taking very different attitudes. I mean, the Financial Times is not using AI to write stories in any formal sense, maybe to do some research, but not to write stories. But it is using AI to aggregate news headlines, for example. And I suspect you'll see a lot more of that going forward.

And as far as CEOs are concerned, as Reid says, many of them have barely started thinking about it yet, but they need to quite urgently, because if they don't, they will get overtaken. And apart from anything else, they won't actually, you know, familiarize themselves with both the benefits and the risks around it.

14:15  Personal Intelligence & AI Companions

Parker: Reid, I wanna pick up on something Gillian mentioned, which was the tutorial model at, you know, Cambridge and Oxford. It's famous. It's so successful. One of the companies that you cofounded a few years ago with Mustafa Suleyman was Pi, personal intelligence. Can you expand on that vision of having this idea of personal intelligence alongside you?

Reid: So one of the things that's another kind of startling prediction, and this will be a little further out than the earlier predictions I made, is that within a small number of years, when we have a kid, we will actually have them have an agent that will go with them through their entire life and, you know, learn and help and so on with them. By the way, we will adopt that as adults sooner, because we won't have all the complexities around, well, what is the set of things around the child. And so part of that is having, essentially, a companion. And part of our idea when we built Pi (pun intended: it's a personal intelligence, but, you know, apple pie, etc.) was training for this. And in my earlier book, Impromptu, I called it amplification intelligence, although augmentation intelligence is good too.

When we're gonna be amplified, you don't just need IQ, you also need EQ. You need conversational capability. And part of that is to actually be a very good, you know, kind of companion in the things you're doing. And so Pi, I think, you know, kind of set the standard for all of the other GPT-4-class models on how you put in EQ, how you have it be a conversationalist and ask questions, and how it helps with a variety of those kinds of, you know, life navigation challenges. And it applies to work too, because social intelligence is part of the meeting, part of collaborating with teams, and that was important. But, you know, people don't necessarily think of that in its kind of top role. But that's what Inflection and Pi are about. Now we've seen other agents, you know, like Anthropic with Claude and others, beginning to develop along a similar line.

16:33  Navigating Concerns About Job Displacement

Parker: How should company leaders, CEOs, help their workforces navigate some of these natural threats that people will feel as they see parts of their job, not just the augmentation parts, which I think people will be excited about, but bits that they maybe spent ten, twenty years becoming an expert on, watching AI do parts of that as well as they can. I think that's gonna be the crux of change management in companies.

Gillian: The really interesting thing for companies right now, I think, is actually, in many ways, the entry-level jobs, because what AI is replacing above all else are a number of the boring entry-level jobs that graduate trainees in particular would do for a couple of years to have, effectively, a white-collar apprenticeship into the wider world of work. And by that, I mean, you know, sort of early-career engineers, early-career lawyers, paralegals, or in the case of journalists, you know, your classic grad trainee journalist. And when I was running, you know, bits of the FT, I used to make all the new entrants do really dumb, boring stuff like write the markets column, which frankly could have been done by, you know, automation twenty years ago. But we still had people doing that, often because it was a really good training ground for learning how to handle data and information.

So one of the questions is gonna be, how are we gonna train the next generation through apprenticeships and entry-level jobs? The flip side of that, though, is that if we start calling it augmented intelligence rather than artificial intelligence, and start trying to train people how to use it to make their jobs more effective, we may actually start to see people not only using it to be more productive, but creating whole new categories of work that we haven't even imagined yet, because that's been the story of calculators and computers.

Reid: Actually, for young people right now, part of what I advise them to do is become as expert as possible in AI and come to organizations saying, I can be part of your AI transformation. Like, I will front-end this, I will use this too. I'm an AI native. I'm part of generation AI. I actually think that's part of how the transformation is gonna happen. And by the way, when you get to, well, how are we doing apprenticeship and so on: AI is, like, by just many, many miles, the best educational tool we have created in human history. Right?

So the question almost gets back to what Gillian was talking about in terms of the Oxford-Cambridge model. We can now have essentially a kind of quasi version of it, not the same as an Oxford or Cambridge, and in some ways better. We can have one that is interacting one-on-one with every individual and helping them, you know, kind of get better at the way they're thinking, what they're doing. And so you say, well, how do we get people up the curve? Well, actually, using AI and learning from AI and then using AI to do the work is part of what I think is gonna happen. And then, precisely as Gillian was gesturing, we'll say, hey, as opposed to having twenty lawyers, we only need four. But then we're gonna figure out other new things that we need to be doing, or can be doing, that are really good for how we do business: risk mitigation, analysis, contracts, etc. And then the work will expand in different ways, just as it has in every adoption of technology.

While there are concerns and things to navigate, the human amplification is, like, just simply amazing. We have line of sight to a medical assistant on every smartphone running at under, you know, five pounds, five dollars per hour, that is there 24/7 for everyone who has access to a smartphone. We have a legal assistant, a tutor, etc. All of these things are part of that kind of amplification. And, you know, how do we get there? Like, I'll end with one of the ways that I think white-collar work will be changing, which is we already have coding copilots that essentially people with engineering mindsets are using. I think every white-collar job will have a coding copilot assistant, and part of how you're doing journalism, teaching, evaluation, analysis, accounting will be actually, in fact, having an AI system that's doing coding with you in order to accomplish those missions.

Gillian: Yeah. The key question we face is that we know that, you know, any innovation can either unleash our demons or the better angels of our nature. That applies to electricity. It applies to guns. It applies to nuclear power. It applies to anything that we've created. And if we look at social media, the reality is it unleashed our demons for the most part. They overwhelmed the better angels of our nature. I do think that AI, agentic AI, does have ways to potentially unleash our angels, and the question really is how. And I would argue simply, to be totally biased, that mixing artificial intelligence or accelerated intelligence with anthropology intelligence, i.e., a sense of our own humanity, the other AI, is one way to go.

Parker: I think that's a terrific ending, a terrific inspiration. The mission of our time is to ensure that we steer AI's adoption by humanity to unleash our better angels. What a terrific conversation. Reid, Gillian, thank you both so much for making the time to join us. We really appreciate it.

HR When AI Joins the Org Chart

We keep hearing that AI is joining the org chart, but what exactly does that mean? In this roundtable discussion, Lucien Alziari (former CHRO, Prudential), Diane Gherson (former CHRO, IBM), and Larry Emond (Senior Partner, Modern Executive Solutions) explore how AI is becoming a new part of how work gets done, how the best leaders are onboarding AI into their organizations, and what the future of talent looks like.

Lucien Alziari

Former CHRO, Prudential

Diane Gherson

Former CHRO, IBM

Larry Emond

Senior Partner, Modern Executive Solutions

Key Points

00:00  AI Joining the Org Chart: A New Reality

Das Rush: Thank you so much for joining today, Larry, Diane, Lucien. We've had a lot of conversations about various topics. And this might be one of the biggest ones at the moment, at least, that I've been hearing, which is, you know, AI is joining the org chart in 2025. Maybe it already has for a lot of organizations. So I wanted to start there getting each of your takes with what does it actually mean for you when you hear somebody say AI is joining the org chart? 

Larry Emond: Well, you know, as you know, my central life activity is bringing CHROs around the world together in meetings. I had three meetings in May, in Denver, New York, and Boston, with about 45 CHROs total. And in all three of those meetings, of course, we spent time on AI and HR. And in all three of those meetings, we ended the meeting talking about what is the future of this thing and what are we gonna call the function, because the term human resources, or even people, is already antiquated. Right? And so my point is that all those CHROs have fully embraced the idea that AI agents in particular are already becoming part of their workforce. And, of course, most of them are personified with names like Nadia, and they're becoming, you know, part of the team.

Diane Gherson: You know, when I was at IBM, we had agents. We gave them names, but we weren't receiving emails from them. Now we're receiving emails from them. So, you know, they're becoming a little more personified. But I think at the end of the day, you know, back to the point that Larry made, there is a need for us to think about how to fit them into how work gets done. And we haven't really thought through, when it's on the org chart, as you say, what that means. You know, they don't have to do this to us. We don't have to be back in the industrial era, where you organize work around the machines. Right? If you put the human at the center, then maybe the org chart would look very different than if you didn't.

01:59  CHROs as Chief Work Officers in The AI Era

Das: Yeah. And, Lucien, I wanna get you in here because I think this leads in really well with something that you very presciently said at our last summit in November. And you had said, like, you know, CHROs are gonna become Chief Work Officers because AI is this new way that work is getting done. And for HR leaders, it's all about thinking how work, technology, and talent come together, and that you find most people aren't thinking enough about the work in that equation. How should CHROs right now be thinking of the work? 

Lucien Alziari: Yeah. I think the most encouraging thing is that, even over the last few months, I see this notion of the work becoming much more mainstream in the discussion. And I think catchphrases like, you know, there's an AI agent on the org chart, okay, that's fine. But like you guys, I ask, well, what's really underneath that? Because an org chart is basically how work gets done in the organization, how an organization organizes itself to do the work.

And I still feel that the future, and this is my view of the future, is basically an optimization game, which we can manage very dynamically, between the people and talent that we've got; the capabilities that technology can bring, and many of those are now real, whereas before they were like a gleam in the eye, but now they're scalable, they're with us; and then the third part of the equation, understanding of the work. The big void for me is still: who's taking control of the work? And I don't want to be misunderstood, because people have said, oh, okay, so it's about the work, not about the people. No. It's about both. Alright? And it's about the technology. And I don't see the heads of HR, or people, or whatever you wanna call them, becoming the Chief Work Officer exclusively, but I just want us to kind of own that space. Nature fills a vacuum, and if we don't step up and own this space, then somebody else will. And I'm not sure that's the right answer for the organization.

04:08  Organizational Change with AI Integration

Das: So how do organizations change as AI joins the org chart? You know, Lucien, you made that point that the org chart itself is really just how you're architecting your organization to get work done. How is that architecture changing as AI comes on? 

Lucien: There has been a trend, but I think it's gonna accelerate fast now in terms of minimizing the number of layers in the organization because that drives speed and adaptability. I think AI supercharges that. So I think there will be fewer layers in organizations. I think there's gonna be a big rethink about the role of managers, as we get much more granular and forensic in terms of understanding how work gets done. And I think some of that, the work is gonna get managed and done with heavy technology use. 

So, I think there's gonna be quite a debate about what's the role of managers, what's the span of control that we should expect of managers. The old model of the middle manager who managed the work, coached the people, and communicated messages for the organization: I think that sort of, trio of themes is gonna get rethought. And so, fewer layers, fewer managers, probably bigger spans of control, but more focused on the development and communication than on the management of the work. 

Diane: Yeah. I would second that. I actually authored an article in the Harvard Business Review this month about that topic of middle management. And I think, you know, more than anything, now that we've got AI, we have to be thinking about reframing for our people what work is about and what we expect of them. What is the end game? I mean, you said optimization, Lucien. Maybe it is, maybe it isn't. But, you know, let's be clear. The old game was to reduce headcount and to outsource. That was an industrialization game. The new game has so many more possibilities, and work does not have to be fixed, right, into fixed jobs. We're looking for much more fluidity. Okay? So let's start talking about what that looks like. Maybe more variation in the kind of work that you do. Maybe it's your skills that matter more than your job, etc. 

So, I mean, having those kinds of conversations, that's what we expect of middle managers now. I thought, you know, the Shopify CEO throwing the gauntlet down to his employees was exciting, but there's a lot of fear, and middle managers have to think through: what does this look like for you guys, my organization, my piece of the organization, and what could you expect? And I think that reframing, framing things for people, is a role that middle managers still need to play, and often they're not. But they don't have to do the coordination work and all of the stuff that you mentioned, Lucien, on passing along messages. You can go to a town hall and, you know, directly talk to your CEO from whatever country you're in. 

So things have changed dramatically for what middle managers do, and there will be fewer of them, but I think their jobs will become actually more important. 

Lucien: Many of these discussions that we're having now are very reminiscent of when the Internet came in. You just think about, how did the Internet change the world? And a lot of jobs went away, and a lot of jobs were created. And so, at a macro level, I'm actually quite optimistic. Now clearly, if you're caught in the transition, it can get very challenging, but that's why I think organizations need to be helping employees and potential employees adapt to this. And the adaptation isn't gonna be deep technical skills, because the half-life of any deep technical skill is getting shorter and shorter. But the human skills, I think, are gonna sustain. 

So, I would always encourage CHROs to sort of go back and say, well, what really matters in this debate? And then, you know, what are the things that you can do? Because I do think this is, as I said before, a huge invitation to just be creative and invent playbooks because there is no playbook at the moment, and that to me is really exciting. 

08:21  Driving AI Adoption Through A Culture of Experimentation

Das: And that might actually be the most important message for anybody to take away right now is, like, there is no playbook. Diane and Larry, I'm curious what you're seeing. What are the best organizations doing, and how are they getting creative? Especially when it comes to this moment, you know, our theme is, like, driving adoption. We're really in this moment where the task is to help people find their ways to these tools and to be in good partnership with them. What are you seeing the best organizations do? 

Larry: I think the ones that are leaning in the best, and I know it through the HR organizations, with all these CHROs I know, it's the ones where, let's say, their head of talent or talent management has almost an AI-first mindset. So you're testing every possible tool. You try Nadia. You try another one. You try another one. You mess with some stuff. You program yourself in ChatGPT or Microsoft Copilot. You're technology first, AI first. And those are the companies that have gotten way out ahead on this. And it's interesting. 

There's a woman my firm actually placed as the Head of Talent in a company. And she's been a guest a couple of times at CHRO meetings, and she does a little thing on some of the experiments, you know, she's kind of a mad scientist, that she's been working on. And every time she's done it, the reaction by the CHROs is: that's who I need, that persona, as my Head of Talent. That's the future. And I think it's those that, let's say across all the COEs, have an AI-first kind of mindset. They're just gonna get way out ahead of everybody else. 

Diane: I think it starts at the top, but it also starts at the bottom. Right? You need to have a culture of experimentation. So you're saying to people, you know, we want your ideas. And I've seen companies do a great job of crowdsourcing, saying, hey, what's the best use of AI for us? So I think that's something that helps. I think the other thing is that the more HR uses AI to give people agency, the more people start to understand that it's actually a really cool thing. Because, you know, in the old days, HR used to do things as the experts. Right? Here's the program, and we're rolling it out. We'll do change management. But now we can cocreate with our people using AI. And even if we have, you know, a hundred thousand employees, we can get all of their responses and have AI summarize the pros and cons, the different points of view, in a matter of minutes. 

And people can see the different points of view, then you can turn it into a poll, and people can respond and say what they liked and didn't like. And suddenly, you're using AI as a force for good in terms of people having a sense of agency, and it gets them excited. It's not something being done to them. It's something that they're part of. So I think that's part of creating a world where AI belongs to all of us and lets us all participate and learn and get something out of it. And I think that kind of a mindset is really important to get it going. 

Lucien: If I can sort of pile on, because I just so agree with what both Larry and Diane have said. I see two approaches with AI, and I'm not a technology person at all. But I think some of the focus is, okay, which jobs are they gonna replace? It's kind of an efficiency thing. It's a new way of driving productivity. And, look, we're business leaders. We can't deny those possibilities, and we should go after them just like any other business leader does. But the most interesting stuff is, well, what can we now do that we always wanted to be able to do but never could? 

So we're here because of our interest in Nadia. Now imagine five years ago if we'd have said, how about let's give everybody in the organization a coach. Alright? Or let's give everybody in the organization a work assistant so that they can get some of their work done more efficiently. And we'd have said, that's a brilliant idea, but we can't afford it. Now we can do both of those. Alright? And those are just the first thoughts that we've had. 

12:25  AI’s Challenge to the Talent Pipeline

Larry: I think one of the big challenges going ahead is how we're gonna develop senior people, and I'll use an example. I did a couple meetings last year with Chief Legal Officers, just kind of for fun as an experiment. I thought they were gonna be horrible. They were actually great meetings. Not as much love in the room as a bunch of CHROs, but they were still pretty warm and funny. I found them quite entertaining. We talked about how the law firms, the ones they buy their services from, are openly saying, you know, we're gonna need a lot fewer junior lawyers because AI agents will do discovery and research and blah blah blah. To which, of course, the question is, well, how are you gonna develop senior lawyers? Right? And that's gonna be true across the board. There's gonna be the same thing in finance. It obviously is the same thing in HR. Those are fascinating challenges. I mean, opportunities, but also real challenges. 

Diane: You know, I think this question about the loss of the entry-level rung on the ladder is probably the first and most important question for HR leaders to be thinking about, because it's real, it's here today. You know, maybe with some of the stuff you described, we're not sure when that's gonna happen, but this is here already. And so the question that we've got to ask ourselves as HR leaders is: is that okay? Is it okay that we're having a whole generation of people graduate from university with no access to entry-level roles because, you know, those roles are being taken over by AI over a shortish period of time? And what does that mean for our pipeline of talent as an organization? 

That is something we could design into our organization. We don't have to be victims of this. We can be in the driver's seat and say, this is how we want it to work. We are going to have an intake of this many people, this kind of training, these kinds of, you know, hands-on roles, and this is going to be the role of AI. We don't have to say AI is gonna take on all the, you know, entry-level legal work or whatever. But, back to your earlier question, Das, I'm not seeing enough HR people actually say, I'll take that one on. Right? We're gonna design that. Don't worry. It'll be good. Right? It's more, you know, tell us what the technology can do and we'll hire around it. 

Lucien: Yeah. I think one of the worries, and we're still in the very early innings of this debate, is that if you just take what AI gives you, you're taking what any of the companies you compete with can take. You're just taking industry standards. So you're not building anything that's competitively advantaged. And so I'm still actually quite optimistic about this combination of humans and technology, because if you're just taking the B-minus AI answer that applies across the board, in many cases, that's fine. That's enough. But it's sure not gonna make your business win competitively. And so I think it does come back to the creativity, the critical thinking, the curiosity of humans to ask the right questions and pose the right problems, and then the technology is there to help solve them for you. But the technology is not gonna help your company figure out how to win versus others. I still think that's the human piece, and that's why I'm still overall quite optimistic about this. 

15:57  AI Coaching for Manager Development

Das: A lot of times we've got in the habit of doing something for the sake of doing it. Right? Like, we're gonna deploy technology to deploy technology. We're gonna do performance reviews to do performance reviews, and we've kind of lost sight of, like, why do we do performance reviews? It's to make us better at the work. Why do we deploy technology? It's to make us better at the work. And I think that's a common theme. And this disruption of AI is really kicking up a lot of that dirt. You can't hide anymore. It's really clear when you're doing work for the sake of work. 

So what are you seeing there with kind of especially AI coaching and the role of it in helping organizations both bring in these opportunities for entry level talent, but also get back to the purpose of, like, driving business impact and doing work that's actually meaningful and moves the needle? 

Diane: You know, the role of managers changed dramatically with the pandemic, and it changed again with the generations coming into the workplace. Managers need to have empathy. They need to understand the whole person. They are dealing with people who are working both remotely and on-site, so they don't have the same ability to pick up on things the way they might have before. They need to resolve conflict, not by putting people in the same room, but in different ways. And so managers in particular need AI coaching to help develop a collaborative work environment and so forth. 

So I think it rises to all the new challenges that managers are facing today. I was very concerned, you know, when I was looking at it through the pandemic, because so many managers were falling apart and just burned out, not able to do it, and people didn't want to be managers. And I think now you've got a different situation, because you're enabling them in a very new way. 

Lucien: I just think it's really exciting what Nadia can do, the whole field of AI coaching. And I think it started on the development side, and that is the right place for it to start. But I think over time, it's gonna be around coaching high performance, not just coaching as an end in itself. And so I think it is going to go into the adjacencies around the management of the work, the management of performance, all of those areas. And again, that's always been the work of HR, but I think this is a great new capability for us to become even more effective at it. 

18:31  Final Advice for HR Leaders on AI

Das: What's your kind of bottom-line advice? Don't lose sight of this in the next six months, twelve months. 

Larry: I'll just make one comment to CHROs and other senior HR leaders that might hear this. You have to take the time to meet these agents yourself. You have to spend time. You know, as you know, I've had Nadia as a guest in many CHRO meetings. We always do this kinda last thing in the day. Imagine you're a frontline manager in your company. What would you ask? And when they engage with her, I'm always shocked how many of them are like, "You're kidding me." Well, that's on you for not having already taken time to see what's possible with these agents. 

Diane: Well, I would double down on Larry's comment about learning yourself, like, get your hands dirty. You know? Because it's really important to be on top of the technology. And I would add to that by saying there are some really, really good, you know, Substack and LinkedIn newsletters that you can find that keep you on top of the latest technology and the thinking about it. There are way too many in the HR/AI space, but among the well-curated ones, I think David Green's, in the talent space, is exceptional. And if anyone isn't reading that every month, they should be, because it really does go quite broad in terms of looking at all the different research that's been done and the impact of AI on work and on people and so forth. So I highly recommend that. I just think it's an important part of every HR leader's day to be staying on top of what's happening in AI, in HR, and in the work. 

Lucien: For kind of a 60,000-foot perspective, think about what really makes the best CHROs the best CHROs, and I think there are three things that come to mind. One is that they just have an intellectual curiosity about, sort of, what's really happening here, and they can get to the underlying cause of issues. The second is that they have a passion around the business and how they can bring all of the capabilities that are available to them to help their businesses succeed and remember that that is their fundamental purpose as an organizational leader. And then the third is they're always able to bring this kind of outside-in perspective so that you keep your own organization in the right context, and we don't become guilty of sort of wishful thinking. 

And so here's a great opportunity now to say, it's for you to show thought leadership because nobody else in the organization owns this right now. Right? So it's an opportunity for you to lead and to say, what's the potential of these amazing new capabilities? But your job is not to deploy technology. Your job is to help the business win, and I think the next path to help the business win is this really granular understanding of the work and how the work best gets done with this combination of people and technology. And it's unlocking a whole new game for us. So I think we just need to go there. And nobody's got a playbook, so think about it and then lead on it. 

Das: Fantastic note to end on. So, Diane, Lucien, and Larry, thank you so much for joining today.

HR When AI Joins the Org Chart

We keep hearing that AI is joining the org chart, but what exactly does that mean? In this roundtable discussion, Lucien Alziari (former CHRO, Prudential), Diane Gherson (former CHRO, IBM), and Larry Emond (Senior Partner, Modern Executive Solutions) explore how AI is becoming a new part of how work gets done, how the best leaders are onboarding AI into their organizations, and what the future of talent looks like.

Lucien Alziari

Former CHRO, Prudential

Diane Gherson

Former CHRO, IBM

Larry Emond

Senior Partner, Modern Executive Solutions

Beyond Chatbots and Search Boxes

Jeff Dalton, Head of AI and Chief Scientist at Valence, has authored more than 100 research papers and holds multiple patents in search, natural language understanding, and question answering. In this conversation, he shares his unique perspective on the pace of change in AI capabilities, lessons learned from a career spent breaking the barriers of what's possible with AI assistance, and a deep dive on the architecture that powers Valence's AI coach, Nadia.

Jeff Dalton

Head of AI and Chief Scientist at Valence

Key Points

00:00  The Excitement of AI Coaching & Virtual Assistants

Das Rush: To get started, like, what has made you so excited right now about this space of AI coaching and this potential we have to build, like, true virtual assistants and coaches? 

Jeff Dalton: What's really exciting to me, and I think to a lot of people in the field right now, is that the pace of change is just absolutely amazing. There's talk about, like, kind of exponential growth in terms of the AI capability that we're seeing. The pace of change between what we saw three years ago and what we have now, it's like ten years of change almost overnight. And what that means is that what was impossible even a few months ago is suddenly now possible. 

And what really makes that exciting is the fact that many of us have had this vision, as you mentioned, for a long time, twenty years, ten years, and some people for a lot longer, some people for fifty or even sixty years, of having this AI assistant that can actually be with us. And suddenly, many of these things are much closer to reality in a way that feels very tangible and exciting. My vision is also that assistants matter because they help us accomplish a goal that matters to us. Right? So in coaching, whether that's dealing with a coworker who's struggling, whether it's helping you achieve your next career goal or your next big promotion or your five-year plan: what's a coach that's gonna help you along that path? Right? 

01:27  Applying AI to Build a Virtual Coach

Das: Now that you've come over and are working on Valence and building Nadia, kind of what are one or two of the ideas that you worked on earlier that you're finding now are really applicable to building this AI coach? 

Jeff: Yeah. Certainly. So I've been around for a little bit. I did my PhD probably fifteen years ago now, really working on intelligent search systems. How could they know more about us and know more about the world? And we did that using something called knowledge graphs, which were a structured, kind of geeky way to encode facts so that machines could understand them and do question answering. And I continued that when I went to Google. I worked on health search. And so we'd take something that says, "I've got gunk in my eye," and turn that into something that we could give an answer to, leveraging a knowledge graph. Right? And, hopefully, make your health search a little bit more reliable in the process. 

And what we quickly realized is that what was in the search box wasn't enough. We needed what was outside of the search box. We needed an AI assistant that had a plan, that could talk to us, that could reach out to us. And so I went to work on the Google Assistant, and we tried to build some of those technologies and tools. At the time, the technology just wasn't there. But the underlying elements of having machines understand us, and leveraging domain experts to really deeply understand the world, are all fundamental components of the future AI assistants that we're building. 
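As a loose illustration of the knowledge-graph idea Jeff describes, facts can be stored as subject-predicate-object triples and queried to answer simple questions. This is a minimal sketch only; the entities and relations below are invented for the example and do not come from Google's actual health graph:

```python
# Minimal knowledge-graph sketch: facts as (subject, predicate, object) triples.
# All entities and relations here are illustrative, not from any real system.
triples = {
    ("conjunctivitis", "is_a", "eye condition"),
    ("conjunctivitis", "has_symptom", "eye discharge"),
    ("eye discharge", "common_name", "gunk in the eye"),
}

def find_conditions_with_symptom(symptom: str) -> set:
    """Answer 'what condition causes <symptom>?' by scanning the triples."""
    return {s for (s, p, o) in triples if p == "has_symptom" and o == symptom}

# Map the colloquial phrase back to the canonical symptom, then look it up.
canonical = next(s for (s, p, o) in triples
                 if p == "common_name" and o == "gunk in the eye")
print(find_conditions_with_symptom(canonical))  # {'conjunctivitis'}
```

The point of the structure is exactly what Jeff notes: once free text like "gunk in my eye" is grounded to a node in the graph, question answering reduces to walking typed edges.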

02:48  Breaking (and Rebuilding) Virtual Assistants

Das: You know, you mentioned there, Google, like, your medical assistant. And one of my other favorite assistants that I know you built was a virtual kitchen assistant. Could you tell that story? 

Jeff: Yeah. So not too long ago now, around 2021, my research group competed in the Amazon Alexa task bot challenge. And our goal there was to do something that hadn't been done before. The goal was to have something that you could cook along to do something real in your kitchen. So not just talk to your assistant, but actually, see what your system was doing, use a screen, use rich interfaces to be able to do something in the world, and to have a coach there with you along the way. And along the way, we realized that pretty much everything that we had for the current assistant was broken. So we broke the speech, the voice, the real time voice. None of that existed. And so we had to build it, and we had to build a whole new open assistant toolkit. 

What was a really key challenge there, and really transformative for us, was that we were right on the cusp of the LLM transition. We had just hit first-generation GPT-3, where we could take in a recipe and actually transform it. So you're like, "I now live in Colorado, transform this for high altitude," and the model could make that possible. Really kind of wow moments that demonstrate just the potential for the assistant to be adaptable at the next level. 

Solving Key Challenges in AI Coaching

Das: You know, you mentioned breaking the assistant. Right? Like, having to kinda break every piece of technology to build the assistant. And I think that ties into something that's been kind of a refrain at Valence, which is, you know, AI has a strong central promise of being able to personalize technology to us and to make it far more natural to use. 

At the same time, it's not fully formed, and there's a lot of rough edges that we're smoothing out right now and a lot of problems that need to be solved and worked on. What are some of the most important problems that you and your team right now are working on as you build Nadia? 

Jeff: Like any new technology, we still have a lot of rough edges. We're still working out a lot of the details of what the capabilities are, and I can talk to you probably for a whole hour just about the different kind of challenges and aspects of what we're going to need to do to build the coach of the future. So I'm just gonna give you a little bit of a taste for what that looks like and then maybe talk a little bit about how Nadia works today. 

So the first thing is how can we build an AI that you can build a relationship with, that you can trust over time so that Nadia can grow with you and adapt with you in the long term? Second, how can we scale Nadia so that Nadia is not just a coach for you, but it's a coach for everyone across all organizations, across all different types of domains? And the third challenge is how can we build an adaptive and proactive coach that's going to change and become more personalized with you over time so that your Nadia experience today is very different from your Nadia experience six months from now or a year from now in ways that are fundamentally different than what we have. 

And as we're kind of working on those challenges, I wanted to talk through a little bit about where we are today. Here's an overview of the overall architecture of Nadia, at a little bit more of a technical or conceptual level.

05:50  Nadia’s AI Architecture & Capabilities

Here at the base, again, we have foundation models. So those are the large language models that you probably have heard about. There's not just one large language model. It's many different types of models, small models, big models, state-of-the-art models, multimodal models, reasoning models, all these different models that, when you're using Nadia, you're using a suite of the best-in-breed, state-of-the-art language understanding building blocks that go into building the next-generation assistant. 

The next level that we have is memory. So Nadia has a couple different kinds of memory: memory about you, memory about your organization, and memory about coaching. Memory about you covers the things you expect a coach to remember. You expect them to remember your past conversations, the last time you shared your calendar with them, the documents and information that you uploaded, as well as information that you've shared about the people in your network. 

For your organization, Nadia is custom-built for and bespoke to your organization. So it knows your OKRs, your company values, your training documents. The coaching that we have with Nadia is bespoke to your organization and leverages the right frameworks, so the coaching is most effective for your team and for your overall organization.

And lastly, for me, really fundamentally, coaching. Nadia is nothing without her deep, really differentiated expertise from expert coaches. So we work with a set of expert coaches to curate knowledge, to curate situational understanding that you can then use in future conversations, knowing the best frameworks to use, so that it's not just a generic coach that you would have off the shelf. On top of that, memory alone is just not enough. Memory is only as good as your ability to use it. So on top of memory, we have an intelligence layer, or hypothesis engine, for building a plan for this conversation and for future conversations, pulling in just the right elements of memory at the right time to make that plan and to have a successful conversation. 

On top of that, we have the interface layer. We have Nadia's core capabilities to execute that plan: today it may be a skill-building plan, tomorrow it might be a reflection on feedback from coworkers, all orchestrated as part of the conversation, of course, in a rich multimodal interface experience. And around all that, of course, we have the safety and guardrails, because one of the fundamental pillars of Nadia is that this is an enterprise tool. We want people to feel safe and trust it, so you can talk openly, and if a topic isn't allowed, Nadia will stop you and steer you away in ways that are gentle, that are coaching-approved, and that maintain the fundamental trust and safety of the coaching experience. 
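One way to picture the layered stack Jeff walks through (foundation models at the base, then memory, then an intelligence/planning layer, then the interface with guardrails) is as a small pipeline. This is a conceptual sketch based only on his description; the class names, memory fields, and guardrail rule are ours, not Valence's actual implementation:

```python
# Conceptual sketch of the layered coach architecture described above:
# memory -> intelligence/planning -> interface -> guardrails.
# All names and logic here are illustrative assumptions, not Valence's code.
from dataclasses import dataclass, field

@dataclass
class Memory:
    user: dict = field(default_factory=dict)          # past conversations, calendar, docs
    organization: dict = field(default_factory=dict)  # OKRs, values, training material
    coaching: list = field(default_factory=list)      # expert-curated frameworks

@dataclass
class Coach:
    memory: Memory

    def plan(self, message: str) -> list:
        """Intelligence layer: pull in just the relevant memory for this turn."""
        steps = ["understand: " + message]
        if self.memory.organization.get("okrs"):
            steps.append("ground advice in org OKRs")
        if self.memory.coaching:
            steps.append("apply framework: " + self.memory.coaching[0])
        return steps

    def respond(self, message: str) -> str:
        """Interface layer: check guardrails first, then execute the plan."""
        if "salary data" in message:  # stand-in rule for a disallowed topic
            return "Let's steer back to topics I can safely coach on."
        return " | ".join(self.plan(message))

memory = Memory(organization={"okrs": ["grow team skills"]},
                coaching=["GROW model"])
coach = Coach(memory)
print(coach.respond("How do I set goals this quarter?"))
```

The design point the sketch tries to capture is Jeff's comment that memory is only useful through the planning layer: the `plan` step decides which pieces of memory reach the conversation, and guardrails wrap the whole exchange.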

And with that, I think we'll probably be talking about some of the new and exciting kind of product features that we're building on top of this architecture. 

Das: Jeff, thank you so much for joining us for this, and looking forward to seeing what you and the team build in the next coming months.


WPP x Nadia: Transforming Employee Support

WPP, one of the world's largest creative agencies, isn't just adopting AI; they're building it with their proprietary platform, WPP Open. But Lisette Danesi (Global Corporate People Lead, WPP) knows that it's not just the platform that matters; it's an employee experience that makes people feel safe, supported, and seen. She shares how WPP's partnership with Nadia has made that experience a reality across their entire global HQ workforce, the remarkable results they've seen, and detailed tactics for driving adoption and change at scale.

Lisette Danesi

Global Corporate People Lead, WPP

Key Points

00:00  WPP's AI Transformation Journey

Lisette Danesi: Hello, everyone. I'm Lisette Danesi, and I lead the corporate people function at WPP. For those less familiar with us, WPP is a global creative transformation company. We work across marketing, advertising, communications, and consulting. We're home to 108,000 people around the globe in more than 110 countries, all connected through our London headquarters. 

Now, it's fair to say AI has moved from buzzword to boardroom priority. At WPP, it's reshaping the way we work, the way we think, and, yes, the way we lead. We know the business case for AI is compelling. It can drive efficiency, creativity, performance, but the real challenge is human. How do we engage our people in this transformation, especially at scale? Because without employee adoption, even the smartest tech won't deliver the results. 

00:59  Empowering Employees with AI

Lisette: At WPP, we're not just using AI. We're building it. We've developed our own proprietary platform, WPP Open, which powers how we create content and ideas for our clients with greater speed and insight. We also work with the big tech firms to extend our capabilities even further. But one thing has been true from the start: AI only works if our people are on board. And people don't come on board because of the dashboard. They come on board because they feel safe, supported, and seen. That's why we focus not just on the tools, but on the employee experience surrounding them. And that's where our partnership with Nadia has really come into its own. 

01:41  3 Steps For Piloting Nadia: Listen, Learn, Iterate

We started by listening, because we knew we couldn't make assumptions. And what we heard was clear. Access to support and coaching wasn't consistent, especially in our service centers and in junior-level roles. Employees wanted help navigating ongoing change: restructures, new leadership, new tools. And they wanted clarity. How do my goals connect with the bigger picture? How do I stay focused and resilient in my day? 

We also agreed upfront we wouldn't get everything right the first time. So we built our approach around three principles: listen, learn, iterate. We began our pilot involving 400 colleagues around the world across our Global People team and our Malaysia-based delivery service center. Malaysia was an intentional choice for us. It's operationally vital and culturally diverse, so it was a really strong test case for scale and localization. 

The feedback was overwhelmingly positive: 95 net promoter score, 82% engagement, and a strong uptake across all levels and all functions. So people describe Nadia as supportive, as relevant, and something that they wanted to keep using. They were surprised it was that good. That early trust gave us the confidence to keep moving forward. 

From there, we expanded and rolled out Nadia to all our global HQ people. We prioritized teams going through large-scale change, like our enterprise technology function, where new managers really would benefit from the coaching and the support to lead through the complexity of what was happening in the business. We also worked closely with regional leaders to tailor Nadia for local context: not just language, but tone, mindset, and behavioral norms, so it would show up relevant to people where they are. Colleagues across functions, up to the C-suite, are using it to rehearse challenging conversations, test responses to sensitive emails, and gain clarity, all in a private, judgment-free space. In just one of several onboarding sessions, over 800 of our 1,500 employees in India joined. That kind of response only happens when something is truly resonating on the ground. 

We're seeing the same impact elsewhere. In Japan, for example, our CPO there shared with me that she was using Nadia to prepare for sensitive conversations around change, and how helpful it's been in slowing her down, helping her reflect, and crafting more considered responses, all in the local language. This shows us that it's not just a tool for early-career talent. It's something that's supporting even our most experienced leaders around the world. 

One of the most impactful applications of Nadia so far has been goal setting. It's an area that often feels like an administrative chore that comes around once or twice a year, but we've framed it to make it useful, to help people feel energized about the goals they're setting and feel like they can align what they're doing with the business. Employees tell Nadia who they are and what they do, and it suggests goals tailored just to them. 

04:48   Driving AI Adoption: Lessons Learned

Lisette: So what have we learned about driving adoption? Firstly, relevance matters. We didn't roll out one generic version. We really worked with teams on the ground to tailor the experience. Secondly, leaders matter. Our senior leaders are using Nadia, and they're sharing what they've got out of it, and they're sharing how they're using it with their direct reports. It's giving others permission to try. And thirdly, storytelling. It works every time. We're showcasing and lifting up the real voices of our employees, real examples, and not shying away from the parts that may need adjusting as we move forward. 

We've also uncovered barriers. One of our future-of-work experts highlighted a growing gender gap in AI adoption at a recent conference. In early data, women are already less likely to be using AI tools, and that's concerning, often because they see it as cheating or they feel unsure how to engage with it. So we're now focusing our work in our employee communities to address that, building confidence, breaking down myths, and ensuring we don't allow another digital divide to emerge. And we realized something else. A lot of people don't actually know what coaching is. So we're also focused on educating, not just enabling Nadia, because democratizing coaching means making it both accessible and understood. 

Another benefit: the insights. Because of the way Nadia works, we now have anonymized data on what our people are struggling with most and what they're actively working to improve. And what comes through loud and clear: it's not the technical skills. It's setting clear and measurable goals, communicating with clarity, and active listening. These are foundational human skills, but they're also critical leadership skills in an AI-enabled workplace. We're using this insight to fine-tune our L&D focus and support leaders where it counts. 

06:39   Democratizing Coaching for All Employees

Lisette: At WPP, Nadia has helped us make coaching accessible to thousands who never had it before, deliver global consistency with local nuance, and foster trust, where teams learn from each other and show up with curiosity. But we know adoption isn't the same for everyone. In the UK, we've piloted a generational bridge workshop, and what became clear is that different generations adopt AI differently. Older colleagues weren't resistant. They simply hadn't been given the space and the confidence to explore. Now that's part of our focus: to show up where it matters, across countries, functions, genders, and generations. We're not waiting for perfect. We're learning in motion, with feedback as our compass. 

As we look ahead, our ambition is to embed Nadia into everyday working life, not just the big events like performance management, but the small, meaningful moments that define our work and our lives. And we'll listen, learn, and iterate every step of the way. This transformation isn't a one-time event. It's a continuous relationship between people, purpose, and possibility, and Nadia is helping us make that relationship stronger, at scale.

WPP x Nadia: Transforming Employee Support

WPP, one of the world's largest creative agencies, isn't just adopting AI, they're building it with their proprietary platform, WPP Open. But Lisette Danesi (Global Corporate People Lead, WPP) knows that it's not just the platform that matters; it's an employee experience that makes people feel safe, supported, and seen. She shares how WPP's partnership with Nadia has made that experience a reality across their entire global HQ workforce, the remarkable results they've seen, and detailed tactics for driving adoption and change at scale.

Lisette Danesi

Global Corporate People Lead, WPP


Nadia x AGCO: A Global Coach for a Global Organization

Colleen Sugrue, Head of Global Learning and Organizational Capability at AGCO, shares how her team brought scalable, sustainable, and personalized coaching to AGCO's global workforce with Nadia, supporting every employee around the world in over 29 languages. Colleen has priceless advice on driving global adoption of AI coaching:

1. Find solutions that scale, take the pressure off HR workloads and budgets, and are tailored to the needs of your workforce.

2. Find your team within the organization to drive adoption, from IT, to executive champions, to comms partners.

3. Find out what works by customizing AI to meet your people in the moments that matter in your talent cycle.

Colleen Sugrue

Head of Global Learning and Organizational Capability, AGCO

Key Points

00:00   Talent Development at AGCO

Colleen Sugrue: My role is all about talent development. Right? So, basically, it's ensuring that AGCO has a pipeline of people that can do the job both today, but also the job of the future. Right? And as we all know, the job of the future is kind of a challenge to keep up with these days. Right? So I develop people. And whether that's leadership development or functional trainings or whatever that might be, we are invested in making sure that our employees have the knowledge, skills, abilities, and mindsets to be successful. 

00:38   Scaling Coaching with AI for Global Impact

Colleen: The biggest way that I feel AI's impacted my role is that it's given me the ability to scale things that, historically, just weren't an opportunity. Right? I think about coaching. Right? That's the biggest piece for us. Coaching was this really small, niche kind of market that we used to have to heavily invest in. It was only available to a small number of people within the organization. But through our partnership with Valence, we've been able to really scale this globally in a way that we can both sustain and that offers great value to our employees. 

01:14   Why AGCO Explored AI Coaching

Colleen: It was this moment in time for me where we really needed to look holistically across our talent development strategy. Right? We had a lot of things in pieces, a lot of things in pockets, but we truly, truly didn't have a global strategy. So I was really looking at everything at the same time. Right? So today's story is about coaching, but just know that it was part of this broader story about how we're really, truly scaling and adopting talent development strategies around the globe. 

01:48   AGCO’s 3 Criteria for L&D Tools

Colleen: For me, there's kind of, like, three key categories that everything had to fit in. Whatever we were gonna do, it had to be scalable. So it had to reach a global audience, and it had to reach a large global audience. Right? So we're about 23,000 people globally. So we had to be able to hit that audience. 

It had to be sustainable. So if we were gonna implement it, it wasn't going to require more people resources or a whole lot of money. Like, it had to sustain itself long term for the future. 

And then the third piece is, it had to be about the employee. We had to put the employee first in the decisions that we were making and make sure it was gonna be something applicable to them. 

Here's an AI solution that could speak all of the languages that I need, because we have a global audience. We have 13 core languages within our organization, and we have other languages that people speak. Right? So we needed to consider that. We needed to consider this large audience. And with AI, this coaching tool was able to customize to that employee. Right? And it wasn't just this class that I was gonna send people to. It was this always-on, always-available, 24/7 guidance that was going to be the support for the employee. 

03:14   Designing the Nadia AI Coaching Pilot

Colleen: It was really important for me, though, that I pulled in different groups of people from around the world, from different cultures, who could also test in different languages. So I think there were about 50 of us in that trial, and we set realistic goals: please try to use it once a week, test out the language capabilities. And we gave some prompts at that point, some conversation starters to test with her and see how it goes. Then we would meet monthly and check in: how is it going? What are you learning? What are you hearing? At the end of the trial, we asked everybody basically that NPS question: would you recommend this to be used here at AGCO? In our case, the answer was absolutely. 

04:12   Encouraging Experimentation with AI

Colleen: We'd be talking and somebody would say, oh, you know, I really wish, or I gotta think through that, or I wanna talk through this, and I would say something like, hey, why don't you give Nadia a try? Why don't you try to have that conversation with Nadia and see if she can help with this? And without a doubt, they would come back to me, typically via Teams, with: I went and I talked to Nadia, and it was amazing. Thank you so much for recommending that to me. 

04:39   Global AI Coaching Rollout at AGCO

Colleen: So I kinda laugh a little bit at this point, because the truth of the matter was, my team was in a position where we needed a win. Okay? Like, we needed a win that was gonna be global, that could have big global impact fairly soon, because, like I said, we didn't have a global approach. We didn't have a global strategy just yet, and so much of our work was piecemeal. We had some things in some regions and not in others, and licenses were managed all over the place. So we really took a moment and said, well, what's our biggest bet? And it was Nadia. 

So it was this moment for us where we felt like we could hit the biggest number of people. We had something globally. You know, we have our voices survey. We get feedback from the organization on what they want, and we knew they wanted more of these opportunities. So we tied straight into that. And we said, look, we heard you from our voices survey. We know that you want more development opportunities. We're working on it. Step one, here's something we have right now. And we went big with it for that reason. 

05:47   Integrating Nadia into Talent Initiatives

Colleen: So there were kind of two major things that we did. One was implement Nadia as a way to help coach managers based on the feedback we get from our employee engagement survey. That was a direct partnership with my peer in that space, where she said, well, yeah, we could do this, and I think we should. And we worked and said, well, here are the results of the survey. What would we want a coach to talk about? And then, how can Valence partner with us to customize that? Right? 

So now it's really amazing. After the voices survey, when a manager gets their results, there's a link in the communication that says, would you like to talk about your results with Nadia and get some custom, personalized feedback on what you should do with your team and how you should action plan for this? And they can just click it, and there's Nadia, trained up and ready to support them on that. So just an amazing partnership there. Again, find your team, find your partners, because they matter. 

You know, with my other peer, Doug, on the other side of the fence in talent management, we started partnering on performance conversations: how can Nadia help coach managers to have better performance conversations? Same deal. We pulled in Valence. We said, this is what we're trying to do. How do we customize the tool? So now, anytime it's time for a performance conversation, Nadia is the support in those communications: hey, would you wanna prep for a conversation? Do you have one that's particularly difficult? You know, Nadia can help you role-play for that.


How to Successfully Deploy an AI Coach

It’s one thing to buy seats for an AI tool, but it’s another thing entirely to successfully integrate it into your organization. In this deep dive on AI adoption strategy, Jonathan Crookall (Chief People Officer, Costa Coffee), Jennifer Carpenter (Global Head of Talent, ADI), and Maree Prendergast (Global Chief People Officer, VML) share the use cases that led them to roll out AI coaching with Nadia for their frontline workforces, tactics for successful onboarding, and lessons learned on the path to global adoption.

Jonathan Crookall

Chief People Officer, Costa Coffee

Jennifer Carpenter

Global Head of Talent, Analog Devices

Maree Prendergast

Global Chief People Officer, VML

Key Points

00:00  AI Coaching for Leadership Development and Recruitment

Das Rush: What are the AI initiatives that you've each been leading in HR? And then kind of what has the journey been specifically with AI coaching over the last twelve months? And then I wanna go to kind of what's changed specifically in the last six. 

Jonathan Crookall: So the journey that we've been on at Costa Coffee with AI really started with using some of the normal tools, like Copilot, to help with the efficiency of general activities: meeting efficiency, supporting document generation. I mean, the Costa context is, you know, we employ just over 20,000 people across multiple markets, and where we've found the most application of Nadia right now is in our UK store manager population. 

So we've got around 1,500 store managers operating across stores in the UK. And their situation is, you know, they don't know in the morning when they're gonna come in and find out that their coffee machine's down, or, you know, Sally hasn't shown up for work today, or whatever. So being able to schedule coaching support for a store manager in a coffee shop is really tricky. 

So one of the significant advantages we've had of using the Nadia tool is just that ability for store managers to take the time when they wanna take the time to get the support that they need to be a better leader. So we've had a big focus on leadership, and Nadia has been the big unlock for that level of leadership in Costa. 

The other use case that we've got, on a different platform, is helping us with recruitment. Again, in the high-volume areas, we're using AI-based tools to support our barista recruitment at the store level. And, again, we've only just started that, but it's already generating some great benefit, both for the candidate and for the store managers who are doing the recruitment. 

Maree Prendergast: For us at VML, which is part of WPP: WPP obviously has, you know, an enormous interest in AI. We have our own proprietary platform called WPP Open, which our employees work in and use as their operating model every day to deliver to clients, and, you know, that helps us deliver work in a more efficient way, both operationally and creatively. And so we wanted to find a way to use AI tools within HR as well, to help take people on that journey. It was part of an adoption process, not just for this particular tool, but across the board, like a cultural shift. But specifically, when we got introduced to Nadia, we were super excited about the opportunity to integrate it into our professional development and, I'd say, leadership development strategy. 

So key for us, and probably the best use case we have of Nadia, is that we've embedded it in a program that we call Thrive at VML, which is all about career conversations: those ongoing and annual career conversations. We have a bespoke tool that sits within Microsoft Teams that our employees use, and we embedded Nadia into that app within Teams. So it's right in the middle of the workflow. We didn't use it as, hey, here's a link that you can go and check out and get some coaching. From the onset, we wanted to embed it in the workflow. 

And I think that's probably, you know, the key takeaway from us. So as managers are looking at 360-review information, of course they can use other AI tools to create summaries, etc., but Nadia really supports with: how do I have this conversation around this particular person's development aspirations and some of the challenges? Equally, our employees can figure out how to talk about their own aspirations, some of the challenges they have, or some of the goal setting they have. And we found it to be a really engaging partner. 

Das: Yeah. I love that, in the flow of the work. And, Jennifer, how about you? At ADI, what's been the journey over the last twelve months? I know you spoke at the summit last November; what's specifically changed in the last six months for you? 

Jennifer Carpenter: At ADI Analog Devices, we are a deeply technical workforce. So we employ thousands of engineers. And over the last twelve months, we've been rapidly experimenting with a number of generative AI tools, Copilot 365, GitHub. We've actually customized our own agents within ADI to support things as simple as writing feedback in our annual performance cycle. What's really unique about Nadia and AI coaching specifically is we now have over 4,000 employees across 31 countries utilizing Nadia. And you know I'm all about measurement. 

So we introduced Nadia as just one more example of how employees can build up their skills and experience partnering with generative AI. And we've been asking them: why are you using AI coaching? What's in it for you? And what's interesting is, two-thirds of that audience tell us they're using AI coaching for fresh perspectives on their professional or personal development. Another half tell us they need a problem-solving partner. So they're looking to solve problems. 

And then a solid third are looking at Nadia, or AI coaching in general, as a leadership gym they can visit anytime to help them develop their leadership skills. 

Das: Just like a few bicep curls, like, staying active. 

06:07  Surging AI Adoption Over the Past 6 Months

Jennifer: Like a few reps at the gym. But when you think about where we are over the last twelve months, and even the last six, we're looking across all of these experiments as just that: reps in the gym to help prepare the workforce for what's coming next. In the last six months specifically, I've seen adoption do a little bit of a hockey stick, regardless of the tool we're talking about. 

People are more willing to experiment and to use AI than they were even six months ago, and we're actually tracking sentiment within ADI. I think of this as their level of optimism that AI is going to help them improve the quality of their work or their own productivity, as well as their own confidence in their ability to use these tools. And in the last six months, we've seen a ten percent increase in positivity and agency, you know, that personal belief in one's ability to navigate the tools. We've actually seen that increase in as little as four months. 

Das: Wow. Jonathan and Maree, is that similar to what you've also been seeing in the last six months? Or for you, what have you noticed is the difference? 

Jonathan: So I think in our case, at Costa, the way I can track it is just by adoption, just looking at how people are adopting it. And as you've said, you've heard a couple of case studies that we've shared from some of our store managers who have used Nadia to great effect. We've obviously shared those stories within Costa as well, and that in itself is generating adoption. 

I think you're right. There's this kind of sense of mystery, and what is it gonna do, and all the rest of it that goes around the topic of AI, because it's always talked about as kind of a cerebral, theoretical construct. And I think the more you can get it into practical usage, with people sharing stories like, I used this tool and it helped me with the leadership problems I'm dealing with, or with generating better performance in my store, those are the stories that are getting people on board with trying it for themselves. 

Das: I have to say, more than probably any other, I share the story of your store manager, Sharon. One of their great managers, she went to what was one of the lowest-performing stores within Costa and spent four weeks with Nadia as a coach, kind of daily sessions: how do I turn this around? How do I motivate people who are unmotivated? And within four weeks, in the peak holiday season, she had taken it from lowest-performing to top-performing store. So just this idea of how, on a very local level, this can transform the impact of a single manager. It's just one of my favorite stories. 

Maree, how about, how about you? 

Maree: I think, similar to Jonathan, we've been looking at adoption, and we have about 2,500 people actively using Nadia on a daily basis. As I mentioned, you know, the ability to adapt to different geographies around the world has been a huge game changer for us, and being able to bring this at scale to countries far away has really been very beneficial for people. 

09:32  Driving AI Coaching Adoption: Strategies

Das: Yeah. That's amazing. And I think, you know, some of what I'm hearing in this is that, with this AI coach, you can suddenly, as HR, reach into parts of the organization that used to be kind of outside of your reach. But even if you're reaching people, that's not necessarily the same as them adopting tools. So if you're all really pushing to reach frontline managers within the different programs that you have, what's been effective in driving adoption? 

Jennifer: So what's interesting is, when you ask someone to try something new, some people are game and others are greatly skeptical. So one of the first tips that I always recommend people follow, because we did it the other way and it didn't work out so well, is start with an invitation instead of an expectation that someone will be required to use a tool that's new and different for them. 

We found that when we invited people to participate, we had much greater engagement. But we also wanted to listen to those who weren't game, for whom this wasn't their cup of tea, and we asked them why. And we found three patterns in our feedback on why people weren't adopting it. By far, the number one reason people are not adopting this, or so they tell us, is time. They just don't have it. And they view adoption of any new tool as one more thing they have to make room for. And when I say it was number one, it was, like, seventy percent of the people we asked who said time is the killer. 

So how we address that is in our marketing of these invitations. It saves you time. Find time back in your day by utilizing these tools to increase your productivity and to give you back time. So addressing that right out of the gates. The second reason why they said they're not adopting these tools is they're not sure how to use them or why they would use them. So if you can clarify how these tools can help, how AI coaching can help someone, that addresses the second reason why people aren't adopting. And the third is just trust. Would I trust a human to do this better than this tool? Is my data protected? Is this safe to use? So again, addressing time, what's in it for you, and the trust factor are the three largest barriers to adoption that we found that have been, that have made the difference for us in helping to increase our adoption. 

Jonathan: I was just gonna add a couple of other perspectives on it. I think one of the drivers of adoption for us is the flexibility: the possibility and the option to choose when you wanna use the tool and how you're gonna use it, not having to schedule a third party to meet with you in a particular venue or at a particular time. And similar to Jennifer, one other thing that we've used at Costa is almost a sort of rarity factor. So: can I get signed up for this program? People kinda go, oh, I've not heard of this. What is it? How can I get on board? 

So I think that's also driven adoption in some of our populations where people just get curious, and they're in your "why not" camp, Jennifer, rather than "I'm not interested." 

Maree: So I think there are probably three things I can add. The first one was really leadership-led credibility. One of the things that we did with Nadia early on, after we had done our own piloting of the tool, is that I gave it to our exec team and our CEO. And I said, you know, why don't you just try it out and see what you think? And it actually gave it a lot of credibility when we did introduce it, because we had our CEO introduce it as something that, again, was very optional for people, but an available tool. He really framed it as a strategic asset for people, rather than, as you said, something mandated or something that you have to adopt. And I think that overcame a lot of that potential skepticism. 

And in the local markets, finding the people who are adopting it, having them share some of their stories about how they're using it, and then feeding that back to the population has created some really engaging moments. We recently had a global event, what we call a career hack, like a career day where everybody could explore all sorts of different things. AI was a big theme, and so we had a lot of people share their experiences in local markets. That sort of word-of-mouth adoption has helped as well. 

14:17  Integrating AI into Talent & Leadership Development

Das: One of the other things that we've seen being really key to, like, an effective rollout of AI within HR is that it's really tied into what you're already doing with talent development, with leadership development, with performance management. And that's something I think all three of your companies have done really well. 

So I'd love to click into some of these initiatives. And, Jonathan, I'm gonna start with you. 

Jonathan: I think the thing I would point to is that we are a performance-driven organization. We have numbers and data and facts flying around on an hourly, daily, weekly basis. So people are very honed in on: how can I perform better, how can I see the performance flow through? And I think that's where we've most connected the use of Nadia, particularly with our store manager population, to say, well, in order to help you with your performance, your manager is gonna be part of that story, but, actually, we've got this additional tool that's gonna help you drive performance in your store and drive a better experience for your customers. 

So I think that's the biggest single hook that we've used within Costa. We're on this journey, actually, which I think is not unfamiliar to many organizations, where we had this kind of post-COVID hiatus, a lack of investment, and we've now been rebuilding all of our leadership frameworks off the back of that: okay, we've now got some new tools that support leadership development, across every type of leadership development. And, you know, this is one that's very specific to our frontline teams. 

Das: Jennifer, at ADI, you know, you launched kind of globally to all employees and then followed up with some of your kind of specific integrations into talent moments. Talk a little bit about your journey there, with the performance management cycles and building into kind of ADI's value and action model. 

Jennifer: Sure. So, again, you can build it, but will they come? What we've done is run a monthly drumbeat of workshops that are timely and relevant to what employees and managers need to be doing. That might be performance discussions that are already happening: oh, hey, by the way, come to our workshop, we can show you how AI coaching can support you through that. We just rounded out our midyear point: let's reflect on our goals, let's refresh our goals, let's talk to our AI coach about how we can ensure our goals are relevant. We just wrapped our engagement survey. 

So our next workshop's gonna say, come talk to Nadia as you have conversations with your teams about your team engagement scores. We've also utilized Nadia's capability to make it as relevant to our employees as possible by training Nadia to speak about our culture and values in action. So Nadia knows the language in which we speak when we say you should be reflecting ADI values in action through your daily performance as we consider you for advancement, as we coach you for performance. 

So I do see Nadia as an extension of a preexisting talent strategy that just accelerates and allows it to hang nicely together.

17:42  Supporting Adoption Through Inclusive AI

And I think most importantly, at ADI we are a global workforce, and we have diversity of culture and language. Something that I believe has driven adoption significantly is Nadia's ability to speak to employees in their native language. And where we see super users, and we define super users as those using Nadia three times more often than the average, they are engaging with Nadia in their local language. 

So I think it's removing that friction of engagement in the conversation that we know Nadia and AI coaching support so well. But the even better part is that AI is so inclusive of, and comfortable for, people operating in their native language. When we widen the aperture of who's using it, we see about seventy percent are managers and thirty percent are individual contributors. But when we look at those super users, it's about fifty-fifty. Tools like Nadia are not just created for leaders or managers; we can now open up access to leaders at all levels. And we're gonna continue to study these super users, including by gender. We are seeing more women as super users than men, which is, again, an optimistic statistic, because many studies are showing women are less inclined to engage and experiment with AI. Yet, as it relates to AI coaching, we're seeing the opposite. 

Maree: And I think, to all the things that Jennifer was just saying, we are a global organization too. We're in, you know, fifty-three different markets. So we also have the language barrier, and that capability has been very helpful, because, as I mentioned before, we haven't really been able to penetrate those markets successfully in the past with something at scale. But the other thing, and Jennifer just touched on it in speaking about women, is people from diverse backgrounds, and not just diverse backgrounds. When we did go through our pilot, we tested in those areas first, because we wanted to make sure that, you know, people felt extremely comfortable, that it was inclusive, etc. And I think that has actually supported adoption significantly because, you know, there is an intimidation sometimes in talking to a coach or talking to a peer or talking to someone else. And Nadia has just proven time and time again, actually, that she's quite an empathetic coach as well.

One of those aspects is: let's have the conversation with your manager. Let's have the feedback from your peers. Let's make sure you really get that. Nadia has supported managers right in the flow of that career conversation work within Thrive, not only helping them, you know, use the feedback in a constructive way and position it in a constructive way, but helping them avoid misunderstandings. And it's certainly made it efficient. Right? I mean, before, that would take a lot of work for a manager to sit down and do all that.

And, obviously, Nadia has created an efficiency there so they can get to more of their direct reports. And it's the same with employees who might feel, I can't really talk to my manager about this, or I don't know how to talk to my manager about this. Sometimes it's very challenging for people to have difficult conversations, and Nadia has really supported them by creating a constructive environment.

21:18  AI Coaching for Change Management

And then another place: we had a mandated RTO of four days a week. Obviously, in some markets this was fine; in many, it was not. And we had a pretty short time frame of about three months to get people back and comfortable. I'm sure anybody who has tried to do this has met with the same level of resistance that we did globally, for all very valid reasons. And so we had an idea to use Nadia to help people understand, and have a reflective ability, or perhaps a third party, if you like, to talk to them about how they might, you know, introduce this back into their day to day. And, also, if they were having challenges, how they could talk to someone about it, either within HR or their managers.

So we loaded Nadia with all of our RTO policies. We loaded her with our FAQs, etc. We taught her about all of that. In addition, obviously, she was already very familiar with the values of the company, etc. And so she was able to help coach people through it and frame it the way we intended: when we messaged RTO four days a week, we messaged it with significant flexibility. Right? But that is lost in the translation of an email, or of a leader coming out and telling everyone: you have to be back in four days a week. And I think Nadia really helped give the perspective that there is flexibility. If there is a need for flexibility, it's there, and this is how you get it in a constructive way, and this is how you approach it so that people can understand your particular individual circumstance, etc.

So that was really helpful as well. It really helped people with sort of sensitive transitions, which we hadn't anticipated originally, but she worked really well for that too. 

Jonathan: There's a kind of fairness and consistency in the approach that the tool can be used for, which I think really helps to drive trust in the technology, rather than in a human who is naturally gonna have some biases and inconsistencies in the way that they operate.

We've got the message across to people that this is actually a way of reducing bias. You can't remove it entirely, and people are worried about whether this is gonna create some bias or a difference of view coming through the tool. But we've demonstrated to people that this is actually a way of reducing that level of bias that we naturally carry with us. So that's one of the key messages that we've got across to help with trust.

24:00  Future Vision for AI in the Workforce

Das: We've talked a lot kind of about your journey, what it's taken to drive adoption. I wanna look forward now. Twelve months from now, we're all back together again, either virtually or in person. We're having this conversation again. Where do you hope that you are within your organization with AI? 

Jonathan: So in terms of where Costa would like to be on AI in twelve months' time, I'd love to see that we've got some of the integration that's been talked about on this call happening, so that there's an interchangeability between different mechanisms and different processes and different tools to support people in their development, whether that's in person, virtual, or AI. And I guess the biggest single thing will be that we're seeing performance improvement as a result of adopting the tools, whether that's, you know, my career performance or actual business performance, because, ultimately, that's the goal that we all have.

Maree: For me, you know, with AI tools in general, I think we just hope that they'll become a normal part of the workflow, right, that they're not something sitting out there that you might adopt here and there, or an on-demand tool, if you like. They become more habitual, and that's across the board, regardless of what AI tool it is. I think specifically with Nadia, you know, I would hope that she becomes more of a habitual growth partner for people.

Jennifer: I would just add, to the colleagues on the call today, that Nadia is gonna be joining our org charts. In the first six or twelve months, we see a lot of individual use cases, and I'm monitoring team use. I think success will really be where people see Nadia as a part of the team interaction, not just an individual coach but a resource for the team. And we'll see performance and productivity and engagement increase as a result of that.

Das: Yeah. From experimenting to adoption and now from adoption to impact and kind of results in the next twelve months. Jonathan, Jennifer, and Maree, thank you so much for joining and kind of sharing your stories. It's been just a fantastic conversation. So thank you so much.

How to Successfully Deploy an AI Coach

It’s one thing to buy seats for an AI tool, but it’s another thing entirely to successfully integrate it into your organization. In this deep dive on AI adoption strategy, Jonathan Crookall (Chief People Officer, Costa Coffee), Jennifer Carpenter (Global Head of Talent, ADI), and Maree Prendergast (Global Chief People Officer, VML) share the use cases that led them to roll out AI coaching with Nadia for their frontline workforces, tactics for successful onboarding, and lessons learned on the path to global adoption.

Jonathan Crookall

Chief People Officer, Costa Coffee

Jennifer Carpenter

Global Head of Talent, Analog Devices

Maree Prendergast

Global Chief People Officer, VML

Why AI Impact Starts with Managers

If you want to maximize AI’s impact at your company, start with managers. Hein Knaapen (former CHRO, ING), Lindsay Pattison (Chief People Officer, WPP), and Paula Landmann (Chief Talent and Development Officer, Novartis) unpack the new AI toolkit to support managers, unlock capacity, and ultimately drive organizational performance.

Hein Knaapen

Former CHRO, ING

Lindsay Pattison

Chief People Officer, WPP

Paula Landmann

Chief Talent and Development Officer, Novartis

Key Points

00:00  The AI Journey: Excitement to Delivery

Das Rush: Last twelve months, where are we right now? Where are most organizations or the organizations you've been working with? 

Lindsay Pattison: I would say for the last twelve months, we've had three stages of our journey at WPP. WPP has about 108,000 colleagues working around the world, and we provide marketing services. So the first stage was excitement, optimism, experimentation, all of which is good, because, actually, I do know some other creative companies where some of the content producers are still very nervous about AI and worried about IP, whereas our industry is actually super optimistic. So I think having that optimism is good. We then became very focused on mass adoption. So, "forced fun," but we would shut offices for a whole day and really push the training of AI.

So I would say the first thing was excitement, experimentation, kind of how people are assisted by AI. The second stage was to move it to more habitual use, because it's fine to experiment and be excited, but we need to then be very specific about use cases by function. Creating a workflow for our client work across WPP that's enabled by AI was one. And then thinking from a business function perspective, so people, legal, finance: how do we think more specifically about functional roles and how they could be augmented by AI?

So I would say we then moved into enabled by AI, so assisted, then enabled, and now it's really about delivery by AI. So, actually, fundamentally, pieces of work, even org charts, being delivered by AI, and baking that into our absolute offering. Some people talk about a wave of AI, but I think it's a tsunami of AI about to hit us all. And we are moving into the delivery phase.

01:56  Mass AI Adoption & Change Management

Paula Landmann: So I would say at the start, we first understood which tools we wanted to put in the hands of everyone. And, of course, we created our own internal ChatGPT. We agreed that Copilot would be one of them. We also assessed which type of AI coach we wanted to proceed with. And then, of course, there were the more specialized tools that we agreed for specific groups, for particular needs that they had to solve.

So I think from GitHub to others that many of you probably use. And then our journey was understanding how we would get to real usage. We started by very intentionally giving licenses to people who heavily used, for example, Word documents or prepared PowerPoint presentations. So we went by use. And that was very interesting, because we could see that those people very quickly became our change agents as well. And that evolved into a network of now over a thousand change agents across the company. So starting with that. And then, intentionally, we also onboarded our executive committee members and top leaders of this company, who started using it in critical meetings. And when we had leaders together, we would use it. We would have sessions around it, persona based.

So we started with a very, very intentional change management effort, heavy investment in change management, with support also of an external partner, because we believed that at the start that was really needed for people to realize the benefits and really start using it. And what was interesting is we had a lot of gen AI immersion weeks and all those sorts of things. From the start, we could sense people were intrigued and trying to understand, and we slowly saw a huge change to real adoption in the day to day. And we track usage, right, of tools like Copilot. So we see the number of hours people use per week, and how we slowly became a company with heavy usage, one of the companies with the most active users a week.

So I think that was the journey. And then, of course, as people saw the benefits in business cases, it also became part of how we work. We see in development, for example, and in research, that this is part of how we solve day-to-day tasks and work. And as more people saw the impact and the benefits, everybody wanted access to the tools. Everybody started using them in meetings, and then it really became a bit more viral.

So I would say we started by trying to understand the appetite, leveraging key cohort populations to drive adoption, to now, where I would say it's really part of the way we work, and there are great use cases in every single area around the company. So it's a big shift.

04:25  North Star: AI for Business Performance

Hein Knaapen: I think Paula and Lindsay are probably a little ahead of the curve. And I can't say exactly how much that is impacted by the sort of work you do. So that's really interesting to see. And what I'm seeing with solutions, whatever solutions are available for whatever part of a company's processes, is that we are often excited about new options, about new stuff. And that's great, because once we lose our curiosity, we're going downward. But that doesn't always make it easy to keep company performance as a North Star.

And so the interesting thing is, of course, you don't think your way into new acting; you act your way into new thinking. So if you don't try things out, you don't know. I totally get that. And I like everything that Lindsay and Paula give as examples. But how are you sure, or how are you evolving to a point where you are clear: here are the parts of our processes where it works and has value, and here it's only nice to have? That's what I'm curious about.

Das: Yeah. That picks up a couple of things, and I kinda wanna come to this question for Lindsay and Paula too: what have you held as a North Star through your initiatives?

Lindsay: I think the North Star, back to Hein's point, is performance, and both, you know, Paula and I work in very competitive categories. So, actually, much more simplistically, it will come down to managers, who lead the majority of our workforce. We need a competitive advantage, and getting ahead in adopting and using AI is gonna help us win and have the business succeed. Simple as that.

06:06  Game-Changing AI Coaching at Novartis

Das: Paula, you've embedded Nadia within your align initiative, which is explicitly an initiative for managers. And so I'd like to hear a little bit about, like, why that initiative and why an AI coach within that, before we kind of talk about change management. 

Paula: Yeah. So they're actually separate, right? Nadia and Align, but we have both. Align is a tool that we use for team effectiveness, and it's a super simple diagnostic tool. And what I love about it is it really allows teams to rate themselves on habits shared by high-performing teams, and then it triggers the right conversations, right, in the areas that the team needs. So we are using this now across the enterprise for all sorts of team conversations, sometimes together with the perspectives tool, which is an additional tool. Both are Valence tools, and I would say very, very helpful for us.

Now coach Nadia, which is the AI tool that we originally piloted with a few hundred people and are now expanding to 5,000 people at Novartis, is really a game changer for us. And we've had experience with it now for a few months, I think almost a year, across Novartis. And what we see is, and that's part of our North Star, this is really focused on individual development. In essence, it helps any person who needs support in the moment they need it. They don't need to wait for a next coaching conversation if they have a coach. They have it. It's really at their fingertips. It's pretty democratized. It creates a safe space for people. They don't have to worry about what they're asking, whether somebody on the other end is judging or will make a face of any sort.

So the feedback is really excellent. And we've surveyed to measure the impact on those managers over time, and also on the people using it. We've now even embedded it, for example, in some of our leadership development interventions. We recently had a mentoring retreat for ECN executive leadership minus one, and we had an executive coach, Nadia, and the business leader. The three would coach individuals. And the feedback from participants was that many times Nadia was the most effective coach.

And then, of course, people who go through it see the benefit, they want their teams to have it, they really talk about it, and then you see the effect it creates. And it can be as much an opportunity for people to stop, reflect, learn, and get advice as truly a tool that nudges you, sending you an email to say, have you thought about that today? So that has been one of the most impactful tools, I would say, that we're currently using.

Lindsay: And I just want to pick up on that, Paula, because you mentioned it, I think, before. As we think about adoption at scale, it's really FOMO. Right? It's fear of missing out. And I think it's so clever that you started with your EXCO using the tools, because everyone else needs to understand that AI isn't cheating. AI is enabling. It's assisting. It's helping you. And then everybody wants to, well, hopefully, in a high-performing organization, be better at their job. So whether that's coach Nadia really helping you think about how you have challenging conversations or develop your career skills, or whether that's Copilot helping you, you know, simplistically curate documents, it's really a way of being better at your job. And who doesn't wanna be better at their job?

Paula: Exactly. And Lindsay, just to this point, what I found fascinating was we very intentionally also put out some videos of our EXCO members talking about where they use it. So one of our EXCO members said he used to create his own objectives. He actually used Copilot for that and got some hints from Nadia. The usage of the tools after the video went out just went up drastically. It was quite a big deal. 

Hein: Oh, beautiful. 

Paula: So that just shows how role modeling really plays a role in the day to day. Right? 

09:40  Supporting Overwhelmed Frontline Managers

Hein: And how to nudge people. Really, how to nudge people. That's very, very nice. Yeah. But here's a perspective. I totally get how you guys, for your primary business, can make use of AI a lot. I have always been a bit, or even more than a bit, obsessed with the role of the middle manager. Eighty-five percent of our people report to frontline managers. And there's this beautiful book that I can really advise you to read, from last year, by Bob Sutton, a professor at Stanford. It's called The Friction Project. And he says we are here, the leadership of the company, to be the guardians of the time of our people, for them to spend their time being relevant for the customer. And then he says, in actual fact, we're the robbers of that time, because we burden them with all kinds of pet projects.

And then look at your frontline managers. I mean, dignified, respectable people, often hardly more advanced in anything than the people they lead, and we just leave them alone. We often leave them alone with all kinds of practical stuff they need to drive performance today and tomorrow and the day after tomorrow. And what I've also experienced over the past forty years is that the single most overlooked, most important driver of company performance is the skills of the manager. And it's from that perspective that I look at Nadia and at AI. I'm an apprentice and a starter. I'm amazed by the functionality of Nadia, but I can also see how powerful it is, because it creates a sort of safe space, if I may, a psychological safety, for the manager to ask any questions they may be afraid are too stupid to ask other people. And that builds their skills, and, as a result, that builds their confidence to steer performance.

Lindsay: And to build on that point about why and how managers are so overwhelmed. Right? And, I think you've talked about them being on the front line. Actually, we looked at our use of Nadia, which thousands of people use across WPP. It was piloted first by VML, who are also speaking at the summit, the brilliant Maree. But we looked at more senior-level users, mid-level managers, and junior colleagues. Fifty-five percent of the use was by managers. Thirty-two percent was senior managers, and then the small minority were junior colleagues. Managers use it because they're overwhelmed with the asks of them. And what was also interesting is that the range of things they use Nadia for was the most diverse, because we're throwing stuff at them. They're grappling. They're trying to get up the corporate ladder. They've got lots of super tech-savvy, ambitious Gen Z below them who can superficially become very good at everything. And these are the millennials, really, struggling with Gen Xers on top who've earned their place.

So I think there's a lot, there's a big burden on managers, and it's our job to really support them and help them move through the organization. 

Paula: If I can build on that, yeah. It's interesting, because when we look at the topics that managers bring to Nadia, we also get a lot of insights into what we can do to support managers much better. Of course, we don't see any individualized data, but in the aggregate, we see, wow, managers are really trying to understand how to influence at scale. How are we supporting them to build this capability? So I think it also gives insights in those directions.

13:10  Pragmatic AI Tools for Managers in the Flow of Work

And what I also heard a lot from our managers: we're doing these gen AI weeks to help people understand what to do. And the sessions that are really persona based, focused on tools and tips for managers, have the highest uptake. Thousands of managers are joining. I think in total we had 15,000 across the company. And really, it's because they are overwhelmed and they want to understand how to best leverage the tools available to them. So if things are pragmatic and practical, they really appreciate it. It becomes hard if it takes a lot for them to learn, because that takes time they don't have. And I think that's one of the things we're really taking into account as we build the capabilities and skills of managers: how to really leverage the time they have to ensure they get what they need and can be more effective in their roles. And, of course, they then end up being the role model for their people. Right? So they're absolutely critical for us.

Hein: Yeah. I get that. And that's the point, I guess, Paula: those are, relatively speaking, micro interventions. You don't need to go to a three-day course. You can take ten minutes, and that is wonderful.

Paula: Yeah. Correct. In the flow of work and the power of nudges. In addition to the role itself, the way they have to lead their people and what we expect them to do in leading their people has drastically evolved. And this is where we hear from managers that having tools at their fingertips that increase their productivity, help them unlock hours for them to be more innovative is really helpful. And I think tools that help them, again, on the self-reflection, help them prepare, for example, for development conversations. And at times, it's the simple things. 

But I was talking to another manager just yesterday, and she told me: listen, I sometimes had to think, where do I find again the tool to help me have a difficult conversation or host a development conversation? You guys now have not only Nadia, who supports me, but also, in Copilot, all these prompts that I use. And I'm so ready for it, because it's all automated, and I have it in front of me when I'm having those conversations.

Lindsay: The key factors, and the reasons why we're using Nadia very specifically as a tool to help managers, are the democratization of coaching, and everyone talks about the value of the safe space, the testing. The reason why senior people often get to the top is simply experience, that they've had experiences, and Nadia allows you to shortcut and role-play experiences. So it's really the democratization of something that's only been available to very senior people. Speed and democratization.

15:37  Measuring AI Impact on Workforce & Performance

Das: What does it take to go from, you know, AI is going to transform your workforce, these general vague terms, to this is how we did it, this is how performance management changed with x impact? 

Lindsay: We paid a lot of attention to strategic workforce planning, the shape of the organization, the number of colleagues we'll have in the organization, and trying to be very specific about what exactly AI will unlock. And I think the more specific you can be, the better. So we've created our own LLM. We put in 6,000 different roles. I mean, way too many, I know. And then we broke it down and looked at every single role to say: based on the AI tools available now, what would be the capacity unlock? Because otherwise, people talk in very, very vague terms. Someone will say ninety-five percent of marketing can be done by OpenAI. Obviously, I would say that's nonsense. But we did a lot of very detailed work, and we understand there's a capacity unlock from AI, so not necessarily time saving, but a capacity unlock of twenty-three percent. But that varies wildly, from, say, sixty percent for somebody in payroll to below one percent for a clapboard operator at the start of a shoot, because we still need a human to do that.

So actually being really specific has helped us. And we did another study just very recently which showed it was about twenty, twenty-one percent. That's great knowledge to have, but what we actually need to move to is directing and guiding targets against that capacity unlock, one, and then thinking about what we do with this time saved, the capacity unlock. What are these mysterious high-level activities that we're gonna enable our amazing colleagues to do because we've taken away some of the drudgery? We've got the specifics; we have very directional data. We now need to be slightly more prescriptive on exactly gaining that unlock back, turning that into a commercial model, driving performance, and then thinking about what else we can do. Because if nothing else, we're always super entrepreneurial. So what can we now do versus what can we do more efficiently? What do we do now? What's unique to the human? What's creative?

Paula: I think for us, what we're also trying to understand is the impact in different parts of the business. In our space, for example, the way operations or technical operations uses AI is quite different from the way somebody in sales uses it, or somebody in research and development. So we are being quite specific to measure the impact in the different areas. And I can say, so far, what we've seen is that it diverges. It's not the same. On average, what gets reported, and again, it's self-reported, is that people save at least four hours a week with the tools they're provided. It started at two hours a week; now it's four hours a week. So we want to see a trend and see how this increases.

And to Lindsay's point, what we're also focusing on a lot is helping our people understand that it helps with productivity, but it also should help them really use this saved time to do other work, to innovate. And we did some surveys: we now have almost 40,000 users of Copilot, for example. We survey people and ask a few questions, and eighty-nine percent say they feel more productive, and seventy-six percent feel more creative, which is something that is also very important for us. It's part of our culture aspiration. We want people to feel inspired at work. So that is also something that we believe is really important.

And the last thing I would call out is that we're really tracking some of the use cases in the business and very intentionally investing in use cases where there's a clear business need. So, for example, in our development space, we're really using Copilot to summarize complex clinical trial data. We're also using it in research, taking raw research inputs and making a very nice, polished presentation out of them. So being very intentional in those use cases, and seeing them actually blooming in so many parts of the company, is the other piece that I would say we're heavy on.

So I'm happy to say we're moving from "AI saved me time" to starting to hear in the surveys that "AI helped me lead my team more effectively" as well. That's the journey we're on.

19:50  How Will the Best Companies Be Using AI in 12 Months?

Das: What are some of the things we're observing now that hint at where we're gonna be twelve months from now? And then I'm gonna fold into that: where do you hope to be twelve months from now?

Paula: So if I were to describe the future twelve months from now, I would say that people will stop calling it AI and just call it the work, the work we do. I think the best companies won't treat AI as a separate stream, as a work stream. They'll really embed it into everything that the company does, from onboarding, performance, coaching, and leadership to the day to day of the business areas as well, no matter which part of the business it is. And I think the companies that get it right will really be the ones where we hear from managers that they don't just use the tools, but that the tools really helped them, and helped everyone around them believe that they could use them too, and actually use them as part of their day to day in the flow of work.

Lindsay: I think what companies, including ours, and anybody in the audience, need to really consider, not to bring a downer, is not what can be done by AI, because almost everything can be done or enhanced or augmented by AI in some way, but what should be done by AI. So I think the other, larger macro piece is the ethics of AI: thinking about the values of the company that you work for and thinking responsibly about how you use AI in terms of the goods and the products and the services that you offer.

So I'm not worried about tools like Nadia, which are just amazing and super effective, and which we really hope are gonna help people be better managers, better leaders, and create a happier, more productive, efficient, and effective workforce. But there are other aspects of AI that I think we should just be a little attentive to, thinking about the values we've always held as a business and how we use AI going forward: not what can it do, but what should it do for our business.

Hein: Yeah. Yeah. And I'm not sure; twelve months is possibly a bit too short. In the slightly longer run, I think the companies that will have done better are those that have tightly linked it to value, to business value metrics. And, of course, we are now in discovery, and it's great that people get to know it and feel familiar; that is a necessary and very useful step. But in the longer run, if it's not linked to well-defined company performance metrics, it will have been another fad. That's what I'm a bit afraid of.

Lindsay: It'll be another Metaverse, if we remember that. Do you remember that? I don't think so. I think this one's here to stay. 

Das: Yeah. I think so. And I think we've traced a nice arc here, maybe along the Gartner Hype Cycle for where we are, and we'll see where we are twelve months from now. Wonderful. Lindsay, Paula, Hein, thank you so much for joining.

Why AI Impact Starts with Managers

If you want to maximize AI’s impact at your company, start with managers. Hein Knaapen (former CHRO, ING), Lindsay Pattison (Chief People Officer, WPP), and Paula Landmann (Chief Talent and Development Officer, Novartis) unpack the new AI toolkit to support managers, unlock capacity, and ultimately drive organizational performance.

Hein Knaapen

Former CHRO, ING

Lindsay Pattison

Chief People Officer, WPP

Paula Landmann

Chief Talent and Development Officer, Novartis

What Leaders Need to Understand About AI

What’s the most overused word when it comes to AI? According to Geoffrey Hinton, one of the inventors of the modern LLM, it’s "hype": AI is under-, not overhyped. He explains the power of AI coaches and assistants in healthcare, education, and the workplace, and what leaders most need to understand about the technology.

Geoffrey Hinton

Nobel Laureate, "Godfather of AI"

Key Points

00:00  Personal Assistants as Proxies

Parker Mitchell: When we last chatted, I think it was November or maybe October, you were two weeks into the university giving you a personal assistant. Can you share more about what it's been like having that? We may be able to extrapolate to everyone having the equivalent. 

Geoffrey Hinton: So fairly recently, I woke up earlier than usual. Up until that point, I'd been thinking maybe I don't need the personal assistant anymore, because when I look in my mailbox, there are only a few things to be dealt with. But the morning I woke up early, I discovered there were hundreds of things to be dealt with, because my personal assistant had just been dealing with them. That was kind of essential. 

00:48  Learning Personal Preferences

Parker: And when you look at how she has learned about you, the way you might answer questions or how you would assess a situation, what has it been like as she's gotten to know you better? 

Geoffrey: It's been good. She's getting much better at knowing which questions I want to answer myself, which talks I might be interested in giving, and which talks I'm definitely not interested in giving. She can pretty much recognize my former students. 

To begin with, one of my former students would send me mail, and they'd get a very polite answer saying I was busy. And I remember students telling me: I got this answer from you, and it didn't sound like you. So now I tell my students, if you ever get a really polite answer, that's not me. 

01:38  AI as a Proxy & Specialized Assistants

Parker: You don't have time to write the full polite answer. And so in some ways, she's acting as your proxy. She's learned how you see the world and is acting as a first filter. How would AI develop that ability to act as a proxy, to help people navigate how work and life might change as AI is able to automate more things? Do you see a world where people will have many different specialized assistants, or just one that knows them? Any thoughts on that? 

Geoffrey: It's a very good question. Do you train one big neural net to do everything? That's more efficient in the long run, because you can share what different tasks have in common. So there's always this tension between having a small neural net specialized to one thing, which doesn't have much training data, and one big net. If you've got enough training data, specialization is a sensible thing to do, and we do have huge amounts of training data. 

It's quite sensible to have many small neural nets, each of which is only trained on a tiny fraction of the training data, and a manager who decides which neural net should answer each question. If you don't have that much training data, it's typically better to have one neural net that's learned from all the training data. And then, after you've trained on all the training data, you might fine-tune it to be a specialist in different domains, and that seems to be a good compromise: train one neural net on everything, and then fine-tune it for each particular domain. 
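The pretrain-then-specialize compromise Hinton describes can be sketched in toy form. Here the "model" is just a single number fitted by gradient descent on squared error, and all data values, step counts, and domain labels are invented for illustration; this is a sketch of the idea, not anything from the conversation itself.

```python
# Toy illustration of "train one net on everything, then fine-tune per
# domain". The "model" is a single number fitted by gradient descent on
# squared error; all data values and step counts are invented.

def train(data, start=0.0, lr=0.1, steps=200):
    w = start
    for _ in range(steps):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w - x) for x in data) / len(data)
        w -= lr * grad
    return w

domain_a = [1.0, 1.2, 0.8]   # hypothetical "domain A" data
domain_b = [5.0, 5.3, 4.7]   # hypothetical "domain B" data

# Pretrain one model on all the data pooled together...
base = train(domain_a + domain_b)

# ...then fine-tune a copy per domain, starting from the shared base.
# A few steps suffice because the base already captures what the
# domains have in common.
specialist_a = train(domain_a, start=base, steps=20)
specialist_b = train(domain_b, start=base, steps=20)
```

Each specialist ends up close to its own domain's optimum while starting from the shared model, which is the shape of the compromise described above.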

Parker: I mean, it sounds like if I look through the history, people said, you know, it might do this, but it won't do a, b, c. And I think your answer sounds like it could be just a matter of time and scale, maybe data. 

Geoffrey: Go back ten years and take anything that people said it couldn't do. It's now doing it. 

03:18  How AI Helps Doctors & Patients

Parker: And so if we fast forward now ten years into the future, obviously the implications for society are huge. But on the positive use cases, health care is one. Tell us a little bit about why that is so personally important to you, and how it could evolve over the next, let's say, five years. 

Geoffrey: Take what a family doctor does, the sort of first line of care. The family doctor knows quite a bit about you, maybe knows something about your family, maybe even knows a few things about your genetics. But she's only seen a few thousand patients; almost certainly, she's seen fewer than a hundred thousand patients in her life. There just isn't time. An AI doctor could have seen the data on millions of patients, hundreds of millions of patients, and could also know a lot about your genome, and a lot about how to integrate information from the genome with information from tests. So you're gonna get much better family doctors with AI. And we're gonna get all sorts of things like that for CAT scans and MRI scans, where AI can see all sorts of things that current doctors don't know how to see. 

Parker: I brought up that example to a doctor who had looked at the interaction between radiologists and AI, and there were a few different scenarios. 

So one is, you know, AI is confident and the doctor is confident. Same diagnosis, obviously easy. 

Geoffrey: But if not, I would trust the AI. 

Parker: So they were doing a study on this. But what was interesting is if a doctor is, you know, confident it's x and AI is confident it's y, the doctor chooses to go with their own diagnosis. 

Geoffrey: Fair enough. 

Parker: Now if the doctor is not confident and the AI is also not confident, the doctor chooses the AI solution with the sort of human thinking of, like, well, if I'm not sure, I'll blame it on the AI for being wrong. I just thought the human nature of that feels so real and dangerous at the same time. 

Geoffrey: Yeah. I think that's telling us more about human nature than about what the optimal strategy is. 

Parker: Absolutely. And ways that we might misuse AI in the human-AI interaction. 

Geoffrey: The thing I know a bit more about, from a paper that's more than a year old now, is you take a bunch of cases that are difficult to diagnose. This isn't scans; you're given the description of the patient and the test results. And on these difficult cases, doctors get 40% of them right, an AI system gets 50% of them right, and the combination of the doctor and the AI system gets 60% right. 

And if I remember right, the main interaction is that the doctor would often make mistakes by not thinking about a particular possibility, and the AI system will raise that possibility. It'll have a list of possibilities. And when the doctor sees those possibilities, the doctor will say, oh yeah, the AI system is right there; I didn't think about that. That's one way in which the combination works much better: the AI system doesn't fail to notice things in the way a doctor often does. So it's already the case, and this was more than a year ago, that the combination of AI system and doctor is much better at diagnosis than the doctor alone. 

Parker: And what it sounds like the AI is doing is generating a scenario-specific checklist: here are a range of different things, and it can do that very quickly. A doctor can just look at that and go, no, no, no, oh, maybe this. It allows the doctor to apply quick System 1 intuition to most of the possibilities and then pay more attention to the ones that seem important, rather than doing difficult System 2 thinking across every possibility. 

Geoffrey: Yeah. So that's certainly one of the things that's going on. The other thing that's going on, of course, is you get the ensemble effect. If you have two experts who work very differently and you average what they say, you'll do better than either alone.
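The ensemble effect mentioned here can be shown with a tiny numeric sketch: when two estimators err in different directions, their average tends to be closer to the truth because the errors partially cancel. All values below are invented for illustration.

```python
# Toy numeric illustration of the ensemble effect: two "experts" with
# errors in opposite directions, averaged. All values are invented.
true_value = 10.0
expert_a = 11.2   # one expert overestimates
expert_b = 9.2    # the other underestimates
ensemble = (expert_a + expert_b) / 2

# The averaged estimate is closer to the truth than either expert,
# because their errors partially cancel.
error_a = abs(expert_a - true_value)
error_b = abs(expert_b - true_value)
error_ensemble = abs(ensemble - true_value)
```

The cancellation is strongest when the experts' errors are uncorrelated, which is why "two experts who work very differently" matters.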

07:13  Personalizing Education & Healthcare with AI

Parker: Anything that's processing vast amounts of data, finding patterns and similarities, and then identifying promising candidates for humans, in that collaborative model you mentioned, that's gonna power things. 

Part of that leads to my next topic, which is around personalization. We'll be in a world where your biology is different from mine and from someone else's, and so interventions on the medical side can be more tailored to each of us. Is there research currently going on around how that might change health outcomes? 

Geoffrey: I believe there is. I don't know as much as I should about this. But for example, in cancer, you'd like to use your own immune system to fight it, and you'd like to sort of help your immune system recognize the cancer cells. And there's many ways of doing that. I think AI is already being used to choose which things to mess with. 

Parker: Are most likely to work for your particular area. 

Geoffrey: So that would be individual therapy based on AI. And then, obviously, in education, AI is gonna be very useful. Again, it's gonna be individual therapy for misunderstandings. An AI system that's seen thousands or millions of people learning about something, and the different ways in which different people misunderstand it, will be very good at recognizing, for an individual person: oh, they're misunderstanding in this way. It's what a really good teacher can do. They're misunderstanding this way, and here's an example that will make clear to them what they're misunderstanding. 

AI is gonna be very good at that, and we're gonna get much better tutors. We're not there yet, but we're beginning to get there. And I'm now happy to predict that in the next ten years, we'll have really good AI tutors. I may be wrong by a factor of two, but it's coming. 

Parker: You mentioned on the AI tutor side of things for students. I think there was a study that you referenced about how much better the outcome is when people get individualized tutors. 

Geoffrey: Yeah. I don't have the citation for it, but the number I remember quite well, and I've seen it quoted elsewhere too, is that you learn about twice as fast with a tutor as in a classroom. And it's kind of obvious why. First of all, your attention doesn't lapse. You're interacting with somebody, so your attention stays on it. You don't just stare out the window and wait 'til the lesson ends. I spent a lot of my time at school doing that. 

Secondly, the person's attending to you and can see what you're getting wrong and correct it. In a classroom, you can't do that. So it's sort of obvious why a human tutor is gonna be much more efficient than a classroom. An AI tutor should be better than a human tutor eventually. Right now, it's probably worse, but getting there. My guess is it will be three or four times as efficient once we have really good AI tutors, because they will have seen so much more data. 

Parker: There's probably another element too, I would guess, which is around motivation. And what we found is if, you know, I'm sure for you and many students, if it was an interesting topic, if it was framed in such a way that captured our curiosity, we'd pay more attention. I guess AI tutoring will be able to do that at mass scale. 

Geoffrey: Yeah. So for most of us, interacting with other people is the most important thing there is, and the most motivating thing there is. And I think AI tutors will be pretty motivating. Even though they're not people, you'll get the same kind of effect of someone paying attention to you and telling you interesting things. It will be very motivating. 

Parker: And 30 kids in a class might have 30 different things that are quote, unquote interesting to them, and AI tutoring will be able to tailor it to them. 

Geoffrey: Yeah. 

10:52  AI & People Symbiosis

Parker: So as you know, what we're doing at Valence is building an AI leadership coach. The goal is to help personalize that learning and guidance at work. We're talking to an education company, and they said it's such a shame that everything we've learned in education about how to help people learn concepts seems to fly out the door the moment they step into the work world, and they're mostly left on their own to learn. 

So we're excited about that. Can you share how you can see that thread of learning continuing throughout someone's career and not just ending when school's over? 

Geoffrey: So I would relate this to the longer-term development of AI. AI is gonna be used everywhere, and it's gonna get to be very intelligent. If we can reach a situation where we get a symbiosis between people and AI, AI is gonna make the world much more interesting for people. Mundane things will just be done by AI, and this symbiotic relationship will allow people to learn much faster, have much more interesting lives. That's the good scenario, and I'm hoping we can get there. 

12:01  AI's Economic Impact

Parker: How should policy makers and CEOs be thinking about and paying attention to the wide range of outcomes that could emerge? 

Geoffrey: This very quickly gets you into politics because what's gonna happen is mundane intellectual work is gonna be done by AI, and that's gonna replace a lot of jobs. In some areas, that's fine. In health care, for example, if you could make doctors and nurses more efficient, we could just all get more health care. There's a kind of more or less endless capacity for absorbing health care. We'd all like to have a doctor on the side who you can ask questions about all sorts of minor things you wouldn't bother your own doctor with, but you're quite interested to know why does your finger hurt today and stuff like that. 

Health care is great because it's elastic. You can absorb huge amounts of it, so it's not gonna lead to joblessness there. But there's other things where, there's just only so much of it you need, and it's gonna lead to joblessness there, I believe. Some people think it won't. Some people think it'll create new jobs. I'm not convinced. I think it's gonna be more like, people used to dig ditches with spades, and now people who can dig big holes in the ground with spades aren't in much demand because there's better ways of doing it. 

The worry is you'll get a big increase in productivity, which should be good, but the increase in goods and services that you can get from that big increase in productivity won't go to most people. Many people will get unemployed, and a few people will get very rich. That's not so much a problem with AI. It's a problem with AI being developed in the kind of society we have now. 

13:47  Techno-Optimism: Competing Views for the Future

Parker: So what would you say to the techno-optimists? Because I think everyone can see a scenario in which AI can take the mundane off your plate, give you personalized learning and tutoring, and support you as you navigate this transition. And it seems like our social and political setup is not going to lead to that outcome. So how would you square that circle? What advice would you give to people who just say it's gonna work out? 

Geoffrey: Yeah. My first piece of advice would be, do you believe that because it's convenient for you to believe that, or do you really believe it? Now, people are very good at believing whatever is convenient for them. I've seen a lot of that recently. I just think they're being very shortsighted. 

Parker: And if someone was self-aware enough to say, okay. I recognize that this might be convenient for me, and I'm willing to ask myself a question or two. What question would you want them to ponder? 

Geoffrey: One big question is, should AI be regulated? And I think regulation is gonna be essential, if we're gonna avoid some of the really bad outcomes. 

14:56  Media Coverage of AI

Parker: If you think of the media, what's one, if you had a magic wand, what's one change you would make to how they portray or cover AI? 

Geoffrey: It's interesting. I haven't thought about that because I don't have a magic wand. But I wish they'd go into more depth so that people would understand what AI is. People have used ChatGPT and Gemini and Claude, and so they sort of have some sense of what it can do, but they understand very little about how it actually works. And so they still think that it's very different from us. And I think it's very important for people to understand it's actually very like us. 

So our best model of how we understand language is these large language models. Linguists will tell you, no, that's not how we understand language at all. But they have their own theory that never worked: they never could produce things that understood language using their theory. They basically don't have a good theory of meaning. These neural nets use large feature vectors to represent things; it's a much better theory of meaning. So I wish the media would go into more depth to give people that understanding. 

16:11  AI & Policy

Parker: If people did understand that, how do you think it would adjust the lens through which they view AI and the policy importance of regulating it? 

Geoffrey: I think they'd be much more concerned and much more active in telling their representatives we've got to regulate this stuff and soon. And in fact, people have talked a lot about will AI be able to regulate AI? I think that's wishful thinking. I think that's about as hopeful as having the police regulate the police. 

Parker: We've talked to some scientists who've been part of trials where AI generates concepts and scientists evaluate which ones seem the most promising. And it seems like a more effective way of making progress.

Geoffrey: Right now, yes. Right now, having AI suggest things and people make the final decision seems pretty sensible. I don't think it'll stay like that. 

17:07  Superintelligence and Creativity

Parker: Then it will continue to go up the ladder and get better capabilities. And what is superintelligence? Explain that to a layperson.

Geoffrey: At more or less everything intellectual, it's just better than us. If you have a debate with it about something, you'll lose. 

Parker: And what about creativity? What about those things that we consider essentially human? Will it be just as good as us? A thousand Picassos? 

Geoffrey: Maybe that'll come a bit later. Many people have suggested it will be different, because it's not mortal and has a different view of things. But the idea that it's not creative, I think, is silly. I think it is creative. It's already very creative. It's seeing all these analogies, and a lot of creativity comes from seeing weird analogies. 

17:48  AI & Subjective Experience: A New Model

Parker: Is the LLM or the AI that we have today conscious? 

Geoffrey: I would rather answer a different question. I know this sounds like being a politician, but there are three things people typically talk about: is it sentient, is it conscious, does it have subjective experience? They're all obviously related. There are a lot of people who say very confidently, it's not sentient. And then you say, what do you mean by sentient? And they say, I don't know, but it's not sentient. That seems a silly position to hold. 

I would rather talk about subjective experience, because I think it's clear there that almost all of us have a wrong model of what subjective experience is. Suppose I have a lot to drink and I say, I have the subjective experience of little pink elephants floating in front of me. Most people think the words "subjective experience of" work like "photograph of." And if I have a photograph of a little pink elephant floating in front of me, you can ask where the photograph is and what it's made of. 

So if you think "subjective experience of" works like "photograph of," then you can ask, well, where is this subjective experience and what is it made of? And a philosopher will tell you it's in your mind, which is a kind of inner theater that only you can see. So let me give you an alternative model of what the words "subjective experience of" mean. I believe my perceptual system is lying to me. 

So I say to you: my perceptual system is lying to me, but what it's telling me would be true if there were little pink elephants floating in front of me. Okay? I just said the same thing without using the words "subjective experience." What I'm doing is trying to tell you how my perceptual system is lying to me. We think there's this inner theater. There is no inner theater. The inner theater is as wrong a view of what the mind is as the view that the Earth was made six thousand years ago is of how the real world works. Almost everybody has this wrong view. They think there's an inner theater with funny stuff in it that only they can see. That's just rubbish. And once you see that, you see that a multimodal chatbot already has subjective experience. 

So I'll give you an example. Suppose I have a chatbot that can see, has a robot arm, and can talk, and I train it up. I put an object in front of it and say, point at the object, and it points at the object. Then I put a prism in front of its camera when it's not looking. Now I put an object in front of it and say, point at the object, and it points off to one side. And I say, no, the object's not there; it's straight in front of you, but I put a prism in front of your lens. And the chatbot says, oh, I see, the prism bent the light rays, so the object's actually straight in front of me, but I had the subjective experience that it was over there. 

Parker: Fascinating. 

Geoffrey: That is the chatbot using the word subjective experience in exactly the way we use them. It's saying, my perceptual system was lying to me because of the prism. But if it hadn't been lying, the object would be over there. 

21:04  The "Manhattan Project" for AI Alignment

Parker: If you had a Manhattan-style project to try to address some of the challenges of artificial intelligence, either socially or from a research or regulatory perspective, what would that Manhattan Project be? 

Geoffrey: Oh, I think there's one really essential question we need to figure out in the long run. There are lots of short-term things we need to do, but in the long run, we need to figure out: can we build things smarter than us that never have the desire to take over from us? We don't know how to do that, and we should be focusing a lot of resources on it. 

Parker: You know, alignment is at the core of that Manhattan Project. Is there any KPI? I know that's gonna sound sort of mundane, but is there any KPI we could track to say, are we making progress on these alignment questions? 

Geoffrey: Well, my main worry about alignment is: how do you draw a line that's parallel to two lines that are at right angles to each other? It's kinda tricky. And humans don't align with each other. 

22:10  AI Concepts: Probability Distributions

Parker: Is there a concept that is really important for people to grasp that is hard for you to explain in a way that a layperson can viscerally understand it? 

Geoffrey: I think often it's to do with probability distributions. People find the whole idea of a probability distribution hard to understand, to think of it as a thing. In a large language model, you give it a context and it's trying to predict the next word, and it has a probability distribution over words. People find that hard to grasp. 

Parker: And it's crucial, because that's the science.

Geoffrey: And it's perfectly straightforward if you understand probability. But unless you understand the idea of a probability distribution, it's hard to see that what you're doing when you change the weights in the neural net, the connection strengths, is changing the probabilities it will assign to all the various words or word fragments. That's a concept ordinary people find difficult to grasp. 
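The concept Hinton is pointing at can be made concrete with a minimal sketch: a language model's output layer turns per-word scores (logits) into a probability distribution over the next word via softmax, and changing the network's weights changes those scores and hence the probabilities. The tiny vocabulary and score values below are invented for illustration.

```python
import math

def softmax(logits):
    # Subtract the max score for numerical stability, exponentiate,
    # then normalize so the probabilities sum to 1.
    m = max(logits.values())
    exps = {w: math.exp(s - m) for w, s in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Hypothetical scores a model might assign to candidate next words
# after some context.
logits = {"mat": 3.0, "sofa": 1.5, "moon": -1.0}
probs = softmax(logits)
# "mat" is most likely, but the model commits to a distribution,
# not a single answer; training adjusts weights, which shifts
# these probabilities.
```

The key point is that the distribution itself is the model's output: a thing you can inspect, not just a single predicted word.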

23:02  AI is Underhyped, Not Overhyped

Parker: What would you say is the most overused buzzword in AI right now? 

Geoffrey: Well, the most overused buzzword by critics of AI is definitely hype. For years and years, people have been saying AI is overhyped, and my view has always been that it's underhyped. 

Parker: I think that is a very important message to get out to people. I've seen the same thing: oh, there are hallucinations, AI is never gonna catch up. We've talked about the rough edges of the technology. There are always rough edges, but you have to look at the central engine of it, and the possibilities there are so powerful. 

I really appreciate the conversation. It's been enlightening. I enjoyed it so much, and I know that our viewers and listeners will as well. So thank you. 

Geoffrey: It was a lot of fun.


HR Is Now R&D

In this powerful conversation with Ethan Mollick — Professor of Entrepreneurship at Wharton and best-selling author of Co-Intelligence — one thing became clear: AI is already reshaping work, organizations, and leadership—and HR is now the R&D lab for that change.

As companies race to adapt and adopt, Ethan argues that HR leaders have an unprecedented opportunity (and responsibility) to guide the workforce through the AI transition. That requires urgency, but it also requires a clear vision for how to bring AI into the workplace. And the only way for leaders to set that vision is real, hands-on experience with frontier models and AI work coaches like Nadia.

Ethan Mollick

Professor of Entrepreneurship, The Wharton School

Key Points

00:00  AI's Immediate Impact on Work & Organizations

Parker Mitchell: Ethan, it's such a pleasure to welcome you today. I've been an avid reader of your book, Co-Intelligence. You've got it next to you; I've got it in front of me. And I think the topics you're raising are so important for our CHRO audience. So first, welcome, and thank you. 

Ethan Mollick: Thanks for having me. I'm thrilled to be here. 

Parker: The first idea I want to explore is how much change we think AI is going to bring to the work that we do and to the organizational structure that has been our construct for, you know, the past generation or two. What's your take on how much change this is going to bring, both to work and to organizations? 

Ethan: So I think people tend to think of AI as a future thing. My argument, and I work with lots of companies and do lots of research and talk to all the labs, is that everything we need for massive changes to work is already there in the current AI models. We don't need more advancement, and I think we're gonna see changes in all aspects of work. It's not gonna happen instantaneously; it'll happen over time, but much quicker than people are expecting. A lot of what we might call analytical work is already going to change deeply. That means we're going to change organizational structures, how we approach management, how we approach coaching and helping and tutoring. There's just a wide variety of changes. We've never had an intelligence prosthesis before, and now we do. We can summon a form of intelligence on demand. That changes how organizations work. 

01:31  Overcoming Hurdles to AI Organizational Change

Parker: And so if we're at a stage where the base technology, the foundational models, and the LLMs have the capabilities now, what are the hurdles to the bigger changes? What are the steps that are gonna need to happen for this to begin to impact the organization? 

Ethan: So the issue is that, I have a sort of three-part model that no one should take too seriously about how you think about AI and organizations, which is you need leadership, lab, and crowd. And so leadership is that you actually need senior leadership to articulate a vision of what the world looks like with AI. How do they wanna transform their work? They need to be users of AI to understand enough of how to operate this. They need to get a sense of how these systems work, but they also need to think about what the reward systems for using this are. Why would people wanna show that they're using AI, you know, and how are they doing it? 

Then we talk about the crowd, which is how you get everybody using AI systems. That could be the kinds of things that you guys are building at Valence, actual deployed, packaged systems, or people using chatbots on their own. But we have to think about how we incentivize them, because people are hiding AI use everywhere. People are turning to AI all the time as a coach, as a help with work. They're just not telling management about it, because they don't want people to know they're using AI. They're worried about the job outcomes if they use it. They're saving time, and they don't wanna give that time back to their companies. So you wanna incentivize the crowd to do stuff. 

And then you finally need a lab. You need to be experimenting with what the future looks like, building new tools, and bringing in nontechnical people from the crowd who are really good at AI to do transformation inside your organization. 

03:05  Leadership's Role in AI Transformation

Parker: And I wanna double click on each of those areas. On the leadership side of things, when I talk to CHROs, the memo from the CEO of Shopify, which I know you cited, is one of the things that has gotten a lot of attention: saying we need to move from "let's explore it" to "I'm gonna mandate it." How influential do you think that memo's been? And do you think it's missing the mark in any way? 

Ethan: So if you look at that, and Duolingo was the other one, I think it helped show people a model. Everyone always wants a model to follow, which is tough in a revolution, because you kinda wanna carve your own path out while following in some areas and revolutionizing in others. I kinda don't like the memo. I think what it does is establish urgency, that we have to do something about AI, but it doesn't articulate a vision. There's no what to do about it. It's just: use AI, and everyone becomes more efficient, and then that changes how we do things. 

I expect leadership to articulate a vision of the future. I want them to be able to show you why you, as an individual worker, would want to transform your work with AI. What does my job look like? How do I get rewarded for using AI? What happens if I use it badly; am I punished? What does my job look like a year from now when I come into work? Establishing urgency is great, but I'd hope for a vision as well. 

04:28  AI & Employee Motivation: Beyond Efficiency

Parker: So on that individual motivation note, I think there are potentially two different paths. There's a positive one, and there's a side that might not be as great. We talked to one of the people deploying our coach, the CHRO of an 80,000-person company, and she said one of the challenges, as they've experimented with it, is that people spend more time verifying output than they do creating something. It's much more efficient, but it's not as motivating. And I heard something similar from researchers in the drug lab world, at pharma companies. They don't wanna just review the output of a hundred different ideas that AI will generate. Their value, their identity, is coming up with their own ideas. 

Ethan: So I actually wanna push back a little, because we actually don't know that much about the motivation piece. There's a bunch of early studies on radiologists and other people using AI, but they were using old-school machine learning forms of AI. Right? It's really important: even though the name is the same, they're very different, and that created algorithmic aversion, because what would happen is the AI would just give you an answer to a question, and you'd feel, as a doctor, that it was overriding you. 

Interestingly, the other study, which was on scientists using AI, turned out to be fake. There's a big scandal about this. It's the MIT study, and that's the only one we have of high-end AI use showing it's demotivating in the kind of way that we're talking about here. That doesn't mean it isn't. Right? And I think there's reason to worry about constructing work and meaning, but I think we don't know a lot. What we actually find in some of our studies is the opposite of algorithmic aversion, which is that people who use AI are actually much happier, because they delegate out the stuff they don't wanna do rather than the high-end stuff. 

So I think this is where it really falls to the CHROs to think about motivation in the world of AI, rather than viewing it as an on-or-off switch. There are a lot of different ways to interact with these systems that might be motivating or demotivating. 

06:35  CHROs as AI R&D Leaders

Parker: I think that's such an important point. So it's really unpacking what it is that people find motivating about their work. How do we amplify that? How do we use AI to take away some of the things that are drags on their motivation? Using that framing feels like a much richer approach for a leader to articulate. 

Ethan: And it fits into a much larger issue that I think really lands in the laps of CHROs overall. I think you are the place to stand and the lever to use to change the world. Right? Where is innovation coming from? HR is R&D now, because the people in your organization are gonna figure out how to use AI. The skilling problem we're all about to hit, which is that everyone's using AI to do their work, so they're not learning anymore at the entry level, that's going to land first in the laps of CHROs. The idea that we now have this new set of capabilities for education, for coaching, that's landing first with CHROs. 

So, like, I think the leverage point for organizations is the HR function at this point. And the imagination about what to do with that is really a challenge for leaders in that space. 

07:36  Leaders Have to Use AI Regularly, Not Delegate It

Parker: I love that lens and that take. So, you know, the company is the R&D lab, and the CHRO can be helping coordinate that. When you talk to CHROs, how many of them grasp that, and how many of them have embraced it? 

Ethan: So I think what I'm finding at the C-level is that there's a very easy answer to this question of who uses it, who panics, and who feels urgency, and it's: have they used AI a lot? The number one piece of advice I give in the book, and it turned out to be the most accurate, right, because you never know when you write these things, but the most important piece of advice is you just have to use these systems. If you have not tried to use it for every business decision you have that you legally and ethically can, you don't know what they can do. And you need to use a frontier model. You need to be using o3 or Claude 4 Opus or Gemini 2.5. Use one of those three and use them for everything for a day and get ten hours in. Half of what I find I have to do is just show people, hey, yeah, AI can do that. And then they start to be like, oh, wow, there's a real thing here. 

So what separates them is that CHROs who delegated using AI to other people, getting reports back or just reading about it, aren't getting it. And by the way, there's nothing to be ashamed of. One of the big, baffling things that I've been thinking about a lot, and my wife, who runs the AI lab with me and does a lot on AI training, and I have both been trying to figure out why smart people are avoiding using these systems. It's sort of a bigger issue, and one that Valence has been thinking about in really interesting ways, which is that some people just gravitate toward using these things, and some people want more instructions, and they get scared, and they get nervous, and they just walk away from them. 

So I think, you know, the big issue is how do we get people to actually use these things? Because once the CHRO uses it a bunch, they start to get the idea pretty quickly. 

Parker: One of the things that we talk about is sort of that personal epiphany moment. And I have found the people who are most likely to be vocal advocates for this have basically a personal epiphany moment once a month, if not more. And that's from personal usage with new models and the epiphany moment from twelve months ago might not be, you know, the same one as now, but it's so important to have that. 

Ethan: Yeah. I mean, I talked in the book about three sleepless nights. Like, unless you've had an existential crisis, and I really mean this, you haven't gotten AI. Because you have to have this moment of, like, oh no, why is it doing this? This is so good. Right? If you haven't had that, if you haven't started worrying, what does this mean for work? What does this mean for my job? What about my kids' jobs? The people working under me, the state of the company? If you haven't had that concern, then you haven't really used these systems at all. 

And, you know, one of the things I like about the conversations we've had before this is that I think you've got a deep sense of this, and people need to have this kind of crisis. And I think a lot of software vendors and consultants out there try and insulate you from this moment of epiphany. Right? They're like, just use the system, it'll do this stuff for you. But I think what makes AI so interesting is that, to work with it, you have to have this sense of, like, oh, it is actually pretty smart. I do trust it to do stuff. 

Parker: It is. I have had probably three sleepless nights every two weeks for the past eighteen months. That feels about right. And I couldn't agree more that it is not about saying, hey, don't worry, we've got this. It's: we're in this together, and there are not clear answers, and we have to have more of these candid conversations. That's one of the reasons why we're bringing the CHROs together. 

Ethan: Like, use voice mode in ChatGPT on the app, and just have a conversation with the AI about a problem that you're having. I talked to a Harvard quantum physicist who told me all his best ideas come from talking to AI, and I was like, is it really good at quantum physics? No, no. But it's really good at asking questions. So have a conversation with it. Give it a bunch of documents you're having issues with. Have it create the documentation for this. Here's some documentation, read it as a naive reader, give me feedback on it. Create a PowerPoint for me on this. Sometimes it can just do things you wouldn't expect. Do this analysis. Generate 400 ideas for how to solve this problem. Have that conversation. Push it to do work with you. Push back, like, I think you can do more than this. Make it better. Like, I don't think you're taking into account the fact that the industry is changing so rapidly, or you're not taking into account tariff risks. Just push it and see what happens. 

I think too many people use it like Google: ask a query, get a result back. The more context the AI has, and the deeper the conversation, the better the answers are gonna be. 

Parker: One of the things we've always said, which you mentioned there, is using voice mode, and we say go from question to situation. Describe your situation in just stream-of-consciousness voice mode, and then give it feedback. Say, hey, exactly as you said, this is not how I would think about it, I would think about it that way. And the results are often just extraordinary. 

Ethan: You'll be very surprised. I mean, every measure we have of the quality of these models is much better than the average human. Whether you look at emotional intelligence, people vastly prefer talking to the AI versus talking to doctors. Right? Reading-the-mind-in-the-eyes, you know, theory-of-mind stuff, very good. Creativity, very good. I mean, these are really good systems. 

12:29  The Urgency of AI Adoption & Literacy

Parker: So we're gonna dig into adoption at scale in a moment, but let's talk for a moment about urgency. You've been advocating a position that we very much agree with, which is the time is now, even though everything will be changing and six or twelve months from now, it'll be different. Why is it urgent to get AI tools and AI awareness and AI literacy widespread in workforces today? 

Ethan: I mean, there's a lot of reasons for this. First of all, again, the change is baked in. Your job is not going to be the same in five years. Right? I mean, AI people sometimes think the world changes overnight. Even if we have superintelligent machines, and people have argued that o3, a model from OpenAI that's already out, can do almost all human tasks if we just used it properly, it doesn't matter, because of exactly the reasons that everybody I'm talking to knows: organizations are complicated and contingent, and all kinds of things happen slowly over time. 

But I would argue the change is already baked in. Right? You talk to anyone in a profession who's doing detailed work all day, and they're like, this already does a lot of the work. We just have to figure out how to make it work inside our companies. So I would argue that you have to be aware of this change, the shockwave that's already happening inside these systems. 

The second is there are huge advantages to getting this right. There are unsolved problems, especially in HR, that we can now solve. I mean, you're addressing coaching. I'm thinking a lot about tutoring and mentoring and teaching. These are things that were impossible to do at scale before. Now we can do them at scale. What does that mean? How does that change what we can do? And there are crises already happening that you have to deal with. The training crisis is already going on. Right? 

This summer, you'll notice that all of your interns are using AI to do all their work, because they're not dumb. They wanna show people that they're smart, so they're gonna use AI, because AI is better than an intern. And all your middle managers are gonna stop turning to interns for help, and stop having interns learn their jobs from them, because AI does a better job and never complains. 

So, like, you have crises that are already built into the system that have to be addressed. There are opportunities. There are crises. There's change already in the system. I cannot emphasize enough how much things are already changing. And I think the worst thing you can do is assume somebody knows the answers to all these problems. We're trying to solve some of these issues in education. You're trying to solve some of them in coaching. But I don't think we can argue that there's any one person you can go to to solve AI for you. Because I talk to the AI labs all the time, and they tell me they use my Twitter feed to figure out what AI can do. Like, nobody has answers. There's no instruction manual. 

So if you're waiting for someone to hand you a fully built-out instruction manual on how to use AI, you're gonna be waiting until everybody else has already figured this out, and that's too late. 

14:51  Habits of Early AI Adopters

Parker: And the people that are doing this, you know, the outliers that have adopted this mentality, what are some of their habits or rituals, or how they allocate their time? Are there any patterns where you'd say, these are the people who are paying close attention, and this is what they're doing? 

Ethan: So that's the lab portion that we talked about. You don't need to be the person doing this all the time, but you absolutely need people in your organization doing this all the time. And people from an HR perspective doing it, by the way. One of the other mistakes companies make is handing this to IT or, even more, to general counsel. No offense, general counsels in the room. It becomes a thing that gets wrapped in a technology or legal framing. This is inherently like working with people. 

The best users of AI I know are good managers, are good teachers. The skills that make you good at AI are not prompting skills, they're people skills. And so you need to be taking that approach of having people experiment with it on a people-skill basis, trying to figure it out, giving it feedback like, oh, you could do better than this. The AI honestly responds really well to the kind of feedback that you'd give a human. 

So part of this is about experimenting personally, and I think you need to be experimenting more personally than you think you are. And by the way, it'll quickly go from time sink to, like, oh, I'm gaining time back. But then, aside from that personal skill, you need people who you think are very smart, who are very good, who are inside your domain, who are also experimenting with this and showing you what they learned, excitedly, on a regular basis. 

To me, the easiest way to find those people is in the crowd, the people who are already using AI. And by the way, your organization is riddled with AI use. All that's happening is unauthorized AI use, because we know the number of people reporting they're using AI at work in America, in a representative survey, went from thirty percent to forty percent over the course of two months. Everyone's already using it in your organization. So your job is to destigmatize use, and then, among those people, there are probably some who are already trying to evangelize. You've probably already had meetings with these people who are like, oh my god, AI does all this stuff. And I don't know whether you've pushed them off somewhere else or whether they're working with you, but those are the people who become part of your lab. 

Parker: And I'm trying to recall a stat that I believe I read in one of your posts, which is that regular use of internal GPTs sort of plateaus at twenty percent. Is that the number? 

Ethan: Yeah. Anecdotally, twenty to thirty percent is the maximum I find for internal system use. 

Parker: And so if internal use is at, call it, twenty percent, and forty percent report use in a survey, that means at least twice as many people are privately using it as publicly. That's an interesting concept for a CHRO to think about. For every person I hear using it, there's actually a second one who's not sharing it. 

Ethan: Yeah. That's a good way to phrase it. Right? It's probably at least double. And then of the rest of the people, some of them are just waiting on training, and some of them just need clear instructions. Not everybody is gonna be an innovator. They need to get ideas, either from the kinds of well-built products that we're talking about here or from things that come out of the lab. 

17:50  Overcoming Resistance to AI Adoption

Parker: And I wanna spend a moment on people's motivations. I've sort of joked with someone that, you know, once you become senior enough in a company, your private job is almost to become a monopoly. You kinda wanna have a unique perspective, a unique network, a unique something that makes you valuable. People don't do that consciously, but I think unconsciously, they don't wanna give up everything that they are good at. If AI is a superpower for them, is there a real conflict of interest between people finding ways to make themselves way better and not necessarily wanting to share that with five colleagues down the hall? 

Ethan: Absolutely. I mean, it's worse than that. Right? That's one of, like, six reasons you don't wanna share AI use. Because not only do you not wanna give away your competitive advantage, but also people think you're brilliant right now, and if it's because you're using AI, you don't wanna show them that. And then also, look, people in companies aren't dumb. They know efficiency gains translate into headcount reduction. So they don't wanna show you the efficiency gains because there might be a reduction. Even in companies where they don't worry about that, the efficiency gains mean I'm expected to do more work than I did before. Why would I ever wanna do that? So I don't wanna show you my efficiency gains. Right? 

So there's layer upon layer of reasons why people don't wanna show you that they're using AI. 

Parker: And what are some of the techniques that either CHROs or leaders use to overcome those natural hesitancies? 

Ethan: So there's a few things. On the most basic level, it is that clear vision. Right? Having an executive-level vision of what work looks like with AI. Why should I feel safe using it? Having executives role model their use. Another really important reason for CHROs to use the system is that if you use it all the time for everything, other people will start to see it being used all the time, and that will let you do things differently. But then there is actual change in reward systems. I've seen some pretty crazy things. One company I spoke to, and I don't recommend this, the CEO really realized how big this was, six or eight months after GPT-4 came out. He gave ChatGPT to everybody in the company and then, after arguing many times that they should use it, fired everybody who hadn't used it by the end of the month. And then he gave a $10,000 prize at the end of every week to whoever came up with the best AI idea. 

I've seen CHROs who say, before you hire anyone in the company, the team has to spend two hours trying to automate that job with AI and then rewrite the job proposal, or do the same thing when you're asking for money. I have seen people whose model is just open, all-channel AI use: if you turn in a report and you haven't told me how you're using AI to do this, I'm gonna reject the report. Everyone has to go through not just training, but training that is about hands-on use, and you have to certify at the end of every class that you built something with AI. If you think it's as big a deal as you and I think it's going to be, it's hard to overdo your push for adoption. 

20:43  Co-Intelligence: AI Augmenting Human Work

Parker: I think that could be the headline: it's hard to overdo the push for adoption. You and I have talked about the importance of co-intelligence versus AI to automate. And obviously, AI will automate a number of parts of work. How did you come up with the concept of co-intelligence, and why is it such a central piece of the work that you promote? 

Ethan: So right now, and I really do have to put a caveat here, right, I wrote Co-Intelligence a year and a half ago or so. I still think it describes today's world really well. But models are getting better. There are jobs where the AI is doing work better than humans. But for the vast majority of work, the AI is a supplement to human work. Right? We've never had a way of making people smarter on a general-purpose basis, and now we have the ability to do this. 

So, for example, take a deep research report. Deep research reports are very powerful. They're really good. When I talk to lawyers about them, for example, they say one saves forty hours of work producing it, and then it maybe takes an hour to have someone junior check the results. So now everyone has analysis on tap. How does that change your job? We know AI fills in gaps that you have. 

So we just completed a study at Procter & Gamble, with my colleagues at MIT, Harvard, and the University of Warwick, where we had 776 workers. We had them work individually or in cross-functional teams of two on real work tasks. Individuals who worked with AI performed as well as teams. Right? Absolute co-intelligence stuff. And they were, by the way, as happy as teams, just to refer back to the happiness point earlier. These performance improvements are very real, but they came from working with AI, where humans use AI to do work for them, but they also have enough expertise to know when the AI is weak at something or making something up. 

22:29  AI's Impact on Entry-Level Roles & Apprenticeship

Parker: That level of expertise brings to mind the question I think you were pointing to earlier, the idea of some of the threats. The bottom rungs of the ladder are obviously, from a career perspective, the most easily replaceable with AI: interns, entry-level roles. And many of those entry-level roles are where people learn that taste, that judgment, that ability to assess quality. How do you see some of that playing out if AI can do the things that early-career people used to be focused on? And what recommendations would you give to CHROs on that issue? 

Ethan: Think about how the education system works. I teach at a great business school. I teach people to be generalists. Right? I don't teach them to work for Procter & Gamble or J.P. Morgan. I teach them to do product development or to think about business analyst cases. And then they learn to be specialists by working, the same way we've taught specialists for four thousand years: apprenticeship. Right? They work under people and they do junior work, and in return, mid-level people get help with the junior work, and the younger people get correction over and over again until they learn how to do something. They learn expertise, which is very hard to teach. That is all breaking right now as we talk. Right? Because every junior person is dumb if they're not using AI to do the work. Why would they not use this to do all the work that they're doing? And meanwhile, more senior people are realizing that working with people is often really annoying. Right? Especially junior folks. Most people are not trained to be good mentors. They're not trained to be good coaches, so they have to figure this out on their own. Some people are good at it, some people are bad at it, and they're stuck doing it either way. 

So that training pipeline that was always implicit has broken. Right? The coaching pipeline broke this summer, or last summer. And it has to be reconstructed at the CHRO level. We have to tackle this deliberately. That means taking L&D seriously as a thing that we have to do. That means thinking about what skills people need to learn in a deeper and more serious way. 

Parker: We sponsored a report with Charter on this apprenticeship model, asking whether an AI coach like Nadia could perform some of the functions of apprenticeship: providing you a framework, giving you a challenge that meets your level, having you get feedback on how well you did, escalating that challenge, and in some cases role modeling what good looks like. And it was a fascinating idea. Nadia does a bit of that, but there are opportunities for AI to solve some of the issues that AI might cause as well. 

Ethan: I mean, I think part of this is that so much about AI is teaching us what we don't know about humans very well. It was good enough. Right? If you were actually designing a way to build experts, you would never design the method we have right now: we hope you get paired with a good manager who has time to do it, builds a relationship with you, and is a good coach. You wouldn't have teaching done in lecture halls the way we do. There are a lot of broken systems already that were good enough because there was no alternative. 

I think AI is both poison and cure. Right? In a lot of ways, it shines a very bright light on organizations. And I think we're gonna see this everywhere. Take one very narrow thing that is the first thing everyone does with AI in organizations: they do their performance reviews with AI. Why? Because doing performance reviews is really annoying. So the AI will write a performance review, and then they brag about this. Right? But then what was the whole point of the performance review? If the AI is doing it, how can you not go and change the performance review? So it shines a light on this issue, and in doing so, forces you to think about the process, what you should be doing better, how good you are at the job. So I think this is both poison and cure. 

Parker: We talk a lot about what we call the second bounce of the ball. I don't know if that's quite the right term, but the first bounce of the ball is: let's automate the systems that we put in place. But we put in place systems that are inherent compromises to the ultimate goal, because we were limited by the technology, and performance reviews are an example that we talk about. 

So now, for example, if you're being coached by Nadia as a manager, you can have the performance review framework built in, and you can be nudged: hey, you haven't talked about how Ethan's done on this particular topic this past month, let me just ask you a couple of questions, I'll keep that in mind. And so it becomes an ongoing process that is woven in, versus something you do on, you know, November 20, trying to get it done as quickly as possible because you need to check it off. 

And so those ideas of how we will reinvent the talent function from first principles, given the power of AI, are an important topic. Do you have recommendations for chief talent officers on how they might begin to rethink the talent function? 

Ethan: I think you're gonna see the same crisis across function after function, which is that we need to sit back and say, why are we doing this? The point of the performance review is not to do a performance review. It is to provide an opportunity for management to reflect on how someone's doing, to sort people, to give people feedback to improve their job, all of those things, and maybe five other things inside your organization. Right? Allocate who's a good manager or not. Maybe it serves some secret point-value thing that lets you win political games. We need to start exposing what these things actually do in order to rebuild them. 

So as a chief talent officer, part of what you wanna do is think about, what are my goals here? Right? Process becomes goal in a lot of cases inside organizations, and that can't be the case anymore. So it takes really smart people, and this is, by the way, why you can't turn entirely to external consultants. You can bring in the smartest consultants in the world. They cannot solve this problem for you, because it's an internal problem about how your organization operates at a deep level. It requires deep expertise in the subject. And I think this can be uncomfortable. Right? In some ways, more uncomfortable than realizing AI is really good is realizing, oh, why am I doing this? How do I think about this again? Or, is the way I'm doing talent development only the way it is because of how it's allocated, you know, learning as a reward rather than a tool? Right? Do we have to change that, or do we wanna keep it the same? So I think there's a really profound set of questions to ask. 

28:43  Customizing AI Models to Organizational Nuances

Parker: One of the things you mentioned is that experts can't come in from the outside because you know your business the best. There are unique elements of your business. Could you make that same argument about AI models? AI models are almost, if the expression is right, a zip file of the world. They have incredible IQ, but they don't know your organization. How would you help AI models understand the nuances of an organization so they'd be even more powerful in helping answer those types of questions as a partner to a chief talent officer? 

Ethan: Okay, so let's do this in layers. The easiest way, not talking about the deeper applications like the kind of thing you're building, the easy way to do this is just context. The AI thrives on context. The other really interesting thing, which we haven't even talked about, for chief talent officers and CHROs is that, actually, the best instructions for AI are the manuals you use for onboarding employees. The best thing you can give an AI is an example of the best piece of work you do. You don't want to have the AI read every piece of work you've done. You wanna give it: here's an example of the perfect report that you should produce, and why it's great. If you can train people really well, the AI actually thrives on exactly that training material. 

So your starting point is really just thinking about what you have in hand already that you've spent all this time building for humans, and you literally just paste that in. Context, instruction manuals, rubrics, examples. Those are things the AI thrives on that change it from generic to specific. And it's not just that it's a zip file, though it is in some ways: a zip file of the entire web, a representation of everything. That latent space the AI contains, some of it is trained on your content, and certainly on your industry. It knows a lot about this. You can often tell it, write this in the style of, and pick your favorite HR executive, and it knows how to do that. 

So I think you have to realize it has not just genericness in it, but specificity you have to invoke. 

Parker: One feature we began rolling out for a couple of pilot companies, which they're very excited about, is capturing the best practices of your top performers. Nadia will actually go interview top performers: how would you handle this situation? How would you give this kind of advice? And she builds a bank of, exactly as you described it, best practices that everyone's coach can access. That is very powerful, because it's not just the knowledge in the rubrics and the reports; it's the knowledge in the top performers' heads about how they try to help with situations specific to a company. And exactly as you said, the more context an LLM can access, the richer the answer is going to be.

So I think there's an element of that: how do you pick your best practices, curate them, and then make them available to the AI instances people have access to?

Ethan: Yeah. I think curation ends up being a really important point in all of this: you don't push the responsibility off to others; you think through these issues yourself. Curation is one of the most important things. I don't think people understand enough about how AI provides abundance. You don't need 10 ideas. Ask for a thousand ideas. Ask for 500. Seventy-five slogans for this. I like slogan 12, give me 17 variations on it, I don't like the first paragraph. So curation, taste, all of that matters a lot, and at the executive level, there's a lot of that available.

So my wife is probably one of the best prompt engineers on the planet. Google uses her prompts as the gold standard to compare their new models against. But she's never coded a day in her life. She has a doctorate in education and worked in HR and training. That's her background. And it turns out that if you're really good at theory of mind, if you're good at understanding what someone might be confused about and what they're good or bad at, if you're good at breaking down tasks into steps, if you're good at troubleshooting when something goes wrong, if you're good at thinking through the kinds of problems someone might mentally have and why they might be confused about a topic, you're going to be good at AI.

I think one thing I like about the approach that you and a few other organizations are taking is thinking deeply about one area, and realizing that while the models are very smart, they need scaffolding and help to accomplish something, because we don't yet have broad-based expertise in this.

33:04  A 3-Part Model for the Changing Nature of Work

Parker: I know you've talked to a lot of C-suites. What are some of the things that resonate most with these leaders when you have these conversations?

Ethan: So I think people are excited by the idea that this is a way to change the nature of work and change the game. Leaders who are seeing this are realizing, wow, there are advantages beyond ROI. The thing that worries me most is when people start from a pure ROI perspective, because that inevitably leads to the same problem people have always had: technology used purely to cut costs. What excites me is when people start to realize, wow, this changes what we can do. It changes how we serve customers. It changes what we can provide to people. It's not a one-to-one replacement for a person. It is an expansion. It lets people do the impossible.

I have a sort of three-part test that I give people for their internal AI use. First: what thing that was important to you have you realized AI now does for everybody on the planet, so you no longer need to do it? If you don't have anything in that category, you've made a mistake; there is something you thought was valuable that's no longer valuable. What have you jettisoned? Second: what impossible thing are you doing now that you couldn't do before? And third: what are you building that doesn't work yet, but might work as AI gets better? If you don't have answers to those three questions, I don't think you're far enough along yet.

Parker: It's a fascinating future, and I imagine the outlook will change every six months. Ethan, I just want to say thank you. This was such an energizing and invigorating conversation. I know that many of the CHROs in the audience are going to find it really thought-provoking, and thank you for taking the time.

Ethan: Great. This was a lot of fun. Thank you.

HR Is Now R&D

In this powerful conversation with Ethan Mollick, Professor of Entrepreneurship at Wharton and best-selling author of Co-Intelligence, one thing became clear: AI is already reshaping work, organizations, and leadership, and HR is now the R&D lab for that change.

As companies race to adapt and adopt, Ethan argues that HR leaders have an unprecedented opportunity (and responsibility) to guide the workforce through the AI transition. That requires urgency, but it also requires a clear vision for how to bring AI into the workplace. And the only way for leaders to set that vision is real, hands-on experience with frontier models and AI work coaches like Nadia.

Ethan Mollick

Professor of Entrepreneurship, The Wharton School

HR Is Now R&D

AI is reshaping the workforce.

The leaders of the Fortune 500 are already moving.

At the 2026 AI & the Workforce Summit, we brought together leading AI researchers and the most forward-thinking HR executives across the Fortune 500 to answer one question:

How will AI transform performance, leadership, and the way work gets done?

Three themes became clear.

01 | AI Agents Are Entering the Workforce Faster Than Expected

AI adoption inside enterprises is accelerating rapidly. AI is no longer a side experiment. It's becoming embedded into how decisions are made, how work is executed, and how performance is managed.

“It's easy to be a doomsayer when the rate of change is so exponential, but because of its power of hyper-personalization, AI can actually help us connect with what is best in us.”

Arianna Huffington

CEO & Founder, Thrive Global

“When we think about AI, instead of thinking about how we can do things 10% faster or better, we try to think about things that we can do that we've never been able to do before.”

Aria Finger

Chief of Staff to Reid Hoffman

Highlights

02 | AI Coaching Is Transforming What HR Can Do Across the Fortune 500

The largest companies are deploying AI at scale. Nearly 100 Fortune 500 companies are using Nadia and discovering that AI coaching isn't just a tool; it's infrastructure for performance.

"There is no judgment with an AI coach. Nadia helped a factory manager think through alternatives at 3am in the morning."

Melissa Werneck

fmr CHRO, Kraft Heinz

"The institutional knowledge that Nadia absorbs and learns and then feeds back is going to be a huge game changer for our capabilities across managers. It already is."

Holly Tyson

Chief People Officer, Cushman & Wakefield

"A 1% increase in stores is worth $1.5 billion in topline sales. That's why we got excited about bringing Nadia to the frontlines."

Tim Hourigan

fmr CHRO, The Home Depot

Highlights

03 | The Transformation Starts with Leaders

Technology doesn't transform organizations. Leadership does, and the most innovative HR leaders are putting AI directly into employees' hands, rethinking workflows, and helping their workforce adapt to a new human+AI era of work.

"One of the biggest problems with AI is it gets put under IT a lot. This is not an IT solution. HR is R&D now."

Ethan Mollick

Associate Professor; Co-Director of the Generative AI Labs

"There was an aha moment where we also realized this is actually taking away the fear of AI and agents for our entire workforce. They are loving it."

Jordana Kammerud

fmr CHRO, Corning

“The people doing the reps are the people who are jumping up from low or medium performers to high performers. Power users of Nadia were 28% more likely to move up a rating category. And when managers are power users, the people on their teams are two times more likely to be users of Nadia.”

Jennifer Carpenter

VP, Global Head of Talent at Analog Devices

Highlights


EXPLORE THE SERIES

Explore the Full 2026 Summit Library

Highlights from Past Summits

2,000+ HR Leaders

27M+ Employees Represented

800+ Companies

Speakers

Our Summits bring together the leading AI minds and most innovative HR leaders.

Aria Finger

Chief of Staff

Reid Hoffman

Kerry O'Keeffe

Senior Director Employee Growth & Learning Enablement

Gilead

Tina Mylon

Chief Talent and Diversity Officer

Schneider Electric

Jennifer Carpenter

VP, Global Head of Talent

Analog Devices

Clea Kanelos

Global Director, People & Culture

UPS

Scott Belsky

Author of "The Messy Middle" and "Making Ideas Happen"

Prasad Setty

Founding VP, People Analytics

Google

Tim Bartl

CEO

CHRO Association

Neha Parikh Shah

Director-Workforce AI & Org Strategy

Microsoft

Jordana Kammerud

fmr SVP & Chief Human Resources Officer

Corning

Diana Scott

U.S. Human Capital Center Leader

The Conference Board

Ethan Mollick

Associate Professor; Co-Director of the Generative AI Labs

The Wharton School

Holly Tyson

Chief People Officer

Cushman & Wakefield

Larry Emond

Senior Partner

Modern Executive Solutions

Alan Murray

President

The WSJ Leadership Institute

Arianna Huffington

CEO & Founder

Thrive Global

Amy Reichanadter

Chief People Officer

Databricks

Lindsay Pattison

Former Chief People Officer

WPP

Diane Gherson

Former CHRO, IBM; Board member

The Kraft Heinz Company

Melissa Werneck

Former Global Chief People Officer

The Kraft Heinz Company

Tim Hourigan

Former EVP of Human Resources

The Home Depot
