Making Managers Excellent
Video Transcript
Key Points
Making Managers Excellent: Lessons from Google's Project Oxygen and Project Aristotle
Prasad Setty — Former Vice President, People Analytics and Compensation, Google. Prasad founded Google's people analytics team and was the principal architect behind both Project Oxygen — which proved that managers matter — and Project Aristotle, which identified psychological safety as the defining driver of team performance. He has spent decades at the intersection of behavioral science, data, and organizational design.
Parker Mitchell — Co-Founder and CEO, Valence. Parker leads Valence, the company behind Nadia, an AI coach deployed across dozens of Fortune 500 organizations to support leadership development at scale.
Prasad Setty helped build two of the most influential studies in modern management: Project Oxygen, which proved people managers have a measurable impact on team performance, and Project Aristotle, which found that psychological safety — not talent or composition — is the primary driver of team success. In this conversation with Valence CEO Parker Mitchell, Prasad connects those foundational findings to where AI is taking people analytics, knowledge work, and leadership development next.
Key Takeaways
- Project Oxygen proved managers matter — with data. Google originally believed managers did more harm than good and even experimented with removing middle management entirely. Project Oxygen set out to disprove their value but found the opposite: teams with the best managers had higher performance and lower attrition. Critically, the research also showed that great management is learnable — it comes from behaviors and practices, not innate traits.
- Team dynamics beat team composition. Project Aristotle analyzed 400 variables across Google's engineering and sales teams over 18 months and found that how a team interacts consistently outweighs who is on the team. Psychological safety — the belief that you can take risks without fear of punishment — emerged as the single most important factor in team success.
- AI unflattens people analytics. Traditional people analytics reduces each person to 10 to 30 data points. AI changes that by engaging people in natural language, surfacing rich context about how they work, who they work with, and what situations they're navigating — dimensions that structured data simply cannot capture.
- The future of knowledge work is about judgment under uncertainty. As AI handles more tasks with limited degrees of freedom, the work that remains for humans will involve navigating complexity, making decisions with incomplete information, and applying skills in situations where outcomes are genuinely uncertain. That is where development investment matters most.
- Measure experience and equity before efficiency. Prasad's framework for evaluating AI tools uses four Es — effectiveness, efficiency, experience, and equity — prioritized in practice as experience and equity first, then effectiveness, with efficiency last. Organizations that jump straight to efficiency metrics tend to generate resistance. Starting with experience and equity builds the adoption and trust needed for long-term impact.
- Additive AI lands better than subtractive AI. AI applications that help people be better at their jobs generate far more organizational acceptance than those framed as cost reduction. Every day an organization delays deploying additive AI tools is a day its people are missing learning and development opportunities.
Questions This Session Answers
What did Google's Project Oxygen find about managers?
Project Oxygen set out to prove that managers at Google did more harm than good — a widely held belief in the company's early engineering culture. Instead, the research found that the best managers had teams with meaningfully better performance and lower attrition than the rest. Equally important, the study showed that great management is not a fixed trait but a learnable set of behaviors — anyone can become a better manager by following specific practices.
What is Project Aristotle and what did it find?
Project Aristotle was a Google study that examined 400 variables across engineering and sales teams to identify what drives team success. After 18 months of research, the findings were clear: team dynamics matter more than team composition. Who is on the team matters far less than how the team interacts. Psychological safety — the belief that team members can speak up, take risks, and make mistakes without fear — emerged as the single strongest predictor of team performance.
How does AI change people analytics?
Traditional people analytics collects 10 to 30 data points per person — survey responses, HR system records, performance scores — and flattens a complex human being into a small set of variables. AI changes this by engaging people in natural language conversation, which surfaces rich context about the situations they're navigating, the teams they're working in, and the specific challenges they face. This gives organizations a far more complete picture of performance than structured data alone can provide.
What is the future of knowledge work as AI advances?
Much of today's knowledge work involves applying skills in situations with very few degrees of freedom — predictable tasks that AI will increasingly be able to handle. The knowledge work that remains for humans will involve navigating high uncertainty, making good decisions with many competing options, and applying judgment in situations where outcomes are genuinely unpredictable. Developing that judgment — through tools like AI coaching and better simulations — is where organizations should be investing now.
How should organizations measure the ROI of AI leadership tools?
Prasad Setty's framework for evaluating AI tools prioritizes four factors in order: experience, equity, effectiveness, and efficiency. Organizations that jump immediately to efficiency metrics tend to generate resistance because employees feel reduced to output metrics rather than helped to grow. Starting with whether the experience is good and whether access is equitable builds the adoption and trust needed for long-term impact — and efficiency gains follow naturally from that foundation.
Why do people managers remain so important in an AI-powered workplace?
Even with the rapid advance of AI tools, people managers remain the most important leverage point in any organization of meaningful scale. The lived experience of every employee — whether they feel supported, challenged, and developed — flows directly through their manager. Investing disproportionately in making managers better, through tools and coaching that help them exercise better judgment, is still the highest-return activity available to HR and talent leaders.
Full Session Transcript
What led Google to invest so heavily in understanding managers and teams through Project Oxygen and Project Aristotle?
Parker: Prasad, we are very grateful that you flew in from California yesterday to join us. And I know that many of the folks here in this room, if they're not familiar with you personally, I'm sure they are familiar with your research. I was first exposed to it in The New York Times Future of Work magazine that came out in 2016. But you were a driver behind Project Aristotle and Project Oxygen. Can you tell us a little bit more about why you and Google invested so much in trying to understand managers, leaders, and teamwork?
Prasad: Thank you, Parker. Great to be here. It's always fun talking about Oxygen and Aristotle, even to this day after several years. At Google, there was a notion in the early days that it was the place where convention went to die. And there was a strong belief — and this will be anathema to this particular audience — that managers did more harm than good, that they would become bureaucrats, that they would stand in your way, that they would slow down innovation. In a company where innovation was the lifeblood, it didn't seem like the right thing to invest in them. In fact, early on, they had all the software engineers report to the head of engineering and removed all the middle management layers. Wayne Rosing, who was the head of engineering, inherited all of them. Wayne retired soon after that, so I don't think it was a great experiment — but it was a sentiment that really persisted.
With Project Oxygen, we wanted to actually prove that people managers don't matter. And what we found was that they do. Teams led by the best managers performed better and had lower attrition. That was a revelation for our software engineering community because it was based on proven data from Google. And as we carried out the research, we were also able to show that you don't need managers to come in with some inherent quality of greatness. Anyone can be a better manager than they are if they follow certain behaviors and practices. That is really what we were able to codify — and that led to a ton of acceptance and engagement across the organization to invest in better managers.
Parker: It's wonderful to fight an uphill battle and then prove at the end that you're right. And it's great that the data was unequivocal in supporting that. The next version after that was teams. What is the role of teams? And I know there was probably similar skepticism on the team side that your research reversed. Do you want to share a little bit more about that?
Prasad: As a next step, I don't think it was as much skepticism about teams but an acknowledgement that we all work in teams and that is how we get work done. But there were several beliefs and heuristics about what makes for successful teams. With Project Aristotle, we broadly looked at 400 different variables across our engineering and sales teams to ask: what drives team success? These input variables were broadly in the bucket of either team composition — who is on the team — versus team dynamics — how does the team interact? What we found at the end of 18 months was broadly that team dynamics trumped team composition. And particularly within team dynamics, the notion of psychological safety — which Amy Edmondson and others have studied in great depth — came to the forefront. The aha for us was that in the Google context, any team could become successful if they followed some of these principles of team dynamics. And so then we started educating people on what it takes to improve psychological safety.
It's now almost 2025 and AI is playing a huge role in leadership and team dynamics. How would you update these studies to include AI?
Parker: We are now almost 2025 where AI is playing just a huge role in all parts of this — whether it's leaders, whether it's team dynamics. How would you add to these studies if there was an AI dimension added to them?
Prasad: I think one of the big issues I have with how we think through analytics is that we flatten people. We collect 10, 20, 30 data points about them — state variables about things you know from HR systems, resumes, work history, survey responses — but we are still flattening a whole bunch of rich human experience.
With AI, you actually are able to uncover that richness. And I think that richness comes about in asking people to interact with you in language and in understanding more about the context they operate in, as well as the people and teams they interact with. When we think about performance, it's really about whether people have the skills to engage in behaviors appropriate to a particular situation with the teams they're working with. There are multiple variables at play. We didn't have the signals or the capability to understand that entire richness without AI. Now we do.
Parker: I'm hearing even more dimensions from you — it's not just the work context, but the team context, the moment in someone's career. As you say, we're unflattening the individual and adding new dimensions that sound revolutionary for people analytics. How do you see that advancing in the next few years?
Prasad: Our first instinct, coming from the people function, is to think about individual people — and we should absolutely do that. The value of humanity shouldn't be lost in the middle of this. But as it relates to organizational context, I think before we think about workforce planning and skills, we need to think about work planning. A lot of knowledge work today is about skills being applied in certain dimensions but with very few degrees of freedom. That's the kind of work people don't like to do — because it doesn't give them too many options. It pigeonholes people and all of their capacities into things that are perhaps easy for AI to do in the future.
Knowledge work itself will evolve toward skills being applied where you have many degrees of freedom and a lot more uncertainty about outcomes. Knowledge work really becomes: can you make consistently good decisions when you have multiple degrees of freedom and lots of uncertainty? That is where humans will have to excel. And that, I think, is where we as humans will need tools like Valence's Nadia to improve our capabilities.
How do you measure performance as knowledge work evolves — and is there a tension between expanding human choice and defining ROI strictly?
Parker: I'm picturing the aperture widening — more degrees of freedom, more choices that are empowering for people. And on the flip side, we're also trying to measure performance. How do you calculate ROI? Where will the impact of AI show up? Do you see any contradictions between the expansion of options and the need to define and measure performance quite strictly?
Prasad: It is going to be evolutionary. When I talk about performance, I think about four Es: effectiveness, efficiency, experience, and equity. When you put in these kinds of systems, first you want to focus on making the experience really good and making sure there are no inequities. You get people starting to use them, and that becomes your first activity measure of performance. But then over time, what you really want to see is whether people are becoming more effective at their jobs. That's where the conversation about augmentation rather than subtraction is going to be important. The efficiency measure — I'm sure all the CFOs are excited about it — but I think that becomes the least important of all of these.
Parker: So there are certain phases as new technologies are introduced, and the measure of effectiveness should change depending on where it is. Early on, it's experience and uptake. Later in its evolution, it's efficiency and harder numbers. Is that right — that there should be an evolution?
Prasad: I think so. Otherwise, the criticism I typically have is that people very quickly jump to the efficiency side, and it just becomes something organizations resist because they don't want to be seen as just cranking the wheel faster. They want to actually be better at their jobs and arrive at much better decision-making.
I really loved what Tim at Delta was talking about in terms of the kinds of signals they're capturing, because one of the signals that is very hard to capture but will be useful is: what is the quality of decision-making happening across your manager teams or your leadership teams? How do you see if that is consistently getting better because of the application of this technology? That, to me, is going to be the real productivity improvement in the long run.
Parker: I'm assuming Google is a company of engineers — very numbers-based. How was the point of view that you don't have to jump to efficiency, you don't have to jump to ROI right away — how was that received within the broader Google community?
Prasad: Google was very expansive in its nature. The thought for many years was that innovation trumped efficiency, and so anything you could showcase as helping people have the autonomy to think creatively — critical thinking, imagination — was always seen, at least a long time back, as more important than efficiency.
Are there early glimpses of knowledge work already changing — where the return on judgment is starting to show up?
Parker: When knowledge work evolves and the return on judgment goes higher, are you seeing any early glimpses of types of work where that change is already happening?
Prasad: One of the things I'd love to see in terms of whether these capabilities are improving is in using AI for better simulations. In the learning community, it's well established that as adults we learn by doing — which is why new job experiences are so much more valuable than classroom education. But what do you actually get from experiencing something? You get to experience multiple contexts and you get to repeat your behaviors in different interactions. If you could simulate that, and if you could accelerate that kind of development using AI, you're furthering everyone's development.
If you look at the sports analogy, in Formula 1, race car drivers sit in simulation engines before they get onto the racetrack — that's how they practice. What is that equivalent for a daily manager? The equivalent of a game in sport is, in the corporate world, meetings. We all go through millions of meetings in our careers. Are we getting better in each meeting throughout our careers? And if not, why not — and how can AI help us be better?
Parker: I'm tying together a few threads here: if AI helps us be more productive in our jobs, it can save some time. And if that time is freed up, it gives us more time to practice and refine these skills — and then bring better judgment to bear. Rather than just being busy for 40 hours a week, we can be thoughtful for 10 hours a week and high-value for 30 hours a week. There's a world in which the efficiency gains can free up time for these refinements in judgment.
Prasad: You summarized it beautifully. That is the arc that I think we all want to get down to. You're even seeing it in the development of large language models. We'll always think about latency — how quickly is the technology responding? When you typed something into Google Search, you wanted the latency to be as small as possible. But now you have reasoning models like OpenAI's o1, where the thought is to spend as much time as needed on inference so that the model can reason better. That is exactly the equivalent on the people side too — how do we get to slower, better, good judgment and therefore better decisions in the long run?
What's one piece of advice for HR leaders trying to act on all of this — even in the fog of uncertainty?
Parker: You and I have had a chance to interview a number of CHROs and other talent leaders over the past six months. One of the things they always say is they just don't have time — even those embedded in Silicon Valley don't have time to think through how the technology is changing. What's one piece of advice you would leave for folks in the audience about the importance of doing something even in that imperfect fog?
Prasad: It is certainly a hard challenge being in any senior operating role, particularly on the HR side. But as Bill said right from the beginning of this morning, this is on top of every board conversation. I'm sure everyone is thinking through what the right activity is.
I would go down to a couple of fundamental principles. After all these years and with all of this technology, I still believe that people managers are an incredibly important leverage point for any organization. The moment you have more than 30 or 40 people managers in your organization, that has got to be the place where you invest disproportionately more of your resources and attention — because that is what is going to define the lived experience of your employees, and therefore you'll get the best return from that.
And there are two ways to think about AI applications right now. One is: can AI do things that you don't want to do, or that your people don't want to do — a subtraction-oriented application of AI. Or second, you think of it as an addition-based view: can I deploy AI in a way that is going to help my people be better at their jobs, grow, and learn better? Valence's Nadia certainly falls in the latter category. I don't think you should choose only additive technologies — you have to look at subtractive ones too. But additive technologies are likely to land better with your organization because people don't feel like they're being displaced. And any moment you're waiting before you deploy these kinds of additive technologies, you're robbing your people of learning opportunities. That, I think, is a waste of their potential.
Parker: The urge to move fast despite the uncertainty — we've heard that over and over again. Thank you for joining and sharing those thoughts with us. We really appreciate it.
Prasad: Thank you, Parker.