Prasad Setty led Google's original research, Project Oxygen and Project Aristotle, into what makes leaders and teams effective. In this fireside chat from Valence's AI & the Workforce Summit, he discusses the power of AI to provide personalized leadership development at scale.
Parker Mitchell: Prasad, we are very grateful that you flew in from California yesterday to join us. And I know that many of the folks here in this room, if they're not familiar with you personally, I'm sure they are familiar with your research. And I was first exposed to it in the New York Times Future of Work magazine that came out, I think it was 2016.
But you were a driver behind Project Aristotle, Project Oxygen. Can you tell us a little bit more about why you and Google invested so much in trying to understand managing, managers, leaders, and teamwork?
Prasad Setty: Thank you, Parker. Great to be here. And yes, it's always fun talking about Oxygen and Aristotle, even to this day, after several years.
At Google, there was a notion in the early days that that is the place where convention went to die. And there was a strong belief that, and this will be anathema to this particular audience out here, but there was a strong belief that managers did more harm than good. That they would become bureaucrats, that they would stand in your way, that they would slow down innovation.
And so in a company like Google, where innovation was our lifeblood, that it didn't seem like the right thing to invest in them. In fact, early on in Google's experience, they had all the software engineers report to the head of engineering. They removed all the middle management layers. And so Wayne Rosen, who was the head of engineering, he inherited all of them. Wayne retired soon after that, so I don't think it was a great experiment. But it was a sentiment that really persisted.
And so with Project Oxygen, we wanted to actually prove that people managers don't matter. And what we found was that they do: teams with the best managers performed better and had lower attrition.
And so that was a revelation, particularly for our software engineering community, because it was based on proven data from Google. And as we carried out the research, what we were also able to showcase is that you don't need managers to come in with inherent qualities of greatness; anyone can be a better manager than they are today if they follow certain behaviors and practices. And so that is really what we were able to codify, and that led to a ton of acceptance and engagement across the organization to invest in better managers.
Parker Mitchell: I mean, it's wonderful to fight an uphill battle and then to prove at the end that you're right. And it's great that the data was unequivocal in supporting that.
So the next version after that was teams. So, what is the role of teams? And I know there was some probably similar skepticism on the teams side that your research reversed. Do you want to share a little bit more about that?
Prasad Setty: Sure, as a next step, I don't think it was as much skepticism about teams, but an acknowledgement that we all work in teams and that is how we get work done.
But there were several beliefs and heuristics about what makes for successful teams. And so with Project Aristotle, we broadly looked at 400 different variables across our engineering and sales teams to say, what is it that drives team success? And broadly, these input variables were in the bucket of team composition, who's on the team for instance, versus team dynamics, how does the team interact?
And what we found at the end of 18 months was, broadly, that team dynamics trumped team composition. And particularly within team dynamics, certain factors came to the forefront, and this certainly isn't stuff that we came up with: the notion of psychological safety, which Amy Edmondson and others have studied in great depth. So again, the aha for us was that, at least in the Google context, any team could become successful if it followed some of these principles of team dynamics. And so then we started educating people on what it takes to improve psychological safety, etc.
Parker Mitchell: So this is in the late, sort of, 2015 to 2020 time period. We are now almost 2025, where AI is playing just a huge role in all parts of this, whether it's leaders, whether it's team dynamics. How would you be thinking, how would you add to these studies if there was the AI part of the equation that was added to it?
Prasad Setty: And I think we have touched upon this throughout today. In your opening, you talked about context, you talked about personalization. And certainly I'm a big believer in analytics. I founded Google's People Analytics team. I've been in this space for a long time. But with experience, we start recognizing some of the things that are limiting as well. And one of the big issues that I have with how we think through analytics is that we flatten people, right? We collect 10, 20, 30 data points about them, or in some cases maybe many more than that, whether it's state variables about things that you know about them from HR systems, or from their resumes, from the work they've done, or from survey responses. But we're still flattening a whole bunch of rich people experiences.
And I think with AI, you now actually are able to sort of uncover that richness. And I think that richness comes about in asking people to interact with you in language and to understand more about the context that they operate in as well as the people in the teams that they interact with. So it is that dimension that I think is new because, when we think about performance, it's really about how people have the skills to engage in behaviors that are appropriate to a particular situation with the teams that they're working with, right? And so there's like multiple variables out here. And so I don't think we had the signals or the capability to understand that entire richness without AI, and now we are able to.
Parker Mitchell: It sounds similar to what Anna was talking about, with the idea of sort of skills being a little bit too one dimensional, and that the addition of context, the skill application in the context is going to be important. And I'm hearing even more dimensions from you. It's not just sort of the work context, but it's the team context. It could be the moment in someone's career. And so, as you say, we're unflattening the individual, and we are adding new dimensions. That sounds revolutionary for people analytics. How do you see that advancing or evolving in the next few years?
Prasad Setty: One of the thoughts for me, we are from the people function, so our first instinct is to think about individual people, and we should absolutely do that. I absolutely agree with the humanness and the value of humanity that shouldn't be lost in the middle of this. But as it relates to organizational context, I think, before we think about workforce planning and skills, we need to think about work planning. I think knowledge work today, a lot of knowledge work is about skills being applied in certain dimensions but with very few degrees of freedom. And I think that's the kind of work that people don't like to do because it doesn't give them too many options, right? Like, you're just like pigeonholing people and all of their capacities into things that are perhaps easy for AI to do in the future. It wasn't possible with existing technology, but with AI that will be possible.
And so I think knowledge work itself will evolve toward skills being applied where you have many degrees of freedom with work, and in instances where you have a lot more uncertainty about outcomes. And so then knowledge work really becomes about, can you make consistently good decisions when you have multiple degrees of freedom and lots of uncertainty? That is where humans will have to excel. And that, to me, is what will then result in the compound interest effect of good decision making. And that, I think, is where we, as humans, will need to use tools like Valence's Nadia, etc., to improve our capabilities.
Parker Mitchell: I mean this is fascinating, I'm hearing this concept of sort of the aperture widening, and more degrees of freedom in the choices that people are going to have, and that that's empowering for them. That the horizon that they could see is going to be a little bit shorter because there's so much complexity. And then on the flip side of it, we're also trying to measure performance. We've talked a lot on this stage today about how do you calculate ROI? How are you going to assess performance? Where will the impact of AI show up in that? Do you see any contradictions between those, or how will they be married? This sort of choice and expansion of options and the need to define performance and measure it, you know, quite strictly.
Prasad Setty: It is going to be evolutionary. I think when we talk about performance, I think about four Es: effectiveness, efficiency, experience, and equity.
Parker Mitchell: Effectiveness, efficiency…
Prasad Setty: …experience, and equity.
Parker Mitchell: Experience and equity.
Prasad Setty: And I think when you put in these kinds of systems, first you want to focus on, as many of the speakers today have talked about, make the experience really good, make sure there are no inequities in them. And so you get people starting to use them, and that becomes your first sort of activity measure of performance.
But then over time what you want to really do is to see if people are becoming more effective at their jobs, right? And so that is where a lot of folks were talking about augmentation, rather than subtraction, I think is going to be important. And then the efficiency measure. I'm sure all the CFOs are excited about it, but I think that becomes the least important of all of these.
Parker Mitchell: That's interesting. So I think what I'm hearing is that there's certain phases as new ideas are introduced or new technologies are introduced. And the ROI, or the measure of effectiveness of a new intervention, of an experiment that we've talked about should change depending on where it is. Is it early on, and is it experience and uptake based? Or later in its evolution, when it's more efficiency and maybe harder numbers. Is that right, that there should be an evolution?
Prasad Setty: I think so. Otherwise I think people, the criticism that I have typically is that people very quickly jump to the efficiency side of it. And then it just becomes something that your organizations resist because they don't want to just be seen as people who are just cranking out the wheel faster. They want to really be better at their jobs and sort of result in, you know, much better decision making and so on. And so I really loved what Tim at Delta was talking about in terms of the kinds of signals they're capturing for their sort of the fourth category of work. Because I think one of the signals that is very hard to capture, but it'd be useful, is what is the quality of decision making that is happening? Right? Across your manager teams or your, you know, your leadership teams. And so, how do you capture that, and how do you see if that is consistently getting better because of the application of this kind of technology? And that to me is going to be the real productivity improvement in the long run.
Parker Mitchell: I want to jump to that in a moment, but I'm picturing, or I'm assuming, Google is a company of engineers, very numbers based. How was the point of view that you introduced, that you don't have to jump to efficiency, you don't have to jump to ROI right away, how was that received within the broader Google community?
Prasad Setty: You know, Google was very expansive in its nature. And the thought for many years out there was that innovation trumped efficiency. And so anything that you could showcase as helping people have the autonomy to think creatively. We heard a lot about critical thinking, we heard a lot about imagination in the previous session. So anything that furthers those elements is certainly going to have a better chance of resulting in innovation. And so that was always seen, at least a long time back, as more important than efficiency.
Parker Mitchell: So you've talked about just the nature of work changing, the nature of knowledge work changing. And when it changes, as it evolves, the return on judgment is going to be higher. Are you seeing any early glimpses of types of work where that change is already happening, where knowledge work is changing?
Prasad Setty: This is one that I would love to see, but I loved hearing some of the experiences that Lesley and Jennifer and others are talking about, right? I think they are truly leading this kind of thought. And then Anna spoke about the world of athletics. One of the things that I'd love to see in terms of whether these capabilities are improving, is in using AI for better simulations. In the learning community, I think it's sort of well established that, as adults, we learn by doing. And so that is why new job experiences are so much more valuable than perhaps classroom education.
But what exactly do you get from experiencing something? You get to experience multiple contexts, and you get to repeat your behaviors in different interactions. And so if you could simulate that, and if you could accelerate that kind of development using AI, then I think you are furthering everyone's development. And so, if you look at the sports analogy, for instance, in Formula One, race car drivers sit in sims, sit in simulation engines before they get onto the racetrack. And that is how they practice.
And so what is that equivalent for a daily manager? And I think the equivalent of a game in any kind of sport, the equivalent in the corporate world is meetings. We all go through millions of meetings in our career. Are we getting better in each meeting throughout our careers? And if not, why not? And how can AI or other things help us be better?
Parker Mitchell: I mean, it's interesting. I'm tying together a few threads here, but if AI helps us be a little bit more productive in our jobs, it can save some time. And if that time is freed up, it can give us more time to do some of this practice, to be able to refine these skills, and then be able to bring better judgment to bear. You know, rather than just be busy for 40 hours a week, we can be thoughtful for 10 hours a week and then be high value for 30 hours a week. And so I think there's a world in which the efficiency gains can free up time for some of these refinements in judgment.
Prasad Setty: I think you summarize it beautifully. That is the arc that I think we all want to get down to. And you're sort of seeing that even with the development of these large language models, right? The initial versions of, you know, I've worked in technology for a long time. We'd always think about latency. How quickly is the technology responding back to your query?
And we'd always want to reduce that. When you type something into Google search, you wanted the latency to be as small as possible. But now you suddenly have models like GPT-4o, where the thought is, let us spend as much time as needed on inference so that we can reason better. And so that is exactly the equivalent on the people side too. How do we get to slower, better, good judgment, and therefore better decisions in the long run?
Parker Mitchell: You and I have had a chance to interview a number of CHROs and other talent leaders over the past six months. And I think one of the things they've always said is they just don't have time. Even those who are embedded in Silicon Valley, they don't have time to think through how the technology is changing. What's one piece of advice that you would leave for folks in the audience about the importance of doing something even in that imperfect fog that you've just mentioned?
Prasad Setty: It is certainly a hard challenge, being in any kind of senior operating role, particularly in the HR side, and there are lots of things that are important. But as Bill said right from the beginning of this morning, this is certainly on top of every board conversation. And so I'm sure everyone is thinking through what the right activity is out here.
I guess I would come down to a couple of fundamental principles that I strongly believe in, particularly for this audience. I think after all these years, and even with all of this technology and with all the pressures that we see, I still believe that people managers are an incredibly important leverage point for any organization. The moment you have more than, let's say, 30 or 40 people managers in your organization, that has got to be the place where you invest disproportionately more of your resources and attention. Because that community is what is going to define the lived experience of your employees, and therefore you'll get the best return from that. So that's got to be one of the most important use cases.
And just to echo everyone else's views out here, there are two ways to think about AI applications right away. One is, can AI be capable of doing things that you don't want to do or your people don't want to do? So it's very much a subtraction-oriented application of AI. Or the second is that you think of it as an addition-based view of AI, which is to say, can I deploy AI in a way that is going to help my people be better at their jobs and grow and learn better? And I do think that Valence’s Nadia certainly falls in the latter category.
I don't think you should choose only additive technologies. I think you have to look at subtractive ones too, but I think the additive technologies are likely to land better with your organization because they don't feel like they're being displaced. And any moment that you're waiting before you deploy these kinds of additive technologies, you're robbing your people of learning opportunities. And that, I think, is a waste of their potential.
Parker Mitchell: I mean, the urge to move fast, I think, despite the uncertainty, we've heard that over and over again. So thank you for joining and sharing those thoughts with us. We really appreciate it.
Prasad Setty: Thank you, Parker.