Virtual Highlights with Live Q&A
In this condensed version of a popular internal presentation at Microsoft, Brent Hecht, Director of Applied Science at Microsoft, dispels common AI misconceptions, including why AI isn’t all hype, why AI’s best use isn’t to replace people, and why we should be shaping the future of work instead of trying to predict it.
Jeff Dalton: Thank you very much for that introduction. It's my honor and privilege to introduce our next speaker, Dr. Brent Hecht. Dr. Hecht is a distinguished colleague of mine. He inspires both my academic research at the University of Edinburgh and the work that we do here at Valence on the applied side.
I first encountered Brent's work from the Future of Work study from Microsoft. Maybe some of you are familiar with it. And that really looked at what we could do from hybrid work and how that was transforming work in the age of COVID-19. More recently, there's been work on looking at the new future of work that's focused on the future of AI.
There was one published last year, and there's another coming next week, so look out for that. One thing that stood out to me in one of the recent reports was the role of AI in supporting critical thinking: acting as a provocateur that enhances people's ability to work effectively and to think critically.
And so he's going to be doing that for us today. He's going to play that role of that kind of provocateur to help us think about where the future of AI is going for work. I want to give you a little bit about Dr. Hecht. He's a director of applied science at Microsoft Research, an associate professor at Northwestern University, where he leads the People, Space, and Algorithms Research Group, focusing on how gen AI can be a positive influence on society.
He's a prominent figure in responsible AI, with over a decade of human-centered work and over a hundred publications in top journals. His research has helped lay the groundwork for developing equitable and transparent AI systems. His foundational work on algorithmic bias, and on methods for measuring and improving AI systems, has been essential to building useful, trustworthy, and fair AI: core pillars that guide our work on Nadia.
Brent's research has been featured in major outlets, including the New York Times, NPR, and Wired. So we're honored to have him here. He's also a founding member of the ACM Conference on Fairness, Accountability, and Transparency, a leading venue for responsible AI research, and today he'll be addressing AI misconceptions. I look forward to hearing what he has to say. Please welcome Dr. Brent Hecht. Thank you.
Brent Hecht: Hey, folks. Thanks for welcoming me here. And I'm excited to talk to you about some misconceptions about AI. And I will try this clicker now. Alright. So I joined Microsoft from Northwestern, where I was a professor, as was mentioned, in 2019. And the pitch was come, you know, invent the future of work.
I thought, great, Microsoft's a wonderful place to do that. Little did I know I would have such an amazing opportunity to hopefully help change the future for the better, thanks to really generational changes in work that happened and are happening over a five-year period. And the first was the switch to hybrid work, which is stabilizing a little bit, but it's still going on, and I'm sure you all deal with that all the time.
And then language models, which is something that I studied for many, many years, they became good enough to become practical for a number of applications and unleashed AI into the workforce in a way that folks had predicted for a long time, but it's here now. So when the first disruption happened, we all remember, I started at Microsoft in, roughly speaking, September 2019.
So by March 2020, I was already doing a slightly different job than I was expecting. One of those was trying to corral Microsoft's amazing research capacity into helping leaders at the company, our customers, and of course product leaders understand what hybrid work is, what it might be, how we can make it better.
And one thing we did in that process was develop a presentation called “The 10 Misconceptions About Hybrid Work” and deliver it all over the place. It talked about things people might've heard in the media or assumed from first principles, why the science suggests those things might not be correct, and what we might do to correct each misconception. This format is also just a really fun way to talk about a bunch of cool science, so it was a really fun presentation to give. So when the AI boom hit, we decided to put together something similar, in this case, five misconceptions.
So, maybe afterward, when we're chatting, you can nominate a couple more. There are five misconceptions in this talk, and I sadly have to tell you we won't have time to cover all of them today. This is a 45-minute talk in its full glory. If you're a Microsoft customer, I'm happy to come by and chat with you all more directly.
Even if not, I'm happy to chat as well. Today we're actually going to cover one, two, and five. So the first misconception we'll talk about is that generative AI is either going to make my workforce or my company a zillion times more productive, or it's basically useless, just hype.
The second misconception is that the best use of generative AI at my company is to replace people. That is a misconception. We are really lucky that it's a misconception, and that the science points to it being one. Of the two misconceptions we're going to skip, the first is about how text is not likely the future of computer interfaces.
And the other, close to my heart and to a lot of my research, is about how, for organizations and people that make content, generative AI can be a huge boon instead of something threatening. The full version of the talk goes over that. The fifth misconception is a carryover from the hybrid work misconceptions talk: that we can predict the future of work, and that we should spend a lot of time trying to predict it. We'll talk about that at the end.
So, let's jump in with the first misconception: that as soon as I install Copilot, I'm going to be a zillion times more productive, my organization's going to be a zillion times more productive, or else it's just hype and I can ignore it. And the reason this is a misconception is that most of the evidence is pointing toward generative AI being what's known as a general-purpose technology. And we know a lot about how general-purpose technologies affect productivity for individuals, in organizations, and across the whole economy.
We know a lot because the analogies to other general-purpose technologies are proving out to be, at least as the evidence suggests right now, somewhat accurate. Other general-purpose technologies include electricity, the automobile, and the internet. And simplifying things really dramatically, and if there's an economist in the room, I apologize, but I think you'll agree this is directionally correct.
Simplifying things dramatically, general-purpose technologies affect productivity in the economy in a two-phase process. The first phase is when we get the general-purpose technology, but we have our existing complementary technologies and our existing workflows.
So, for example, when electricity became widely available in the United States, most manufacturing was steam-powered. And I don't know if we have any mechanical engineers in the audience. I'm not one of them, but I think we can all agree that electricity is, we can imagine, useful for manufacturing things.
At the time, though, all the factories were laid out to take advantage of steam power. All the processes were designed to take advantage of steam power. So manufacturers looked and said, “hmm, this electricity thing seems cool, but I don't know what to do with it.” One thing they did was, instead of having someone run around and light candles to keep things going at night, they replaced those candles with electric lights, and that does increase productivity.
The former professor in me really wants to use a laser pointer here, but that's the linear part of the growth there. That's a real productivity increase, and that's what we see in phase one: incremental productivity gains. Twenty or thirty years later, which, not coincidentally, is roughly a career, or at least a generation, someone figured out how to lay out a factory, and a bunch of new technologies were developed, to take advantage of electricity in ways that radically increased manufacturing productivity.
And that's where you see that hockey-stick growth that folks want to see right away, but it's unrealistic, in almost every case, to expect that. Another really good example comes from automobiles, also a great general-purpose technology. We could take, you know, a Toyota Camry and put it in 1910, and that would be pretty impressive.
But actually, it wouldn't be all that useful. We wouldn't have roads. Just think about the number of inventions that had to be developed to implement gas stations; we didn't have gas stations anywhere we would need them. Those needed to be developed over time.
So that's a really good example of people having to invent a lot of new complementary technologies to take advantage of that general-purpose technology. So you need those changed workflows and those new complementary technologies to unlock that hockey-stick growth.
I mentioned 20 to 30 years. Historically, that's about right in terms of how long it takes. There's reason to expect it'll be a lot shorter this time. One reason is that a lot of the infrastructure we need to build out, particularly those complementary technologies, is digital infrastructure, and we have much of it already. Some colleagues at Microsoft Research that I work with very closely suggest that maybe five years might be a reasonable thing to expect before we see this hockey-stick growth.
The other thing to mention is that there's an argument that, because of the way generative AI tools are built (as a lot of folks know, they take data from the internet and are inherently dependent on it), this is actually the moment of the internet's hockey-stick growth.
So it's actually the internet that's the general-purpose technology. But that's a conversation over cocktails. I want to go back to that first phase, though, and tell you why I'm so excited about it and why we shouldn't ignore it. One thing I do at Microsoft is help coordinate tons of studies that look at how much Copilot, specifically, can increase productivity on specific targeted tasks.
And the results there are pretty good. That makes it easy for a scientist like me to come talk to folks like you about them; there aren't a ton of complex stories in those results. Almost all of these are lab studies, and they're showing roughly 25-50% productivity gains across the board. There are some exceptions, but that's pretty good for a phase-one productivity increase.
And it's not just Microsoft putting out these results. OpenAI has a bunch of studies, and we have partners at Harvard putting out studies with similar results. The good news is this: even if you assume that these tasks apply to only about 2% of what people do every day, which is conservative (it's not 20% or 30%), and you take the lower end of that range, a 25% productivity increase, for most companies you're going to be creating enough top-line or bottom-line value to be able to say, hey, I think we're selling a value-creating product. Which, again, as a scientist, makes me feel good and comfortable talking about these things.
So even though we're not going to be in phase two for a bit, those phase-one productivity gains are important and valuable for companies, and companies that leverage them are going to be more successful than companies that don't.
The second thing I'd flag is a report we put out just two or three months ago. Another thing my team does is put out reports to help people sift through all the science coming out about the ways work is changing, first hybrid and now generative AI. We put out our second AI and Productivity Report, and it contains the first public mention of an incredible study some of my colleagues at Microsoft have done. Sixty customers signed up with them to do a randomized controlled trial of Copilot being deployed in their organizations. This is almost medical-quality information: they randomized the seats that got access to Copilot initially.
And we're able to see how work changes by comparing people who had access to Copilot to people who were randomly selected not to have access, and we're seeing really good phase-one-style productivity increases, like we would expect from the lab studies. We're seeing 10% more document creation and an effect of roughly the same magnitude on email time. Meetings are really interesting: some companies see a significant drop in meetings, some see an increase. Looking into the increase, it's that Teams Copilot is becoming so effective that people are using meetings to, for instance, write documents.
So, you know, hey, let's talk about this memo together, and let's have Copilot write the first draft of that memo. So, pretty cool stuff, and very impressed with the study that my colleagues have done here.
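[Editor's note: As a toy illustration of why the randomization in a study like this matters, here is a minimal sketch of estimating a treatment effect by comparing randomly assigned groups. Every number and name below is invented for illustration; this is not the study's actual data or code.]

```python
# Hypothetical sketch: estimating a treatment effect from a randomized rollout.
# All quantities are simulated; the "+10% if treated" effect is baked in so we
# can check that a simple difference in means recovers it.
import random
import statistics

random.seed(0)

# Simulate 200 employees; half are randomly given the tool (the "treated" seats).
employees = list(range(200))
random.shuffle(employees)
treated = set(employees[:100])

def docs_created(emp_id: int) -> float:
    """Invented outcome: weekly documents created, with a 10% bump if treated."""
    base = random.gauss(10, 2)
    return base * (1.10 if emp_id in treated else 1.0)

outcomes = {e: docs_created(e) for e in employees}
treat_mean = statistics.mean(outcomes[e] for e in employees if e in treated)
ctrl_mean = statistics.mean(outcomes[e] for e in employees if e not in treated)

# Because assignment was random, the two groups are comparable on average,
# so the difference in means estimates the average treatment effect.
lift = (treat_mean - ctrl_mean) / ctrl_mean
print(f"Estimated lift: {lift:.1%}")
```

With random assignment, no before/after comparison or matching is needed; the control group stands in for what the treated group would have done without the tool, which is what makes the result "almost medical-quality."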
Okay, moving on to the second misconception, which is an important one, and one that I'm sure a lot of people are thinking about for their organizations, their personal lives, themselves, and their kids.
When people first see generative AI, they often think, “wow, this is going to replace this set of jobs, my job, this organization within my company.” And I've anecdotally found this to be a very widely held misconception. But it does run against a key principle in the literature on how technology has changed productivity and, quite frankly, improved living standards.
And broadly speaking, that literature rolls up to betting long on human labor. It's been a good bet for the last 300 years, since the beginning of the industrial revolution. Specifically, betting that human labor, the time of humans doing work, will become more valuable with advancing technology has been a really good bet.
And those who placed that bet have generally won. And those who've placed a short bet, or assumed that labor-saving technologies are mostly substitutional, to use a slightly technical term, have, broadly speaking, no pun intended, come up short. So, my colleague at Stanford, Erik Brynjolfsson, wrote a great piece.
I'd really recommend folks check it out. It's as much a much-needed cultural critique of my field as it is a discussion of the topics associated with this misconception. It's called “The Turing Trap,” and it critiques computer science for using the Turing test, which many of you have probably heard of, as the goal of the entire field.
It's an inherently substitutional test: how can you trick someone into thinking that they're talking to a human instead of a computer, rather than thinking about new, incredible things that humans and computers can do together? One anecdote he has in the essay, which I find very powerful, is about an ancient Greek myth (it must be a long-tail one, because I don't remember learning about it in school) in which someone invented a magic device that could do anything a human can do.
And he thinks about, okay, what would happen if that were deployed? Well, no one would have any work to do, but we'd still be stuck with latrines. We wouldn't have vaccines, these types of things. All of those technology-plus-people quality-of-life advances (another way to understand quality-of-life advances in this context is as productivity improvements) wouldn't have happened, so we'd be stuck in that era.
And so, you can imagine, if you just replace all the people at your company with generative AI, you'll have the same potential; simplifying things, you'll be selling the same amount of stuff. If you take generative AI and make your people more powerful, you might be selling 100 times, 200 times, 300 times more.
So, for those of you who are stock market folks (we are in New York): if you take a short position on human labor, the most you can save is the cost of human labor. If you take a long bet on human labor, the potential upside is infinite. So, implications for leaders like yourselves. Many people's first instinct with a labor-saving technology is to ask, who can I replace, how can I reduce costs in my company? If you have that instinct, it's okay; don't feel bad about it. But shift that first question to: how can I make my people more productive using this technology? How can I do more things, and different things, than I used to do? There are some complexifiers here, to use a term from a former president.
The first is the nature of demand. Let's take software engineering as an example. Many software engineers are concerned right now that these technologies will replace them. I'm less concerned about that, because my boss's boss is in charge of all of Microsoft's productivity tools.
And he never says, can you make the same number of tools, at the same quality, at the same speed? He wants more tools, better quality, faster, right? And that is an implicit statement of a lot of unmet demand for software engineering. So if these tools make us 100%, 1,000% more productive, there's still a lot of demand that will, roughly speaking, absorb that productivity gain, right?
Where the demand is capped, things get more complicated. A lot of people talk about customer service as an example; I'm less convinced about that. But if there is a fixed demand for customer service at your firm, and these tools do make someone 50% more productive, then labor substitution might be something you would consider.
However, having dealt with some customer service lately, particularly within the insurance industry, I can say there's a lot of unmet demand for high-quality customer service, at least from this customer. So we can think about how to improve the quality of the customer service instead of laying people off, and that could present some very significant business gains for you folks.
The bigger caveat, I think, is actually on this next slide: everything I've talked to you about is industrial revolution economics, and the goal of many people and many large organizations in my field, most notably OpenAI, is actually to end those economics.
We don't know if they'll be successful. They have not been successful yet, but the goal there is effectively to create a technology that is so productive, that no amount of unmet demand really matters. And this is very much an explicitly stated goal in OpenAI's case. Their charter, the second paragraph, says their goal is to create an AI technology that can do most people's jobs better than they can.
That's how they define AGI. It's the implicit goal of a lot of people in the AI field as well. It's a goal that we should be discussing more as a populace, because there are a lot of implications from that that we don't have time to talk about now. But if they are successful, then a lot of what I said is not as relevant. But they haven't been successful yet.
Alright, let's jump into the fifth and last misconception, which is that we can accurately predict the future of work, and that we should spend a lot of time doing so. This is where I tip my hat to you folks who know how to manage people. I'm a computer scientist, and my computer science PhD advisor would frequently tell me, “Brent, the social sciences are the hard sciences.” We know much less about how people work than we do about how computers work, and people in my field sometimes forget that and make predictions that turn out not to be correct.
So on the right side of this slide, you can see a whole bunch of very, very famous computer scientists making very, very inaccurate arguments about when we'll have, effectively, AGI, roughly speaking, technology that could do anyone's job, as per what I was just saying.
And so we have to be careful about listening to those and making decisions based on those, in part because oftentimes they're either subconscious or conscious attempts at self-fulfilling prophecies. So we have to be careful about, oh, someone said this is the future, then everyone says, okay, this is the future, and they make it the future.
So instead of that, I'll actually skip ahead a bit here and suggest that, whether you're a business leader like you folks or at a very large tech company like mine, our energies are better spent trying to create a future that we want rather than trying to predict it. So when I hear people at Microsoft asking, “What's the market going to be? What's the technology going to look like in 2025? What's the market going to look like in 2030? Will we have AGI by date X?”
Let's think instead about what we'd want if we had that, or what type of technology we'd want to create. Our time is much better spent doing that. We are very poor at understanding one person, let alone the complex dynamics of how a technology and a society will work together.
Beyond that, let me see if I know how to move back here. I don't, so I'll make an attempt to speak over the slides we just skipped through. I do want to say this doesn't mean we just ignore planning. You folks are business leaders.
You have to plan as well. So, first and foremost, we want to be creating the future. But what do we do when we need to plan ahead? This is again where I turn to your expertise. Business leaders know how to handle uncertainty: you diversify. So instead of making a big bet on a single potential outcome (AGI will arrive by 2028, AGI will never arrive, these types of things), diversify your expectations.
Hopefully the presentation today walked through some of the higher probability potential outcomes. But planning for a low-disruption outcome, a medium-disruption outcome, and a high-disruption outcome is reasonable for your organization, and actually for your personal lives as well.
The long version of this talk has been unreasonably popular internally at Microsoft. One day I rolled out of bed, fed my three-year-old, and got stains on a sweatshirt. I was like, “maybe I should take the sweatshirt off before I give this talk that I thought 20 people were going to attend.”
I had 1,200 of my colleagues attending one of the times I've given this talk. And one reason is they all kind of want to know what to do with respect to, for example, a question, “what should I tell my kid to major in? What should I think about for my own future?”
For your personal lives, too, I recommend thinking about a low-disruption outcome (the internet arriving, a standard general-purpose technology), a high-disruption outcome (maybe electricity, a big general-purpose technology), and then the very-high-disruption outcome, which is sort of the OpenAI-is-successful outcome, and planning for each of those rather than trying to make a big bet on one of them.
And with that, I'll close with this slide. One thing I do at Microsoft is help out with a lot of our responsible AI work, and this is a very simple slide that I use to help guide that work within a company. We are limited in doing stuff that's great for the world but outside of our self-interest.
But there's a ton of stuff that's great for the world and is in our self-interest. And getting this stuff right, getting generative AI to land well in the workforce, is definitely in that center, so that everyone feels like they are benefiting from generative AI rather than feeling it's something that's happening to them.
I feel very passionately about that, and I suspect many of you do as well. With that, here's the list of misconceptions again and a link where you can learn a little more about the science behind what I've been talking about: the annual New Future of Work report. And I will take questions when we have a chance to mingle.