
AI: The Untold Story

We sat down with LinkedIn co-founder Reid Hoffman and Financial Times editorial board chair Gillian Tett to go beyond the headlines and get a deeper understanding of the economic and workforce impact of AI. Their message for HR leaders: act now, because the change is already here.

Key Takeaways

1. AI is here to amplify human potential. Instead of focusing on AI as a way to cut costs or reduce headcount, Reid and Gillian see AI as augmentative: a new member of the team that changes workflows and unlocks capacity for human creativity.

2. "AI is the best educational tool we have created in human history," Reid says. There are real challenges around AI's impact on how young people learn and develop the skills they need to enter the workforce. But Reid and Gillian explore how AI can create new models of education and training that personalize instruction in a way that was previously only possible at the most prestigious universities.

3. With AI, everyone becomes a manager. Reid sees a near future where every employee has a team of AI assistants that they manage to get work done. "There won't be such a thing as individual contributors anymore." In this world, the same EQ skills that make people great managers and coworkers become the skills that make them AI super-users.

4. "If you're not using AI, you're going to be under-tooled," says Reid. From leveraging AI to run better meetings to reimagining what's possible to achieve with an AI assistant at every employee's side, Reid and Gillian outline concrete starting points for driving change at scale. Because, as Reid says, in six months, if AI isn't embedded in your workflows, "It'll be like saying, 'I'm a carpenter, but I use rocks, not hammers.'"


Video Transcript

00:00   How the Media Covers AI

Parker Mitchell: I thought maybe we could kick it off actually with you, Gillian, and the conversation, the public narrative around AI. How is the media covering the idea of how AI is gonna impact work, and the economic and social impacts of that on our livelihoods? And I'm wondering: what are the things you think the media might be under-covering, or what are the stories that you think should get more coverage?

Gillian Tett: Well, I think the media is pretty negative on AI at the moment. A cynic might say that's because they know that their own jobs are threatened. And one of the really striking things about the AI revolution is that it's threatening white-collar jobs, not just blue-collar jobs. And so it was very easy for pundits working at financial newspapers to say, actually, productivity increases are good, which is code for let's have fewer blue-collar workers if we have automation. And now that white-collar jobs are threatened as well, be they lawyers or traders or journalists, suddenly the narrative has changed.

So I think there's a real concern about AI. From time to time, there is now a recognition about the extraordinary things that AI can do in relation to life sciences and other research capabilities. But for the most part, it's pretty negative. 

01:17  Now is the Time to Learn How to Work With AI

Reid Hoffman: Well, Gillian's exactly right about the kind of general media response. And I think the kind of thing that isn't covered is that, one, look, whatever you're wishing, AI is here. If you haven't already personally found significant use cases that help you in your own work and in your own life, that means you're not trying hard enough. And there's a general reflex to wait for when it stabilizes. Like, well, when they release the new iPhone, I'll check this out. And it's like, no. It's actually improving on an order of magnitude of months. And so you actually have to be, you know, kinda going with it. And I think that it is scary. It is change. It is changes to white-collar work. It is the case that businesses, when they encounter something new, start with how do we cut costs.

So, you know, could we take our marketing department from 10 people to two people? Can we do this with fewer journalists? Hence the point that Gillian was gesturing at. But the actual thing is that it'll change workflows and change everything else, and that, as individuals, you can already begin to see what that is. And so we're telling, one, the positive story: namely, how do you get into it? What should you be doing? What should you be experimenting with? And two, what are the ways that we are experimenting with expanding our capabilities as individuals and as offices? Because those capabilities are really there.

For example, one of the things that I regularly use deep research for is as a research assistant on a broad variety of topics. I can now think much more broadly and synthetically about a number of different areas, relating them to what I'm doing, and I have an immediate research assistant. Now you say, should I get rid of the research assistant I have? No. Actually, frequently what I'll do is generate something and send it off to the research assistant saying, hey, could you track down this, this, and this, and maybe use deep research to follow up on these things, as an iteration. And I think that kind of positive story is really important.

And the other thing that Gillian was gesturing at is that the negative story we've kind of wrapped ourselves in in the West is ultimately damaging to us. It isn't to say that we don't pay attention to the risks. It isn't to say don't pay attention to the issues. But the question is, AI is coming. It's like we're in a river, and it's going down. You can say, I don't like this river, I'm gonna throw my oar up, and I'm gonna yell. Like, okay. That's a really good way of navigating a river. Right?

So how do I start steering? What do I learn? What's going on in terms of the currents? That's the kind of thing that we need to be doing as individuals, as industries, and as societies, and, of course, for our audience, as companies.

04:34   Augmented Intelligence

Gillian: I strongly agree. And in fact, I'm sitting in King's College, Cambridge, which is where Alan Turing was based. That's actually his portrait up on the wall behind me, and literally about a hundred yards away from me is where he used to live and did much of his work. It's called artificial intelligence because almost exactly eighty years ago, Alan Turing devised the Turing test and basically spawned the term artificial intelligence.

But I often wonder how different it would feel if we called it instead augmented intelligence or accelerated intelligence, because the way I see it, it's not so much about replacement all the time, although sometimes it is, let's be honest. It's more about being an additional member of a team. And I say that because a few years ago, I gave speeches saying that there was one thing that AI could never do, which was to tell a really good joke, and therefore comedians had job security. And it turned out that I was totally wrong. The reason I thought AI would never be able to tell a joke and challenge comedians was because the pre-transformer models of AI, which were basically path-dependent and based on logic, could only produce very basic, crude jokes: knock-knock jokes, wordplay, or Christmas cracker jokes, and they weren't funny. Post-transformers, where essentially you're dealing with probabilistic observation, they can produce jokes that are funny about half the time.

And the dirty secret of humor is that comedy writers for late-night TV shows are only funny half the time. And the reason they know that is because those jokes are written not by individuals but by teams who chuck jokes into the mix and bat them around, and they finally produce a late-night television comedy. And adding an AI agent into that mix doesn't necessarily replace the humans. It simply adds to the jokes that are swirling around and gives more checks and balances. And humor is fascinating because humor in many ways is the very definition of cultural anthropology, which is what I studied for my own PhD and have done academic work in, because humor can't be predicted by an algorithm: it depends on contradictions in culture, on ambiguity and silences that we don't like to talk about, and very tribal behavior. So the fact that AI can now master even that, but can do that by being part of a team, is really important as a parable for what we might see emerge in many professions.

07:12  AI Assistants for All

Parker: Yeah. Let me double-click on that, Reid, because you've talked about individuals having assistants, teams having assistants. I'd love to hear you expand on that vision for what work might look like.

Reid: So let me start with just a couple of near certainties to predict in the near future. And near future is like a small number of years or a medium number of months. Which is, one, there won't be such a thing as an individual contributor anymore, because essentially every person will have a small to large team of agents facilitating what they're doing, and they will be managing that process with those agents. That's one thing. So it's almost like the kind of managerial skills you might exhibit today when you're using a deep research agent or other bots in order to do stuff. That's gonna get deepened.

And another one, and this one actually is a prediction I made in the MIT Technology Review a number of years ago, is that in every single meeting that we do, we will actually have an AI agent, not just for transcription and notes, all of which is happening now, but where that AI agent would be going, oh, you know, Gillian and Reid are talking about this. Do you realize you should also talk about Alan Turing in the following question, or you should refer to the following thing about the Turing papers in the King's library?

And so that kind of participation, for information, for follow-ups, for questions, will then become like, wait, wait. Can we have this meeting? We don't have the AI agent turned on yet. We're gonna be so much less effective if the AI agent isn't here in the things we're doing. And, you know, part of this amazing transformation that we're about to go into, this is, like, only a few years into the future. And, in fact, when you're doing white-collar work, if you're not using AI tools, you'll be under-tooled. It'll be kinda like saying, I'm a carpenter, but I use rocks, not hammers. It's gonna be a standard part of what is professional competence, and then that will spread through the entire team.

So I think this kind of massive capability change is coming so fast that you need to get engaged. You can't, like, send three people off to go study it and come back in six to twelve months and tell us about it. I think that's too slow. So I think what you want to ask is: what are the ways you can experiment quickly? So, simple things that you can do, that I've done with organizations that I'm on the board of and others, are to say, well, make sure that there's kind of like a weekly, biweekly, or fortnightly review where everyone says, here's what I've tried, here's what I've learned, here's the things that I'm doing. Right? And then you can also similarly go, and here's some of the things that we should be doing as a group.

For example, when I'm working with my groups, one of the things I do is I take a transcript of the meeting, and I feed it into an AI with some relatively standard prompts that say: is there anything we missed? Was there an important question? Was there an important source of information? Was there a follow-up? And this set of different things, because we just had the meeting, we did the transcript, we put the transcript in, and it gives us a very quick response. It can even be before the meeting's over, that's how fast this can be, where you go, "Oh, right. Yeah. Yeah. We should do that too." And so, do anything to start experimenting and seeing what fits our company culture, our market position, the way that we operate in our groups, and not just, hey, let's go assign Sue or Fred to generate a report on this that we'll go look at in x months.

11:12  AI Adoption Across Industries

Parker: Gillian, how are you seeing those adoption steps, sort of steps forward, steps back, steps sideways? How are those playing out, either in the conversations you have with leaders or even potentially within journalism and the Financial Times itself?

Gillian: Well, I wear several different hats, in that I am overseeing this college, King's College, Cambridge, where academia is potentially being very challenged by AI in many ways. The good news is that the life scientists and the other scientists that I deal with in the college are being given extraordinary wings all of a sudden to do the kind of research at speed that most have never dreamt possible. So they are totally positive about AI.

Many people in the humanities are pretty negative about AI because they can see that it's basically either going to undercut their role as teachers or, in their view, make a whole generation of students pretty stupid because they're cheating with AI and not using their brains. As it happens, Cambridge and Oxford are probably the most ChatGPT-resistant types of education in the world because they rely so heavily on small face-to-face interactions, what we call tutorials and supervisions, where students have to write essays and then talk about them for an hour or two. And that is AI-resistant in many ways, or rather AI-enabled, because you can use AI to research your paper, and then you're forced to discuss it as a human being, using what you've seen from AI. So I actually expect that going forward, we may well see more spoken exams and more teaching patterns of the form that we have in Oxford and Cambridge.

In terms of journalism, I was actually meeting with the CEOs of most of the big British media companies yesterday and moderating a discussion with them all about this very topic. And the message is that they feel very threatened by the fact that AI companies are scraping their data with no monetary reward and often no attribution. And they're basically demanding some form of compensation for journalistic content being used to train models, which I think is entirely fair. What form it will take is unclear, but there needs to be some way to get the media ecosystem compensated. Otherwise, there will be no media ecosystem and no content in the future to scrape. But when it comes to actually providing news, they're taking very different attitudes. The Financial Times is not using AI to write stories in any formal sense, maybe to do some research, but not to write stories. But it is using AI to aggregate news headlines, for example. And I suspect you'll see a lot more of that going forward.

And as far as CEOs are concerned, as Reid says, many of them have barely started thinking about it yet, but they need to quite urgently, because if they don't, they will get overtaken. And apart from anything else, they won't actually know how to familiarize themselves with both the benefits and the risks around it.

14:15  Personal Intelligence & AI Companions

Parker: Reid, I wanna pick up on something Gillian mentioned, which was the tutorial model at Cambridge and Oxford. It's famous. It's so successful. One of the companies that you cofounded a few years ago with Mustafa Suleyman was Pi, personal intelligence. Can you expand on that vision of having this idea of personal intelligence alongside you?

Reid: So, another kind of startling prediction, and this will be a little further out than the earlier predictions I made, is that within a small number of years, when we have a kid, we will actually have them have an agent that will go with them through their entire life, and learn and help and so on with them. By the way, we will adopt that as adults sooner, because we won't have all the complexities around what the set of things around the child should be. And so part of that is having, essentially, a companion. And part of our idea when we built Pi, you know, pun intended, it's a personal intelligence, but, you know, apple pie, etc., is that it's training for this. And in my earlier book, Impromptu, I called it amplification intelligence, although augmentation intelligence is good too.

When we're gonna be amplified, you don't just need IQ, you also need EQ. You need conversational capability. And part of that is to actually be a very good kind of companion in the things you're doing. And so Pi, I think, set the standard for all of the other GPT-4-class models on how you put in EQ, how you have it be a conversationalist and ask questions, and how it helps with a variety of those kinds of life-navigation problems. And it applies to work too, because social intelligence is part of the meeting, part of collaborating with teams. Obviously, people don't necessarily think of that in its kind of top role. But that's what Inflection and Pi are about. Now we've seen other agents, like Anthropic with Claude and others, beginning to develop along a similar line.

16:33  Navigating Concerns About Job Displacement

Parker: How should company leaders and CEOs help their workforces navigate some of the natural threats that people will feel as they see AI doing parts of their job, not just the augmentation parts, which I think people will be excited about, but bits that they maybe spent ten, twenty years becoming an expert on? I think that's gonna be the crux of change management in companies.

Gillian: The really interesting thing for companies right now, I think, is actually, in many ways, the entry-level jobs, because what AI is replacing above all else are a number of the boring entry-level jobs that graduate trainees in particular would do for a couple of years to have, effectively, an apprenticeship in a white-collar way into the wider world of work. And by that, I mean early-career engineers, early-career lawyers, paralegals, or, in the case of journalists, your classic grad-trainee journalist. And when I was running bits of the FT, I used to make all the new entrants do really dumb, boring stuff like write the markets column, which frankly could have been done by automation twenty years ago. But we still had people doing that, often because it was a really good training ground for learning how to handle data and information.

So one of the questions is gonna be, how are we gonna train the next generation through apprenticeships and entry-level jobs? The flip side of that, though, is that if we start calling it augmented intelligence and start trying to train people to use it to make their jobs more effective, we may actually start to see people not only using it to be more productive, but creating whole new categories of work that we haven't even imagined yet, because that's been the story of calculators and computers.

Reid: Actually, for young people right now, part of what I advise them to do is become as expert as possible in AI and come to organizations saying, I can be part of your AI transformation. Like, I will front-end this, I will use this too. I'm an AI native, I'm part of generation AI. I actually think that's part of how the transformation is gonna happen. And by the way, when you get to, well, how are we doing apprenticeship and so on: AI is, by just many, many miles, the best educational tool we have created in human history. Right?

So it almost gets back to what Gillian was talking about in terms of the Oxford-Cambridge model. We can now have essentially a kind of quasi version, not the same, and in some ways better, than an Oxford or Cambridge tutorial. We can have one that is interacting one-on-one with every individual and helping them get better at the way they're thinking and what they're doing. And so you say, well, how do we get people up the curve? Actually, using AI, and learning from AI, and then using AI to do the work is part of what I think is gonna happen. And then, precisely as Gillian was gesturing, we'll say, hey, as opposed to having twenty lawyers, we only need four. But then we're gonna figure out other new things that we need to be doing, or can be doing, that are really good for how we do business: risk mitigation, analysis, contracts, etc. And then the work will expand in different ways, just as it has in every adoption of technology.

While there are concerns and things to navigate, the human amplification is just simply amazing. We have line of sight to a medical assistant on every smartphone, running at under, you know, five pounds, five dollars per hour, that is there 24/7 for everyone who has access to a smartphone. We have a legal assistant, a tutor, etc. All of these things are part of that kind of amplification. And how do we get there? I'll end with one of the ways that I think white-collar work will be changing, which is: we already have coding copilots that people with engineering mindsets are using. I think every white-collar job will have a coding copilot assistant, so that part of how you're doing journalism, teaching, evaluation, analysis, or accounting will be actually, in fact, having an AI system that's doing coding with you in order to accomplish those missions.

Gillian: Yeah. The key question we face is that we know that any innovation can either unleash our demons or the angels of our better nature. That applies to electricity. It applies to guns. It applies to nuclear power. It applies to anything that we've created. And if we look at social media, the reality is it unleashed our demons for the most part. They overwhelmed the angels of our better nature. I do think that AI, agentic AI, does have ways to potentially unleash our angels, and the question really is how. And I would argue, simply, to be totally biased, that mixing artificial intelligence or accelerated intelligence with anthropology intelligence, i.e., a sense of our own humanity, the other AI, is one way to go.

Parker: I think that's a terrific ending, a terrific inspiration. The mission of our time is to ensure that we steer AI's adoption by humanity to unleash our better angels. What a terrific conversation. Reid, Gillian, thank you both so much for making the time to join us. We really appreciate it.