
The Frontier Firm: AI, Learning & Workforce Design | Microsoft, Databricks

In this session from Valence's 2026 AI & The Workforce Summit, senior HR and workforce leaders from Microsoft and Databricks share what they are learning from operating at the frontier of AI adoption — as both builders of AI tools and as their own most demanding users. The conversation covers the Frontier Firm framework, what it means to be an "agent boss," how consumer-grade AI expectations are reshaping enterprise experience design, where the time savings from AI actually go, how organizations can preserve and deepen learning in an automated world, how AI is being used to scale culture at speed, and how to build workforce roadmaps when the pace of change outstrips traditional planning horizons.


Video Transcript

Key Takeaways

  • The Frontier Firm is AI-powered but human-led: Microsoft's research framework positions AI as a means to achieve outcomes across customer engagement, product innovation, process design, and employee experience — with human judgment at the center of every consequential decision. The question is not whether to use AI, but how to apply it to the outcomes that matter most.
  • Being a good agent boss requires clarity about goals, outcomes, and success criteria — but not the same emotional intelligence as managing people: Managing AI agents draws on some of the same skills as managing people — goal-setting, structuring work, defining success — but does not yet require the social and emotional intelligence that makes humans effective managers of other humans. This distinction matters for how organizations design leadership development.
  • Employees expect consumer-grade experiences at work — and AI makes that possible: The design principles that make Shopify, AirPods, and ChatGPT delightful — simple, fast, frictionless — are the same principles that should govern enterprise AI deployments. Amy's guiding rule for her team: if you wouldn't do it in your personal life, don't create it at work.
  • Time saved by AI goes back to the organization's dominant value, not to employees: At Databricks, a high-growth results-focused company, time savings from AI go directly into higher-impact work — not back to individuals as free time. Where time savings go is organization-dependent and should be discussed explicitly, not assumed.
  • AI is automating rote work — not the work that actually created learning: The tasks AI handles most effectively are codifiable, low-stakes, and routine — which were never the primary source of professional growth. The learning that matters most comes from mentorship, judgment under uncertainty, edge-case recognition, and reviewing AI's work critically. Organizations must actively design for these learning pathways, especially at entry level.
  • Governance, clarity, trust, and people-first design are the four enduring constants: As AI changes everything else, Neha identifies four elements that must remain persistent: responsible AI governance (privacy, security, accountability), leadership clarity about direction even amid uncertainty, trust between employees and leaders, and proactive design of new roles and career paths rather than waiting for AI to create them automatically.

Full Transcript

The Frontier Firm: An AI-Powered, Human-Led Framework

[00:00:00]

Prasad: I'm excited for this conversation because your organizations are at the forefront. You folks are at the frontier of building a lot of these AI tools that are then going out into workforces and to society. In some senses, you're also customer zero for all of this work — it has to work in your organizations before it works anywhere else. Neha, you published a fabulous report called the Frontier Firm. For this audience, could you share your key messages? Maybe a couple that are really pertinent to HR leaders?

[00:00:49]

Neha: A lot of it echoes what we've been talking about today — this idea of being AI-powered but human-led. The notion of the human, and the role of human judgment, is a very important one. When we're thinking about the Frontier Firm, we're really thinking about how do we apply AI through the outcomes we're trying to achieve and the ways we're trying to achieve them. That includes customer engagement — how do we improve that? That includes the products we're building, through innovation, through processes, and for our employees. In all of those pieces, we're thinking about how can we apply AI to move things forward.

▶ What Is the Frontier Firm? Microsoft's Framework for AI-Powered, Human-Led Organizations

The Frontier Firm is a Microsoft research framework for how leading organizations apply AI across the full scope of their operations — customer engagement, product innovation, process design, and employee experience — while keeping human judgment at the center. The concept, developed by Microsoft's workforce research team, positions AI not as a replacement for human decision-making but as a means to achieve outcomes more effectively. Organizations that successfully become Frontier Firms use AI to move faster, not to remove humans from consequential decisions.

What Is an Agent Boss — and How Do You Become One?

[00:01:32]

Prasad: One of the things you mentioned was this notion of agent bosses. What does that look like, and how do people become good agent bosses?

[00:01:49]

Neha: We've been talking about this a lot on my team recently. In some respects, it's very similar to how we think about managing people. In some respects, it's very much not. Where it is similar: it's about being very clear and identifying what your goals are, where you're trying to get to, how you want to structure work, what outcomes you're looking for, and how you'll define success. Where it diverges from managing people: it doesn't — at least not yet — involve the social and emotional intelligence that really drives what makes us good managers of humans.

▶ What Is an Agent Boss? How Organizations Are Developing AI Management Skills

An agent boss is a leader or employee who effectively directs and manages AI agents to accomplish work — a skill set that is rapidly becoming as important as traditional people management. According to Microsoft's workforce research, effective agent bosses share competencies with effective people managers: clear goal-setting, structured work design, and well-defined success criteria. However, agent bossing does not yet require the social and emotional intelligence that defines great human leadership. Organizations building agent boss capability need training programs that develop both sets of skills distinctly.

Bringing Consumer-Grade AI Experiences Into the Enterprise

[00:02:39]

Prasad: Amy, employees in their personal lives get used to consumer-grade tools — they don't want high-friction systems at work. How do you respond to employees who say they want the same seamless interaction they get with Claude, Gemini, or ChatGPT?

[00:03:18]

Amy: This is probably what I'm most passionate about. In our personal lives, we've gotten used to a one-click experience — Shopify if you want to buy something, AirPods that work within 30 seconds. And then we go to work and expect people to put in a ticket and wait 24 hours for a response. Underneath all of those consumer experiences, the design is the same: simple, delightful, fast. Those are the same design principles we want at work.

Nobody wants to work the old ways anymore. And we shouldn't think of AI as 'take old, outdated ways of working and automate them.' We want to rethink the way people experience this entirely. The way I guide my team: if it isn't something you would do in your personal life, please don't create it at work.

If a new hire is coming in and they'd naturally go to Claude or ChatGPT to ask a question, we should not be asking them to figure out who in the organization to ask or which ticket to file. We need a one-stop shop for them to get the information they need. That needs to happen across every way we work — to bring those two worlds together for people.

▶ How Enterprise AI Experience Design Should Mirror Consumer Apps

Employees who experience seamless, one-click consumer AI tools — ChatGPT, Shopify, AirPods — arrive at work expecting the same. Amy, Chief People Officer at Databricks, frames this as a design imperative: the same principles that make consumer AI delightful (simple, fast, frictionless) should govern enterprise AI deployments. Her rule for her team is direct: if you wouldn't use it in your personal life, don't build it for work. For new hires especially, the gap between consumer AI experience and enterprise systems is a friction point that erodes adoption, trust, and productivity from day one.

[00:04:39]

Prasad: I heard you tested replacing long emails with AI-generated podcasts. What were you trying to do, and what did you learn about what works and scales?

[00:05:00]

Amy: Think about how people consume information now. The first thing I do when I walk my dog is put my AirPods in and listen to a podcast. What I don't want to do is read another 20-paragraph email — I genuinely won't do it.

We decided to take the things we used to put in long newsletters or emails and turn them into a less than five-minute, very conversational podcast that people could listen to while walking their dog or dropping their kids off at school. People absolutely loved it.

What was fun about it: it actually came from a Shark Tank-style competition on my team. The idea came from them. The framework of 'bring the two worlds together' is at the foundation. When we've had to drive substantial, complex change, we try to simplify, make it conversational, make it accessible — so people can consume it in a way and at a time that works for them.

Where Does the Time Saved by AI Actually Go?

[00:06:13]

Prasad: Here's a question for both of you. In my class at Stanford, I talk about two things that afflict all knowledge workers: time poverty and thought poverty. Time poverty because we're always pressed. Thought poverty because we're doing shallow work rather than deep work. Ethan said this morning, 'I can't believe Claude is taking minutes to do work that took me hours.' When AI saves time, where does the surplus go? Is it reducing time poverty? Or is it just adding 15 more things to the list?

[00:07:29]

Amy: It's really going to be organization-dependent. I think about organizations on a continuum from very people-focused to very results-focused. At Databricks, we're in a super high-growth environment — on the results-focused end. In my mind, there's no question: people aren't getting the time back. It's going to something else that's more impactful, more important to the organization. So you have to think about the framework you're working in and have explicit discussions about where those trade-offs are going to be.

[00:08:10]

Neha: We're seeing it go back into higher-quality work. For example, as engineers are able to do less manual coding, they're starting to think about, 'What's actually the interesting part of my work?' Instead of spending time on debugging — which engineers often cite as the least enjoyable part of their work — they're now thinking about orchestration: how can I think end-to-end about processes of work? Suddenly the time that's being saved isn't going into more rote, routine work. It's going into work that's actually interesting and meaningful, and where they can see the value playing out in the company.

Prasad: Thoughtful reinvestment into higher-quality work that is meaningful to both the individual and the organization.

Neha: That's right.

▶ Where Does AI-Saved Time Actually Go in Large Organizations?

When AI saves time for knowledge workers, where that time goes depends entirely on the organization's culture and priorities. At Databricks, a high-growth, results-focused company, time savings from AI flow directly into higher-impact work — not back to individuals as discretionary time. At Microsoft, engineers freed from debugging are investing time in orchestration and end-to-end process thinking — work that is more meaningful and more organizationally valuable. Leaders should make explicit decisions about where AI-recovered time goes rather than allowing it to simply fill with additional low-value tasks.

Preserving Learning and Developing Judgment in an AI-Assisted World

[00:09:13]

Prasad: If routine work gets automated, where does the learning come from? I've made the argument that automating work with few degrees of freedom might impede learning. How do you structure learning in an AI-aided world?

[00:09:54]

Amy: We're in a learning shift. I hear a lot of fear from people — 'We're going to turn stupid' — and we're not. Those of us who lived through the shift from encyclopedias to the internet didn't become dumb because information became more accessible. This is analogous. We're going to shift our skills. Things that used to take a lot of time will become easy, but the capabilities that matter most — choosing between options, using discernment, using judgment — will become more critical learning pathways. As leaders, we have to build that into the design of what we're asking of people now, and the skills we want to develop in them. We need to let go of the idea that we'll all lose our capacity to learn because AI has come to the forefront.

[00:10:51]

Neha: The work that AI is doing — at least now — is not the work that was really creating learning. It's the rote work, the routine work, the codifiable work, the low-stakes work. That wasn't the stuff we were really learning and stretching from. Now, if you look at the kind of work people are doing, that's where we're starting to see people stretch.

So it's about HR and business leaders working together to strategically craft ways for that learning to occur — to build that judgment. Research from medicine and law firms shows that human judgment develops through identifying edge cases. How can we build understanding of those edge cases? Through mentorship. Through orchestration. Through code reviews with engineers. Through reviewing the work that AI is doing to make sure it's actually doing what it's supposed to. Using that review process to build the skill of 'what does good look like?' and 'where are those edge cases we don't see all the time?' That has to be a shift in how we redesign skilling — especially at the entry level.

▶ How Organizations Can Preserve Deep Learning as AI Automates Routine Work

AI is primarily automating codifiable, low-stakes, routine work — which was never the primary source of professional growth or learning. The work that actually builds judgment — edge-case recognition, mentorship, critical review of AI outputs, end-to-end process thinking — remains human. According to Microsoft's workforce research, organizations need to actively redesign learning pathways around these higher-order skills, particularly for entry-level employees whose traditional on-ramps to judgment-building (doing rote tasks and gradually taking on complexity) are being compressed by AI automation.

Prasad: Anthropic just published a report last week on skill atrophy in coders who became cognitively dependent on Claude. I appreciate where both of you are headed in terms of engineering those learning environments intentionally.

Using AI to Scale Culture: The Databricks Equity Bot Story

[00:13:37]

Prasad: How are you deploying AI to scale your culture? Microsoft is at the forefront of AI. Databricks has a strong AI foundation and has grown really fast. Are you deploying AI in any way to scale culture — and if so, how?

[00:13:37]

Amy: Culture is a set of behavioral norms. At the intersection of AI and culture, it's really about making sure you're bringing whatever your framework is to life for employees. Technology can help support that — but we can't abdicate culture to artificial intelligence. It's not going to work that way.

We've mainly been using AI in ways that help support the kind of experience we want to provide for employees — which is a part of how they experience their work every day, and ultimately that leads to culture. And it's often been at the intersection of where we need to drive the biggest impact.

Here's an example. About a year ago, employees wanted liquidity. We decided to do a tender offer and buy back RSUs. In typical Databricks fashion, we decided to do in six weeks what other companies do in six months. We had two equity people and 10,000 people participating. The only way through that was a scalable solution — so we built an equity bot that answered 4,000 questions from employees.

This was a very emotional time for employees, something really important to them. If each person had had to put in a ticket and wait for their answer to a very personal question, we would have taken something positive and turned it into something genuinely frustrating. What employees came away with was a deeper sense of trust — both because of the experience we created and because the tech worked.

Our CEO — a very skeptical founder — decided to test the bot himself. He typed in his own question about his own circumstance, and he got the right answer. He immediately sent a company-wide email: 'Use the equity bot, it's awesome.' Getting that buy-in from the top helped people feel like they were really being taken care of, even though we were using tech at the forefront of getting them through a complicated process.

▶ How Databricks Used an AI Bot to Handle 4,000 Sensitive Employee Questions During a Tender Offer

When Databricks ran a company-wide tender offer with 10,000 participants and only two equity team members, they built an AI equity bot that answered 4,000 employee questions about a highly personal and emotionally significant financial process. The bot handled questions that would have otherwise required tickets and days-long waits. The outcome: employees reported a deeper sense of trust, and the company's CEO — a skeptical founder — publicly endorsed the tool after testing it himself. This case illustrates how AI can scale culturally sensitive employee experience without sacrificing the sense of individual care.

[00:16:47]

Neha: Culture is hugely important in creating the context for AI to be successful. We've been building a culture of learning for about 10 years — building that culture of experimentation and, critically, building incentives for managed risk-taking. If everyone is afraid of what happens when they experiment, there's not going to be much of it. How do we actually move that forward?

What we've found is that creating a strong sense of clarity — where exactly are we going with AI, what are we trying to achieve — and putting that together with the right culture and incentives has helped us move forward. When you take a team with a strong base of AI use, reward experimentation, and give them a shared understanding of where they're going, you get faster feature development than we've ever had before.

Building Workforce Roadmaps When the Pace of Change Outstrips Planning

[00:18:03]

Prasad: Since changes are happening so fast, how do you build roadmaps for what this means for your workforce and organizationally?

[00:18:32]

Amy: For us, it's really about agility now. We've literally stopped headcount planning outside of go-to-market functions. It's a very interesting world to work in when the expectation is to hire 4,000 or 5,000 people but there's no plan. The reason is that things are changing so quickly — for every headcount we're going to hire, we want that thought process to have happened where we ask: could this be automated, or do we need a person to do this work?

The art of this now lies at the intersection of understanding what the business needs, what the technology can do, and being as thoughtful as possible about where that's heading. People have to get comfortable with less certainty during this time, and learn how to marry headcount needs with the work to be produced. It's a little art and a little science these days.

[00:19:41]

Neha: What is going to have to be persistent is governance — having a good understanding of responsible AI, privacy, and security. That's going to be enduring.

There's a sense today that 'if I say anything, it's going to put me in jail, so I need to say nothing.' But that vacuum creates more problems than it solves. It also limits the trust that needs to be in place between employees and leaders. How do we continue to build that trust? That's going to matter as we move forward.

And probably the most important piece is putting people first. Employees need to know that we are actively designing roles to be interesting, challenging, and exciting. Research suggests that roughly 40% to 60% of today's roles did not exist before 1940 — and the only way to maintain employment levels is to think about what those new roles will look like and how we're going to design them. AI is not just going to make them appear; we're going to have to actively seek them out, in partnership between HR and business leaders. Career paths are changing completely. How do we make sure people feel they have a place to grow into?

Even while there is continual change, making sure that governance, clarity, trust, and people-first design are all in place — and that everyone knows they're advancing — is going to be really important.

▶ Four Enduring Constants for HR Leaders as AI Reshapes the Workforce

As AI changes every other aspect of organizational life, Microsoft's workforce research identifies four elements that must remain persistent. First, responsible AI governance — protecting privacy, security, and accountability. Second, leadership clarity about organizational direction, even when the destination itself keeps shifting. Third, active trust-building between employees and leaders, recognizing that communication vacuums create more anxiety than honest uncertainty. Fourth, proactive people-first design: actively creating new roles and career paths rather than waiting for AI to generate them automatically. These four elements provide organizational stability during continuous technological change.

Prasad: Wonderful. Thank you both, Amy and Neha.