AI Freed Up Your Team's Time. Nobody Built What Should Fill It.

Every transformation kickoff hits the same silent moment: what do we actually build? Here's mine.


Every transformation kickoff I've sat through has the same shape. A sharp diagnosis on screen, a compelling case for change, senior people nodding, the board behind it, the budget approved. The mood is serious but settled — everyone knows what we're solving for.

Then comes the question that lets the air out: what do we actually build?

The silence isn't because people lack ideas. It's because the ideas are all tactical. A new process to follow, a specialist to bring in, a platform to stand up. Each one sounds right. None of them answer the question underneath: what infrastructure would let this company make good decisions at scale, without depending on any specific person, tool, or platform to hold it together?

There's no shortage of thinking about what's wrong with transformation. There's a near-total absence of a process for what to build instead.

Here's mine: the 4Es. Explore, Experiment, Envision, Enable. Refined through failure, proven by what lasted after I left.

It works for building any capability that needs to last: AI adoption, brand coherence across thirty markets, post-merger integration, innovation infrastructure. The domain changes. The process doesn't.

Why Most Methodologies Fail

Most transformation methodologies assume you already know what to build. The methodology is the project plan.

That's fine for problems where the rules don't change as you work on them. Building capability that lasts is a different class of problem. The people inside the system learn, adapt, and respond to what you build. Infrastructure that enabled capability at 500 people might create bureaucracy at 5,000. The principles that guide a design-led company won't transfer to an engineering-led one.

The process is rigorous and transferable. The outputs (the specific principles and infrastructure) are context-specific. That's not vagueness. Importing someone else's answers is precisely why most transformation fails.

There's another reason. Three assumptions the old transformation proposition rested on are under pressure at once.

  • The first is that you can diagnose from what gets reported up. In any organisation at scale, the information reaching leadership has been filtered, translated, and shaped for consumption. What people say they do has drifted from what they actually do, and the drift widens with every layer between you and the work. Shared ground has to be seen, not reported.
  • The second is that intelligence, properly applied, produces sound judgement. Research on how smart people actually reason keeps finding the opposite: cognitive ability and judgement are decoupled. Clever people aren't reasoning their way to better answers. They're building better arguments for the answers they already had. Intelligence dresses up the instinct. It doesn't override it. Judgement comes from somewhere else: structured feedback under consequence, applied repeatedly, until pattern recognition stops being theoretical. That can't be hired in. Though that's rarely why expertise gets hired anyway. Boards prefer to hear bad news from a stranger, not from the people who've been delivering it for two years.
  • The third is that good people can overcome a mediocre system. What Moneyball did to baseball, Google's Project Aristotle did to this one. Across 180 teams, the strongest predictor of performance was how the team was structured and how it operated. Systems dominate talent at scale. A good system makes average people capable. A bad one wastes exceptional ones.

Methodologies built on those three assumptions (gather the facts, apply rigorous logic, deploy the best talent) can't keep up.

Why These Four

This is what the 4Es is built for.

Explore has to get close enough to actual work to see what's true, because what people report about how they work is no longer a safe starting point. Experiment tests under real consequence, which is how judgement gets built when it can no longer be hired in. Envision designs the system: principles, scaffolding, finite rules for infinite expression. Capability at scale is a property of systems, not of the brilliant people staffed into them. And Enable transfers what you've built, so the capability outlives the people who built it. Without that transfer, the other three revert the moment attention shifts.

💡
The four phases aren't steps in a project plan. They're what it takes to build capability when facts are contested, intelligence can't be trusted to produce judgement, and no amount of talent overcomes a weak system.

Explore: Discovering the Context

Exploration means understanding what principles actually guide decisions versus what's written on walls. Observing how people operate under pressure, not in workshops. Mapping where capability already exists but can't be used, and what blocks it. Noticing workarounds, because workarounds are where the infrastructure gaps live.

The principles you need to scale are usually already in the building. They're just locked in people's heads. Exploration makes them explicit.

It also surfaces something at risk of being lost: what the company already knows about its customers, its market, its craft. Knowledge that's tacit, distributed, and in danger of disappearing as AI handles the work that used to generate it. Exploration captures that knowledge before the accidental learning path goes away.

When we built a marketing experimentation capability, I spent five weeks watching how decisions actually got made. Not the process on paper, the real one. Where did intuition override evidence? Where was evidence ignored because the political cost of being wrong was too high? Two things surprised me. The appetite for experimentation was higher than leadership assumed. People were quietly running their own tests, but without shared language, the learning stayed local. And the biggest barrier wasn't resistance. It was that nobody had asked these people what they already knew about what worked.

For leaders, this phase demands intellectual humility. Genuinely not knowing. A turnaround I witnessed started this way. The new leader later said: "I wasn't competent for the role, so instead of telling people what to do, I asked questions." The principles that emerged still guide that company two decades later. They lasted because they were discovered, not decreed.

💡
If you're not surprised by what you find, you're not exploring.

Experiment: Trying What Might Work

Exploration produces hypotheses about what infrastructure might work in this context. Hypotheses need testing, not in theory but on real work with real stakes.

This phase tries approaches and keeps only the ones that produce the intended outcome. Not "does this tool work?" but "does this approach build capability that transfers?" That question changes what you measure. Effectiveness matters. But so does whether the people involved are becoming more capable or more dependent.

A distinction matters here that most companies miss. Experimentation is not piloting. Pilots are open-ended. They run until someone decides they've proved the concept, or until attention shifts. The learning is incidental. Real experimentation is designed to end. It has a thesis, a timeframe, and a clear definition of what success looks like. If the experiment can't tell you what the wider company should do differently, it was never an experiment. It was a demo.

In the experimentation build, we tested three infrastructure designs with pilot teams on live campaigns. Real budgets, real stakes. Two failed. The first was too heavy: people spent more time on the process than the thinking. The second was too threatening: it surfaced decisions senior people had been making on gut feel, and the transparency felt like exposure. Those failures taught us more than the success. The pilot that worked was the one where the team said "this is just how we should be working." A way of thinking embedded in what they already did.

Envision: Designing What Scales

With evidence from exploration and experimentation, you can design the infrastructure that enables capability at scale. Not before. Designing before you've explored and tested is how you get 200-page playbooks nobody opens.

This phase creates what I call "finite rules for infinite expression": infrastructure that bakes in principles while preserving flexibility. Think about grammar. Finite rules, infinite expression. Grammar doesn't constrain language; it enables it.

The design question is always: what's the minimum structure that enables the most distributed judgement? Governance that's simpler than the problem it governs doesn't produce control. It produces friction. Over-specification doesn't help either. Every additional rule is a bet that you can predict the future. Principles are the opposite bet: that you can't predict the future, so you build the capacity to respond to whatever arrives.

💡
Process tells you what to do. It breaks when conditions shift. Principle tells you how to think. It holds when they do.

This is where most capability work goes wrong. A three-level structure helps:

  • Purpose principles: why we exist. Change rarely. These are the stable ground everything else stands on. Also useful for specific functions, not only at company level.
  • Operating principles: how we work. Guide daily decisions. Most companies and functions skip this level entirely, which is why people escalate decisions they should be making themselves.
  • Craft principles: what good looks like in specific work. The layer almost no company has spelled out clearly enough to scale. It's where onboarding breaks down and quality becomes inconsistent. And it's the layer that determines whether AI helps or hollows out your team. When a marketer uses AI to generate a brief, what tells them the output is good enough? Not the tool. The principles they've absorbed about what a good brief looks like. Without craft principles, AI produces faster mediocrity.

In the experimentation build, we designed the lightest possible infrastructure. Principles for what makes a good experiment: a way of thinking. Templates that guided without constraining. A peer review mechanism where marketers challenged each other's test designs, which did something no training could: it built shared judgement through practice. Purpose principle: "we test because learning compounds and guessing doesn't." Operating principle: "every test has a hypothesis written before it runs." Craft principle: "a good experiment tests one variable; the control must be real, not assumed."

Would a smart person joining the team six months from now be able to use this infrastructure to make good decisions without asking the people who designed it?

Enable: Where Half-Life Gets Built

The final phase is transfer.

Training teaches people what to do. Transfer builds their ability to figure out what to do when nobody's told them. The distinction matters because most transformation fails right here: the consultants leave, the project team disbands, and everything slowly reverts.

What makes reversion harder is capability that's been genuinely absorbed, not enforcement. The people closest to the work don't just use the infrastructure. They own it, adapt it, and teach it to people who arrive after the builders have gone.

In that build, transfer took eighteen months. Early adopters became teachers, not because we asked them to, but because the methodology let them. The central team shrank deliberately. We measured one thing: how often teams consulted us. When we went two weeks without a question, that was progress. But that wasn't the moment I knew transfer was real.

The moment was a phone call. A marketer in a market our programme had never reached asked me about our test design principles. Not because anyone had told her to, but because a colleague in another team had taught her. That colleague had learned from someone who'd been in one of the original pilots. The knowledge was three handshakes from anyone on the original team. Nobody in the chain had been asked to spread it. The infrastructure had created the conditions, and the capability travelled on its own.

I've written before about the business results this build produced: 16 million DKK invested, 300 million DKK in incremental sales in year one. That ratio won't transfer to every context. It's shaped by the operating base it sat on top of and the category reach already in place. What does transfer is the pattern: a small, disciplined capability investment unlocking compounding value across a much larger operating base. In this case, the programme was killed in a restructure. The capability didn't notice.

This phase demands something most leaders resist: letting go. Making yourself unnecessary.

💡
Every principle you make explicit is one less thing that depends on your judgement. Every capability you transfer is one step toward the departure test: could this work if everyone who built it left tomorrow?

How long should you expect the whole thing to take? Longer than you'd like. It varies with the size and complexity of what you're building into. The signal is always the same: a team that keeps moving after the builders are gone.

When the 4Es Fails

The methodology can fail. Five ways:

  • Performative exploration: discovery motions that validate what leadership already decided.
  • Rigged experimentation: pilots designed to succeed rather than learn.
  • Over-engineered envisioning: writing process where principle was needed. Infrastructure so detailed nobody uses it.
  • Rushed enabling: launching before capability transfers.
  • And the subtlest: leadership that can't let go. Everything else works, but the builders can't stop being needed. The system works as long as they're there. Which means it doesn't work.

Each failure mode maps to a leadership capability: intellectual humility (Explore), comfort with ambiguity (Experiment), principle-based thinking (Envision), letting go (Enable). These aren't personality traits you either have or you don't. They're practices that develop through doing the work. The 4Es creates the conditions for leaders to develop the capabilities it requires, rather than waiting for the right leaders to arrive.

The Evidence Question

Norway's sovereign wealth fund offers a public proof point. Their April AI summit showed the 4Es playing out live at $1.8 trillion scale. They explored first: insourcing their data and building a single data foundation before touching AI. They experimented with small autonomous teams of two developers and one business person, no ceremonies. They envisioned infrastructure, not tools: a governance framework that translates principles into daily practice, and an Investment Simulator that surfaces portfolio managers' behavioural blind spots rather than telling them what to trade. And they enabled by training everyone in AI and through a volunteer Ambassador Network that worked with each team to find the specific pain point where AI could help, and to hold the work inside the governance principles already set. The CEO's mandate was non-negotiable. The ambassadors made it real. Over half the organisation now writes its own code. If the people who built this left, would it keep working? The structure says yes. Time will test it.

I don't have a controlled study across dozens of companies. What I do have is a 25-year pattern inside one large company across six radically different domains: physical product, consumer experience, digital transformation, sustainability, diversity and inclusion, and marketing experimentation. I've detailed the results and the structural reasons they lasted in earlier pieces. Apply the departure test to your own past investments. Look at transformation budgets from the past five years. How many built capability that persists today?

If the answer is "not many," something different is needed. The 4Es is my take.

Continuous Calibration

The 4Es isn't linear. Exploration shapes experimentation. Experimentation sends you back to explore. Enabling exposes the next thing to envision.

Beneath all four phases runs continuous calibration: the ongoing micro-adjustments that keep infrastructure alive. There's never a moment when everything works.

Regenerative transformation isn't "build then done." It's "build capability for continuous calibration."

Build what doesn't need you.

Next month: what this methodology demands of leaders — and why the qualities that make transformation succeed are often produced by circumstances most leaders don't experience.