Work in progress

Deep gratitude to Richard Rumelt and his work, with a strong recommendation to read his book Good Strategy Bad Strategy.

CHAPTER ONE

The Pilot Trap

The boardroom on the forty-second floor had the kind of view that was supposed to make you feel powerful. On a clear day, you could see three states. But on this particular Tuesday morning in March, the CEO of NovaCom Industries wasn’t looking at the view. He was looking at a spreadsheet.

Forty-seven. That was the number of AI initiatives currently underway across NovaCom’s six business units. Forty-seven pilots, proofs of concept, and “strategic experiments.” The company had been at this for three years. They’d hired a Chief AI Officer, built a center of excellence, contracted with four different vendors, and spent north of $80 million. The board had been told, repeatedly, that NovaCom was “leading in AI adoption.”

And yet here was the question the CEO could not answer, the one the board’s lead independent director had asked him over dinner the previous evening, quietly enough that no one else could hear:

Which of these forty-seven initiatives has actually changed our competitive position?

He stared at the spreadsheet. He could find plenty of metrics. One pilot had reduced invoice processing time by 30 percent. Another had improved demand forecasting accuracy by a few points. A chatbot in customer service was deflecting 20 percent of routine inquiries. Each of these was real. Each had a positive ROI if you squinted at the numbers the right way.

But changed their competitive position? Made NovaCom harder to beat, more valuable to customers, more strategically differentiated? He couldn’t point to a single one.

NovaCom is a composite, but barely. The details are drawn from dozens of companies I’ve worked with or studied over the past five years. Change the industry, swap the numbers, rename the executives, and you’ll find this same story playing out in boardrooms on every continent. It is, in fact, the default story of AI transformation in the 2020s.

And it is a story of strategic failure disguised as operational progress.

•  •  •

The Busy-ness of AI

There’s an epidemic in business right now, and it doesn’t look like a crisis. It looks like progress. Companies everywhere are busy with AI. They’re running pilots. They’re building dashboards. They’re sending their executives to AI bootcamps and hiring machine learning engineers and issuing press releases about their “AI-first” strategies. Activity is everywhere.

Results are not.

Study after study confirms the gap. The vast majority of AI pilots never make it to production. Of those that do, most deliver incremental efficiency gains rather than strategic advantage. And the executives leading these efforts often can’t articulate how their AI investments connect to the broader competitive strategy of the business. They’re doing AI. They just don’t know why.

This isn’t a technology problem. The technology works. Large language models can draft contracts, analyze medical images, optimize supply chains, and predict customer behavior with startling accuracy. The tools have never been better or more accessible. Any company with a modest budget and a decent internet connection can deploy AI capabilities that would have been science fiction a decade ago.

So if the technology isn’t the bottleneck, what is?

The answer, I believe, is strategy — or rather, the absence of it. Most companies don’t have an AI strategy. They have an AI shopping list.

•  •  •

Shopping Lists Are Not Strategies

Here is what a typical “AI strategy” looks like in practice. A company identifies a dozen or so processes that seem ripe for AI automation. Perhaps a vendor delivers an impressive demo. Perhaps a competitor announces an AI initiative and the board starts asking questions. Perhaps a consulting firm produces a report showing billions of dollars in potential AI-driven value for the industry.

So the company makes a list. Automate invoice processing. Improve demand forecasting. Deploy a customer service chatbot. Build a recommendation engine. Use computer vision for quality inspection. The list is sensible. Each item on it is a real opportunity. Most will produce a positive return on investment if executed competently.

But a list of good ideas is not a strategy.

Richard Rumelt, perhaps the most incisive thinker on strategy of the past half-century, made this point with devastating clarity. A strategy, he argued, is not a set of goals. It is not a vision statement. It is not a list of things you want to do. A strategy is a coherent response to a specific challenge. It begins with a diagnosis of the situation, defines a guiding policy for dealing with that situation, and specifies a set of coherent actions that carry out the policy.

Rumelt called this the kernel of strategy: diagnosis, guiding policy, coherent actions. And he argued that the vast majority of what passes for strategy in business is actually something else entirely — what he bluntly labeled bad strategy.

Bad strategy isn’t just the absence of good strategy. It’s an active substitution. Organizations create the appearance of strategic thinking without doing the hard work it actually requires. And nowhere is this substitution more prevalent than in the AI strategies of modern corporations.

•  •  •

Why Smart Leaders Fall Into the Trap

Let me be direct about something: the leaders falling into the pilot trap are not stupid. They are, in most cases, experienced, thoughtful executives who have led successful transformations before. They fall into the trap not because they lack intelligence but because the trap is well designed.

Three forces conspire to pull smart leaders into the pilot trap.

The first is vendor pressure. The AI vendor ecosystem is enormous, well-funded, and extraordinarily good at selling. Every vendor arrives with a demo that seems to solve a real problem. And each one does, in isolation. The trouble is that buying solutions to individual problems is not the same as solving the strategic challenge facing the business. It’s like furnishing a house by buying every attractive piece of furniture you encounter, without ever deciding what kind of home you want to live in. You end up with a lot of furniture and no coherent living space.

The second is peer pressure. When your competitors announce AI initiatives, it creates a gravitational pull. No CEO wants to tell the board that the company is “behind” on AI. So companies launch initiatives not because they’ve identified a strategic need, but because others are doing it. This is strategy by mimicry, and it is reliably poor. Your competitor’s crux is almost certainly not your crux. Their AI priorities reflect their strategic situation, not yours. Copying their moves is like taking someone else’s prescription medication because they seem healthy.

The third is the tyranny of the pilot. Pilots are irresistible to large organizations because they feel low-risk. “Let’s just try it and see.” This sounds reasonable. It sounds prudent. But it creates a pernicious dynamic. Because pilots are small and low-commitment, they don’t require strategic thinking. You don’t need to diagnose the critical challenge facing the business to launch a chatbot pilot. You just need a budget and a vendor. And because each individual pilot is easy to justify, the portfolio of pilots grows without anyone ever asking whether the portfolio itself makes strategic sense.

Forty-seven pilots later, you’re NovaCom.

•  •  •

Bad Strategy in Disguise

Rumelt identified four hallmarks of bad strategy, and each one maps with uncomfortable precision onto how most companies approach AI.

The first hallmark is fluff. Fluff is the use of inflated, abstract language to create the illusion of strategic thinking. In the AI world, fluff sounds like this: “We will leverage the power of artificial intelligence to transform our business and deliver exceptional value to stakeholders.” This says nothing. It commits to nothing. It diagnoses nothing. And yet sentences almost identical to this one appear in AI strategy documents across every industry. They are the strategic equivalent of empty calories — they fill space without providing nourishment.

The second hallmark is failure to face the challenge. A strategy must grapple with a specific, named challenge. But most AI strategies skip this step entirely. They begin with the technology (“AI can do amazing things”) and work backward to find applications, rather than beginning with the challenge (“here is what’s threatening our competitive position”) and working forward to find solutions. This inversion is the root cause of the pilot trap. When you start with the technology, every application looks equally promising. When you start with the challenge, only a few applications matter.

The third hallmark is mistaking goals for strategy. Goals tell you where you want to end up. Strategy tells you how to get there. Yet company after company publishes AI “strategies” that are nothing more than lists of ambitious goals: “50 percent of processes AI-enabled by 2027.” “Become an AI-first organization.” “Achieve $500 million in AI-driven value creation.” These are aspirations, not strategies. They do not diagnose a challenge, define an approach, or specify coherent actions. They are wishes wearing strategy’s clothing.

The fourth hallmark is bad strategic objectives. Even when companies move beyond goals to define specific objectives, the objectives are often incoherent — they conflict with each other, scatter resources across too many fronts, or ignore the constraints the organization actually faces. A company that simultaneously tries to build custom AI models, deploy six off-the-shelf tools, reskill its entire workforce, and overhaul its data infrastructure is not executing a strategy. It is executing a panic.

•  •  •

The Cost of the Trap

The pilot trap is expensive in ways that go beyond the budget line. Yes, companies waste money on AI initiatives that don’t move the needle. But the deeper costs are strategic.

Attention is finite. Every hour a leadership team spends reviewing forty-seven AI pilots is an hour not spent identifying the one or two AI applications that could genuinely transform the business. The pilot trap doesn’t just waste money. It wastes the scarcest resource of all: the focused attention of the people who matter most.

Organizational energy is depletable. Transformation requires people to change how they work. Every pilot that launches and then quietly dies — every initiative that creates a burst of excitement followed by a long fizzle — depletes the organization’s willingness to try again. AI fatigue is real. It’s the organizational immune response to too many half-hearted efforts, and once it sets in, it makes genuine transformation significantly harder.

Competitors who are strategic pull ahead. While NovaCom is running forty-seven pilots, somewhere a competitor is running three — but the right three. That competitor has diagnosed its critical challenge, identified where AI creates asymmetric advantage, and concentrated its resources accordingly. In two years, that competitor won’t just be more efficient. It will have built capabilities that NovaCom can’t easily replicate. The window for strategic AI advantage doesn’t stay open forever.

•  •  •

The Way Out: Finding the Crux

If the pilot trap is the disease, the cure is not more pilots, better pilots, or faster pilots. The cure is strategy. Real strategy. The kind Rumelt describes.

It begins with what Rumelt, in his later work, called finding the crux. The crux is the critical challenge — the one obstacle that, if overcome, unlocks disproportionate progress. In mountaineering, the crux is the hardest move on the route. Every other move matters, but the crux is the one that determines whether you reach the summit.

For AI transformation, the crux is almost never technical. It’s rarely “we need better AI models” or “we need more data scientists.” Instead, the crux tends to be strategic or organizational in nature. It might be:

  • We don’t actually know where we create differentiated value for customers, so we can’t tell where AI would strengthen versus commoditize our offering.

  • Our decision-making processes are so fragmented that no AI tool can improve them without first redesigning how decisions flow through the organization.

  • We have no mechanism for learning from our AI experiments, so each pilot starts from zero and we never build cumulative capability.

  • Our competitive position depends on relationships and judgment, not processes — and we haven’t thought about what AI means for a business built on human capital.

Each of these cruxes demands a fundamentally different AI strategy. An organization whose crux is fragmented decision-making needs a completely different approach from one whose crux is cumulative learning. But the company stuck in the pilot trap never gets to this level of specificity. It’s too busy doing a little of everything to do a lot of anything.

•  •  •

Simple, Not Easy

Here is the promise of this book, stated plainly: AI transformation can be made simple. Not easy — simple.

Simple means reducible to a clear set of steps, decisions, and principles that any competent leader can understand and apply. Simple means cutting through the complexity that vendors, consultants, and the technology itself seem designed to create. Simple means knowing what to do first, what to do next, and what not to do at all.

The framework in this book has five steps. They mirror the logic Rumelt laid out for strategy in general, applied specifically to the challenge of AI transformation:

Step One: Diagnose. Identify your critical AI challenge — the crux. This means looking past the technology and understanding, with precision, where your business creates value, where it’s vulnerable, and where AI can make a strategic difference. Not a list of opportunities. One crux.

Step Two: Prioritize. Find your sources of AI power. Apply a rigorous test to separate the AI applications that create durable competitive advantage from those that merely improve efficiency. Kill the distractions. Fund the strategic bets.

Step Three: Design. Build your guiding policy. Define the overall approach — the principles and boundaries that will channel your AI investments. A good guiding policy tells you what to say no to, not just what to say yes to.

Step Four: Execute. Take coherent action. Translate the guiding policy into organizational moves that reinforce each other: people, processes, data, and governance, all pointing in the same direction. Coherence, not speed, is what separates execution from scrambling.

Step Five: Adapt. Sustain advantage by finding the next crux. The AI landscape shifts constantly. Build the organizational discipline to re-diagnose, re-prioritize, and re-design as conditions change.

Each step comes with practical tools: decision matrices, scorecards, checklists, and processes you can apply Monday morning. This is not abstract theory. It is a workbook for leaders who are tired of spending millions on AI and getting PowerPoint decks in return.

•  •  •

What Happened to NovaCom

Let me return, briefly, to the forty-second floor.

After that Tuesday morning with the spreadsheet, NovaCom’s CEO did something unusual. He didn’t commission another AI review. He didn’t hire another consultant. He didn’t launch a forty-eighth pilot. Instead, he asked his leadership team a question that many of them found uncomfortable:

If we could only do one thing with AI — one thing that would change our competitive position — what would it be?

The first round of answers was predictable: everything on the current list seemed important. But he held the line. One thing. The argument that followed was the first real strategic conversation about AI that NovaCom’s leadership team had ever had.

It took them three weeks. Three weeks of uncomfortable honesty about where the business actually stood, what customers actually valued, and where competitors were actually gaining ground. Three weeks of letting go of the reassuring notion that forty-seven pilots meant they were making progress.

They found their crux. It had nothing to do with chatbots or invoice processing. It was something more fundamental: NovaCom’s business depended on technical proposals that took weeks to produce and won at a rate of 18 percent. The crux wasn’t efficiency in general — it was the speed and intelligence of their proposal process in particular. An AI capability that could cut proposal turnaround from weeks to days while improving win rates would fundamentally alter the company’s competitive position.

They killed thirty-one of the forty-seven pilots outright and folded what remained into three initiatives, all connected to the crux. Within a year, proposal turnaround had dropped by 60 percent and win rates had risen to 29 percent. The board stopped asking whether NovaCom was “doing enough with AI.” The answer was obvious.

That’s the difference between a pilot trap and a strategy.

•  •  •

The Road Ahead

This book is organized as a journey through the five steps. In Part I, which includes this chapter and the two that follow, we’ll complete the diagnosis. Chapter 2 will give you a precise diagnostic for bad AI strategy — a way to test whether what your organization calls a strategy actually is one. Chapter 3 will teach you how to find the crux: the specific, structured process for identifying the critical challenge where AI can make the biggest difference for your business.

Parts II, III, and IV will take you through the remaining steps: finding your sources of AI power, building the guiding policy, executing with coherence, and adapting for the long game.

But before we go further, I want to make one thing clear. This book asks you to do something that is simple but not comfortable. It asks you to stop. To stop launching pilots. To stop chasing the latest AI announcement. To stop equating activity with progress. And to start doing the hard, quiet, unglamorous work of strategic thinking.

The pilot trap is seductive because it feels like action. Strategy feels like waiting. But strategy is not waiting — it is the precondition for action that matters.

Forty-seven pilots, or three that change everything. The choice is simpler than you think.

 

CHAPTER TWO

The Four Hallmarks of Bad AI Strategy

A Scene from the Offsite

It's a Tuesday morning in the executive conference room on the forty-third floor. The sun is streaming through windows that overlook the city, and there's fresh coffee on the mahogany table. The Chief Information Officer stands at the front of the room, clicking through a PowerPoint presentation. Forty slides. Each one more polished than the last. Charts in jewel tones. Arrows pointing upward. A timeline that stretches three years into the future, color-coded and detailed. The title slide reads: "Our AI Transformation: A Strategic Roadmap for Enterprise Excellence."

The room is quiet except for the sound of pages turning and the subtle hum of the air conditioning. The board members have their tablets out, taking notes. The CEO nods thoughtfully. The CMO leans back in her chair. It all looks impressive—the kind of thing you'd show to investors or include in an annual report. This is strategy that photographs well.

Then a board member raises her hand. She's the type who has sat through a lot of presentations, and she has learned to listen for what's missing rather than what's said. She asks: "What specific challenge does this strategy address?"

The room goes quiet. Really quiet. The kind of silence that comes when a question cuts through layers of presentation and lands on something true. The CIO's hand hovers over the remote. Someone shuffles papers. The CEO shifts in his seat. And the board member—she waits. She already knows the answer. There isn't one.

This is bad strategy in its natural habitat. It doesn't announce itself. It doesn't come with a warning label. It looks expensive. It looks professional. It looks like a strategy. But when you ask what problem it actually solves, the room goes quiet because nobody has answered that question yet.

• • •

What Rumelt Taught Us

In 2011, the strategy professor Richard Rumelt published a book called Good Strategy Bad Strategy that gave many leaders a sudden, uncomfortable diagnosis of what they had been doing wrong. Rumelt's central insight is deceptively simple: bad strategy isn't just the absence of good strategy. Bad strategy is an active substitution. It's when you mistake motion for direction, when you confuse a goal with a strategy, when you use complexity to hide the fact that you haven't actually thought the problem through.

For anyone who has sat in on strategic planning sessions, this recognition lands like relief. You've been in that room. You know the feeling. The presentation looks like a strategy, but something is hollow at the center of it.

Rumelt identified four hallmarks of bad strategy. They're not esoteric. They're not hard to spot once you know what to look for. And they're everywhere—in corporate conference rooms, in military command structures, in startups, and most acutely, in how organizations approach AI.

This chapter takes Rumelt's framework and applies it directly to the AI strategies we see in organizations today. If you recognize your own strategy in these pages, don't despair. Recognition is the first step toward building something better. And that begins with understanding exactly what makes a strategy bad.

• • •

Hallmark One: Fluff

Open the AI strategy document of almost any large organization and you'll find a passage that reads something like this: "Leverage AI-driven insights to unlock transformative value across the enterprise while simultaneously enabling stakeholder empowerment through actionable intelligence frameworks that catalyze innovation ecosystems."

Read that aloud. Notice how impressive it sounds. Notice also that you have no idea what it means. That's not an accident. That's fluff, and it's the first hallmark of bad strategy.

Fluff is the substitution of jargon for thought. It's complexity weaponized to hide the absence of clarity. When a strategy document is filled with fluff, it's typically because the people writing it haven't actually done the hard work of thinking through what they're trying to accomplish. Instead, they've assembled words that sound strategic. They've strung together modifiers and abstractions until the whole thing becomes impenetrable.

Here's the test: Can you state your AI strategy in plain language to someone who is intelligent but hasn't spent their career in your industry? Not to a consultant. Not to other executives who will nod along. To a smart 14-year-old. If you can't, you don't have a strategy. You have a document.

Let's look at some real examples. A financial services company had this in their AI strategy: "Operationalize machine learning capabilities across our digital transformation initiative to enhance customer value propositions through intelligent automation." Ask yourself: What does that mean? What specific thing will the company do? What problem will it solve? The answer is: you don't know. Nobody does. That's the point of the fluff. It creates the illusion of strategy while avoiding the exposure of concrete commitments.

Compare that with this: "Our risk assessment process currently takes three weeks. During that time, opportunities expire and customers get frustrated. We're deploying AI to compress that timeline to 72 hours, which will let us compete with faster-moving fintech firms." You understand that immediately. You can see the problem. You can see the solution. You can imagine how to measure whether it works. That's clarity. That's the absence of fluff.

Fluff often hides behind the virtue of being "comprehensive." The strategy tries to cover everything. It addresses "enterprise-wide AI transformation." It talks about "digital capabilities" and "innovation platforms" and "intelligent systems." The broader the scope, the easier it is to hide behind abstraction. A real strategy is often narrower. It says: Here's what we're doing. Here's why. Here's why it's better than other things we could do. And here's what we're not doing—and why we're making that choice.

Watch out for these fluff markers in your own strategy documents: any phrase that could work equally well in a different industry, any sentence that contains more than three technical terms, any paragraph that doesn't contain a verb in the active voice, any claim about 'unlocking value' or 'transforming' something without specifying how you'll know it happened. These aren't just writing problems. They're thinking problems. And they're the sign that you're looking at bad strategy.

• • •

Hallmark Two: Failure to Face the Challenge

This is the most damaging hallmark. This is where strategies fail in the deepest sense—not because they're poorly written or unambitious, but because they start in the wrong place.

Most bad AI strategies begin with the technology. They start by asking: What can AI do? What are the latest capabilities? What tools are available? What are other companies doing? And then they work backward, trying to retrofit those capabilities onto the organization. "AI can process images. Let's build an image recognition system. AI can forecast. Let's create a forecasting tool. AI can chat. Let's deploy a chatbot." The technology looking for a problem instead of the problem looking for a solution.

A good strategy starts with diagnosis. It starts by asking: What is actually threatening us? What is our constraint? Where do we lose to our competitors? What is the bottleneck in our operations? What do customers complain about? What are we doing that competitors could do faster or cheaper or better? Only after you've named that specific challenge can you ask: How might AI help address it?

Here's the difference mapped out: The company that says, "We will deploy AI across our operations and achieve a 30% improvement in efficiency," hasn't named a challenge. That's shopping. They're shopping for AI because it's fashionable, and they hope that by deploying it everywhere, something good will happen.

The company that says, "Our proposals take three weeks to write and manually compile. Our competitors can turn proposals around in five days. We lose deals because of speed and we lose money because our team spends 40% of their time on administrative assembly. We're deploying AI to automate the document compilation and formatting, which should cut proposal time to five days and free up 30 hours per week of staff time," has named a specific challenge. That's strategy. You can see the gap between the current state and the desired state. You can see why this particular intervention matters. You can measure whether it works.

The failure to face the challenge is often a failure of nerve. It's easier to be vague about what you're trying to accomplish than to be specific and be wrong. If you say, "We will transform our organization with AI," and nothing happens, you can say the timeline was ambitious or the implementation was flawed. But if you say, "We will reduce proposal time from three weeks to five days," and you don't achieve it, you've failed in a way that's observable and specific. So many organizations choose vagueness.

Another form of this hallmark is the Whack-a-Mole strategy. The organization identifies seventeen different potential AI use cases, all potentially valuable, and pursues all of them simultaneously. There's no diagnosis of which challenge is most critical. There's no judgment about which problem, solved, would have the biggest impact. Instead, it's a distributed bet across the landscape. This looks comprehensive. It looks like the organization is thinking big. But it's actually a sign that it hasn't thought deeply. A real strategy makes choices. It says: Yes to this. No to that. Because we've diagnosed that this particular challenge, once solved, creates the conditions for everything else.

• • •

Hallmark Three: Mistaking Goals for Strategy

"Become AI-first by 2027." "Achieve 50% of revenue from AI-augmented products by 2025." "Reskill 10,000 employees in AI competencies." "Realize $500 million in value from AI transformation."

These are goals. They're very clear goals, and for many organizations, they're stated with impressive specificity. Dates. Numbers. Percentages. They look like strategy. But they're not. They're destinations. A strategy isn't a destination. A strategy is the vehicle, the route, the reasoning about why this particular route beats the alternatives.

This distinction matters because it's where many organizations get stuck. They set a goal—"We will become AI-first"—and then they treat the goal as if it's a strategy. They create initiatives to support it. They track progress against it. They celebrate when they move the needle. But they never actually answer the question: How do we get there? What's our approach? Why is that approach better than other approaches? What specific bets are we making? What are we betting against?

Let me walk through what it looks like to transform a goal into an actual strategy. Start with a bad goal-based plan: "Become AI-driven in our product design process by 2026."

That's a destination. To turn it into a strategy, you need three components. Rumelt calls them diagnosis, guiding policy, and coherent action.

Diagnosis: What's the actual challenge? Maybe it's this: Our product design cycle takes eighteen weeks. Competitor designs are in market in ten weeks. By the time our products launch, market preferences have shifted and we're selling yesterday's solutions. Our designers spend 40% of their time on iterative refinement that could be automated. We're losing market share to faster competitors.

Guiding policy: What's our approach to addressing this? Here's where you make choices: We're going to focus on automating the iterative refinement stage of design. We're going to build internal AI tools rather than licensing external tools, so we can customize them to our specific design language. We're not going to try to replace designers—we're going to make designers faster. We're going to start with two design teams as a pilot, not all twelve teams. We're not going to reskill our entire design organization—we're going to hire three specialists who understand both design and AI, and have them lead the pilot.

That's a policy. It's a set of intentional constraints and choices. It says what you're doing and, implicitly, what you're not doing. It creates a framework within which decisions get made.

Coherent action: What are the specific moves that follow from this diagnosis and this policy? Phase one, pilot with the automotive design team and the consumer products design team. Hire a lead AI engineer and two specialist product designers. Build a custom tool that learns from our historical design iterations. Run a six-week pilot. Phase two, measure whether the pilot compressed the cycle time and freed up designer hours as predicted. If yes, expand to four more teams. If no, understand why and adjust. Don't proceed to phase three until you've learned something from phases one and two.

Notice the difference. The goal was: "Become AI-driven in our product design process by 2026." The strategy is: Here's what's broken. Here's how we'll fix it. Here's why this particular approach makes sense for us. Here's how we'll test our assumptions. Here's what happens next if we're right, and here's how we'll know if we're wrong.

Most organizations that fail at AI strategy fail here. They have a goal but no logic connecting the goal to the work. They have a destination but no map. So they fill in the gaps with either fluff—"We'll leverage our AI capabilities"—or scattered initiatives that don't hang together. The initiatives might all be individually defensible, but they don't form a coherent path to the destination. And so the organization exhausts itself in activity without clear direction.

• • •

Hallmark Four: Bad Strategic Objectives

Sometimes organizations do the work. They diagnose. They create a guiding policy. They identify concrete objectives. But then those objectives turn out to be either contradictory, scattered, or simply impossible given actual constraints.

Here's a real example. A mid-sized insurance company created an AI strategy with the following objectives: First, build custom machine learning models for claims prediction and pricing. Second, deploy Salesforce Einstein, UiPath RPA, and a new Palantir analytics platform across the organization. Third, reskill ten thousand employees in AI competencies within eighteen months. Fourth, completely overhaul the data infrastructure to support real-time AI decision-making. Fifth, do all of this with no new budget—repurpose existing IT spending.

On a spreadsheet, these look like ambitious objectives. But they're not strategic objectives. They're a panic attack with a Gantt chart.

Start with the contradiction: How do you simultaneously build custom ML models and deploy three different vendor platforms? These require different skill sets, different governance models, and different maintenance approaches. They pull resources in different directions. You can do one or the other, but doing both is just chaos disguised as comprehensiveness.

Then the human constraints: Ten thousand employees can't be reskilled in eighteen months if you only have a staff of fifty L&D professionals. That's not a timeline problem. That's a math problem. You could reskill two hundred employees in eighteen months, and maybe a thousand if you partner with external training companies and make a serious investment. But ten thousand? With the same team that was already managing other training initiatives? It's not going to happen. So you'll either miss the objective or you'll execute a theatrical reskilling that doesn't actually build competence.
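
It helps to make that arithmetic explicit. The sketch below is a minimal back-of-the-envelope check that uses only the figures already in the example; the only thing it computes is the rate each trainer would have to sustain.

```python
# Back-of-the-envelope check on the reskilling objective, using the figures above.
employees_to_reskill = 10_000
ld_professionals = 50
months = 18

per_trainer_total = employees_to_reskill / ld_professionals   # 200 employees per trainer
per_trainer_per_month = per_trainer_total / months            # ~11 employees per trainer, per month

print(f"{per_trainer_total:.0f} employees per trainer overall, "
      f"{per_trainer_per_month:.1f} per trainer per month")
# Each L&D professional would have to build genuine AI competence in roughly
# eleven people every month, for eighteen straight months, on top of the
# training programs they already run.
```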

And the infrastructure overhaul: Completely redesigning your data infrastructure while simultaneously building ML models and deploying three new platforms is like rebuilding an airplane's engine while the plane is in flight. It's possible, but it requires acknowledging the risk and the difficulty. Most organizations that attempt it simply fail silently—the projects drag, the timelines slip, and two years later they're still running the old infrastructure while the new system is 'still being built.' That's not strategy. That's wishful thinking.

Strategic objectives should pass what we might call the coherence test. Do they point in the same direction? Do they use the same resources efficiently? Do they account for real constraints? Do they actually flow from the guiding policy you've established?

Better objectives for that insurance company might be: First, choose one priority problem—let's say claims processing. Second, overhaul the claims-related data infrastructure. Third, hire or train ten claims specialists in ML, and build one custom model for claims prediction. Fourth, do this in eighteen months with a dedicated team of fifteen people. That's selective. That's constrained. But that's also something you can actually accomplish. And accomplishing something concrete is how you build momentum and learning for the next phase.

The worst strategic objectives are the ones that scatter resources. If everything is important, nothing is important. If you're pursuing six different strategic objectives, each one is getting one-sixth of your attention. Most organizations don't have enough organizational muscle to do that well. Good strategy requires choosing what not to do as much as it requires choosing what to do.

• • •

The Diagnostic: Is Your AI Strategy Bad?

If you've read this far, you're probably wondering: Is mine bad? Here's a practical self-assessment. For each of Rumelt's four hallmarks, ask yourself these questions. Be honest. Your answer determines whether you have a strategy or a document.

Testing for Fluff

Can you state your AI strategy in a single paragraph, in plain English, without jargon? If someone asks, "So what's your AI strategy?" can you explain it as clearly as you'd explain to an intelligent friend who works in a different field? If you find yourself reaching for words like 'leverage,' 'unlock,' 'optimize,' 'empower,' or 'transform,' you're reaching for fluff. Those words are fine as seasoning, but they shouldn't be the meat of your explanation.

Here's a test: Write out your strategy in two hundred words or less. Then give it to three people from different parts of the company who aren't involved in strategy. Ask them to summarize it back to you in their own words. If they summarize it correctly and specifically, you've got clarity. If they come back with generalizations or if they ask clarifying questions, you've got fluff.

Testing for Failure to Face the Challenge

Does your strategy begin with a problem or a solution? If you started by identifying what's broken and then asked how AI might help, you've faced the challenge. If you started with "Here's what AI can do for us," you've inverted it. Ask yourself: Could we describe our AI strategy to someone without mentioning AI at all? If we talked about the problem we're solving, would AI be an obvious tool or would it feel forced? If it feels forced, you haven't really diagnosed the challenge. You've chosen a technology and worked backward.

Another test: Name your competitor or the scenario that would make your business obsolete. If your AI strategy doesn't directly address that threat, you're not facing the real challenge. You might be addressing a nice-to-have. You're probably not addressing what's actually at stake.

Testing for Goals Masquerading as Strategy

Does your strategy document contain a clear diagnosis of what's broken, a guiding policy about your approach, and a set of coherent actions? Or does it contain goals, timelines, and milestones? Goals are important, but if they're the primary content of your strategy document, you've mistaken a destination for a route map.

Here's the test: For every goal in your strategy, write out the logic that connects it to action. "Achieve $500 million in AI value" is a goal. But write out: Here's how we'll create value. These are the specific businesses or operations where we'll focus. This is why we believe those areas are most susceptible to AI improvement. These are the moves we'll make. If you can't write that logic, the goal isn't grounded in strategy.

Testing for Bad Strategic Objectives

Do your strategic objectives require the same resources? Do they point in the same direction? Can you rank them in terms of priority, or are they all equally important? If they all feel equally important, you probably have too many.

Do your objectives fit within actual constraints? Real constraints like: the budget you actually have, the people you actually have, the timeline you actually have, the technical infrastructure you actually have. If your objectives would require doubling your engineering staff or a complete rebuild of your data infrastructure or six months of nobody doing anything else, they're not realistic objectives. They're fantasies with deadlines.

Your AI strategy might be bad strategy if you can't explain it without jargon, if it doesn't name a specific challenge, if it mistakes a goal for a diagnosis, or if its objectives conflict or require impossible constraints.

• • •

The Board Member's Question

Return with me to that offsite. The board member's question—"What specific challenge does this strategy address?"—wasn't hostile. It was actually the most strategic question anyone had asked all morning. It was a question about coherence. About whether the strategy was actually rooted in reality or whether it was floating free of it, impressive but untethered.

The room was quiet because nobody had asked that question before. The strategy had been built in response to other pressures: competitive anxiety, FOMO about AI, the consulting firm that recommended a "comprehensive AI roadmap," the CEO's conviction that AI was going to be strategic (which it might be, but only if you understand how). But nobody had started with the diagnosis. And so when the board member asked for it, they had to confront the fact that they didn't have one.

This is where most organizations are with AI. They know they should have a strategy. They know competitors are investing. They know it could matter. But they haven't done the hard work of diagnosis. They've built a strategy on the foundation of wanting to be strategic rather than on the foundation of understanding what's actually at stake.

Bad strategy persists because it's comfortable. A vague strategy doesn't expose you to the risk of being wrong in a specific way. It lets you claim credit if things improve and blame implementation when they don't. A real strategy is uncomfortably specific. It names what it's trying to do. It acknowledges trade-offs. It's vulnerable to being wrong.

But here's what a real strategy also does: It works. When you've diagnosed the actual challenge, when you've created a guiding policy that makes sense for your situation, when you've set coherent objectives that actually drive toward solving that challenge, the whole organization lines up. People know what to do. Resources flow in the right direction. You start to see results not because you executed a perfect plan, but because everybody's effort is pointed the same way.

You can't build a good strategy on the foundation of a bad one. Bad strategy is a hole you have to climb out of, not a starting point you can build on. Diagnosing your own bad strategy—recognizing the fluff, facing the actual challenge, grounding your goals in logic, making your objectives coherent—is uncomfortable. But it's necessary.

In the next chapter, we'll turn to the harder work: building a good strategy. We'll learn to find the crux of your AI challenge—the thing that, if solved, changes everything else. We'll learn to move from diagnosis to decision. We'll learn to avoid not just bad strategy, but the temptation to prefer motion over direction.

But first, you have to do what that board member did: You have to ask the question. You have to sit with the silence that follows when nobody has a specific answer. And you have to be willing to do the work to give the organization something better than a forty-slide presentation. You have to give it strategy that actually makes sense.

 

CHAPTER THREE

Finding the Crux

The hospital network CEO stared at the spreadsheet on her desk with a mixture of hope and desperation. Twenty-three AI vendors. Twenty-three pitches. Each one promising transformation. The radiology company had impressive numbers: 94% accuracy in detecting tumors, faster readings, radiologists freed to focus on complex cases. The scheduling optimization firm claimed they could reduce wait times by 18% and improve staff utilization by 22%. The billing AI promised to cut claim denials in half. And then there was the patient engagement platform, the clinical documentation tool, the drug interaction checker, the infection risk predictor—the list went on.

She had asked her CTO for help prioritizing. He had dutifully created a matrix: impact vs. effort, with each opportunity scored and ranked. It looked rigorous. But when she asked the simple question—"Which one should we do first?"—the CTO hesitated. Then he said, "Well, depends on your strategy." That was the problem. They didn't have one. Or rather, they had many strategies, diffused across departments and budget cycles, which amounted to the same thing.

She turned to her Chief Medical Officer. "Of all these AIs, which matters most?" The CMO gave her a non-answer: "They're all important." The CEO tried a different angle with the CFO. "Which will save us the most money?" The CFO couldn't say without more analysis. By the end of the week, the CEO had spent hours in meetings and had no clarity. This is what I call the diagnosis problem. Organizations have access to powerful AI tools and use cases, but they lack a disciplined way to figure out which one is the strategic crux.

• • •

What Is the Crux?

The concept of the crux comes from mountaineering. When climbers approach a challenging route, they don't treat every difficult move as equally important. The crux is the single hardest, most critical move on the entire route. Everything else on the climb—however difficult it may be—is less consequential than mastering the crux. Once you get through the crux, the rest of the climb becomes manageable. This isn't opinion; it's practical necessity. A climber doesn't waste energy fretting about every pitch equally. They focus on the crux.

Business strategy works the same way. In his book The Crux, Richard Rumelt argues that strategy is not a list of nice-to-haves or a spreadsheet of initiatives. Strategy is the diagnosis of what really matters—the critical challenge that, once addressed, unlocks everything else. It requires brutal honesty about your situation and disciplined focus on one thing.

For most organizations considering AI, the crux is not the most obvious problem. It's not necessarily the one generating the most noise from vendors or the one that appears at the top of a capability matrix. The crux is the challenge—often unexpected, sometimes uncomfortable—that sits at the intersection of strategic importance and organizational capability. In healthcare systems, it might not be clinical decision-making AI at all. In manufacturing, it might not be predictive maintenance. The crux forces you to look past the seductive pitch and ask: What is the actual bottleneck limiting our ability to succeed?

• • •

Why Companies Miss the Crux

If finding the crux is so valuable, why do so few organizations do it? There are three primary reasons, and understanding them is the first step to avoiding these traps.

First, organizations start with AI capabilities instead of business challenges. This is the vendor trap. When a company begins its AI journey by asking, "What can AI do for us?" instead of "What business challenges are most critical?", it inevitably ends up with a capability-driven roadmap. The sales engineer walks through use cases. The team thinks, "Yes, that could help us." Before long, there's a list of ten applications, all technically feasible, none strategically focused. The conversation becomes about implementation difficulty and cost, not about which capability would actually change the trajectory of the business. This approach treats AI like a toolkit to be deployed across many problems, rather than as a strategic lever to solve a specific crux.

Second, organizations confuse urgency with importance. The loudest problem in the room is rarely the most strategic one. A production outage feels urgent but might not be the crux. A customer complaint that's making headlines feels urgent but might address a symptom, not the root cause. In hospitals, staffing shortages are urgent. Budget deficits are urgent. But neither might be the crux blocking AI value. The crux often hides beneath the noise, quietly limiting potential. You find it through diagnosis, not by chasing fires. This is hard because it requires patience and discipline in a culture that rewards reactivity.

Third, many organizations lack a structured process for diagnosis. Without a diagnostic framework, decision-making defaults to consensus. And consensus, by its nature, produces lists, not focus. Everyone gets heard. Everyone's concern makes it into the strategy. The result is diffusion: many initiatives, no clear priority, dispersed resources. A hospital network might greenlight both the radiology AI and the scheduling AI because they're both important and stakeholders champion each. But without diagnosis, you don't have a reason to choose one over the other, so you choose both—and then neither gets the organizational focus needed to succeed.

• • •

The Diagnosis Process

Finding the crux requires a disciplined diagnostic process. This isn't brainstorming. It's not a workshop where you stick notes on a whiteboard and vote on favorites. Diagnosis is investigative work. It's honest accounting of your competitive position and ruthless assessment of what actually matters. Here's a structured four-step method:

Step One: Map Your Value Chain Honestly

Start with a clear-eyed map of how you create value. Not how you think you create value, but how you actually do. For a hospital, this isn't abstract. Value comes from clinical excellence, efficiency, and safety. But which part of the value chain generates differentiation? Is it in diagnostics? Is it in treatment outcomes? Is it in patient experience? Is it in cost management? Be specific. Many organizations don't have clarity here because they've never forced themselves to choose.

Step Two: Identify Critical Vulnerabilities

For each key part of your value chain, ask: Where are we vulnerable? Where could we lose our competitive position? This requires intellectual honesty. You might be strong in service quality but weak in cost. You might be fast to market but poor at retention. You might have great customer relationships but mediocre operations. Vulnerabilities are the gaps between your current state and what competitors or market leaders achieve. They're not always obvious from inside the organization.

Step Three: Test for AI Suitability

Not all critical vulnerabilities are suitable for AI. AI excels in specific domains: pattern recognition at scale, speed, consistency, and learning from data. Some critical challenges are organizational or political—they won't be solved by technology alone. Others require domain expertise, judgment, or human creativity. For each vulnerability you've identified, ask: Does AI have a meaningful role? Is the vulnerability rooted in the types of problems AI solves well? If a hospital's critical vulnerability is slow clinical decision-making because doctors lack information, AI might help. If the vulnerability is that doctors lack the authority to make certain decisions, AI won't solve it.

Step Four: Name the Crux

At the intersection of strategic importance and AI suitability, there is typically one challenge that stands out. Sometimes it's obvious. Sometimes it requires debate and discussion. But when you've done the first three steps honestly, the crux usually emerges. Name it. Make it explicit. Your crux should be specific enough that someone unfamiliar with your organization can understand what you're trying to solve. "Improve operations" is not a crux. "Reduce surgical scheduling conflicts caused by incomplete patient data" is.

• • •

The Crux Test

Once you think you've identified your crux, test it. A valid crux passes five questions. If your proposed crux fails any of these, keep diagnosing. You haven't found it yet.

Is it specific? Your crux should be concrete, not vague. "Improve efficiency" fails. "Reduce the time between patient intake and first clinical assessment from an average of 45 minutes to 20 minutes" passes. The test is this: Could you measure whether you've addressed it? If you can't clearly define success, you haven't found your crux.

Is it a challenge? A crux is a problem to be solved, not a goal to be achieved. "Become the most innovative hospital network in the region" is a goal. "Understand why our innovation efforts haven't translated into adoption by front-line clinicians" is a challenge. Challenges are framed as obstacles. Goals are outcomes. A true crux is diagnostic—it's a statement of what's blocking you.

Is it yours? Avoid generic statements that could apply to any competitor in your industry. "Need to improve customer experience" might be true for every hospital, every bank, every airline. But it's not your crux—it's an industry trend. A valid crux is specific to your competitive position. Maybe your crux is different from your competitor's crux. That's the point. Your crux should be rooted in your particular strategy and circumstances, not borrowed from an industry playbook.

Does solving it create asymmetric advantage? If you solve your crux, will you be measurably better positioned than competitors? Will you gain an edge that's hard to replicate? This is the strategic test. If solving your crux only brings you to parity with competitors, you haven't found the right one. A true crux, once addressed, shifts competitive positioning.

Can AI genuinely address it? Finally, make sure your crux is actually suitable for AI. Some genuinely critical challenges aren't. If your crux is that your organization lacks vision, AI won't fix it. If it's that your culture resists change, AI won't solve it. These are real cruxes, but they require different interventions. If you're positioning AI as your strategic lever, your crux should be one that AI is genuinely equipped to help with.

• • •

Case Study: The Hospital Network's Hidden Crux

Return to the hospital network CEO and her twenty-three AI opportunities. She decided to apply the diagnosis process. She started by mapping the value chain. The network operated twelve facilities across three states. Some served urban populations, others rural. They competed on clinical quality and cost-effectiveness. Her team identified that differentiation came from coordinated specialty care—being able to treat complex cases locally rather than referring patients elsewhere.

In the vulnerability analysis, a pattern emerged. Coordinating care across the network was a nightmare. Patient records existed in different systems across different facilities. A cardiologist at one facility couldn't easily access lab results from another. The system had grown through acquisitions, and integration had never been prioritized. This created dangerous gaps. Sometimes patients repeated tests because another facility didn't know a test had been done. Sometimes specialist consultations missed critical history because it wasn't available in a readily accessible form. The network was essentially twelve separate organizations using the same brand.

This was a vulnerability to the core strategy. To compete on coordinated specialty care, you need coordinated data. The CEO had been thinking about clinical AI—radiology AI could help diagnose, scheduling AI could optimize utilization. But the AI wouldn't fully work without unified data.

When tested against the crux framework, the real crux became clear: the network's inability to unify patient data across facilities was blocking every strategic objective. Radiology AI couldn't reach its potential without unified data. Scheduling optimization couldn't work without a complete view of patient availability and history. Even basic care coordination—the foundation of their strategy—was hampered. And data unification was specifically something that needed to happen before most other AI could deliver value.

The crux wasn't any of the twenty-three individual AI applications. It was the data integration problem. Once named, this crux reframed everything. The radiology AI and scheduling AI didn't disappear from the roadmap, but they moved downstream. First priority: establish a unified patient data layer. Everything else would follow. This wasn't what the vendors wanted to hear—they wanted to sell specific applications. But it was what the strategy demanded.

• • •

The Discomfort of Specificity

Finding your crux is uncomfortable. It forces specificity in a world where many leaders prefer optionality. Optionality feels safe. If you commit to multiple initiatives, at least some will succeed. If you commit to one crux, you're taking a bet. You're saying: "This matters most. Everything else is secondary." That's a statement that can be proven wrong.

But as Richard Rumelt shows in his work, strategy IS the act of choosing. It's not a document that avoids making choices. Strategy narrows focus. It sacrifices some opportunities to pursue others. That's not a bug—it's the definition of strategy. When the hospital network CEO identified data integration as the crux, she had to say no to launching five other AI initiatives simultaneously. That was strategically sound. It was also hard. But the alternative—doing everything, prioritizing nothing—was a guarantee of mediocrity.

The diagnosis process is how you build confidence in that choice. You're not relying on gut feel or vendor influence. You've systematically identified where your organization is vulnerable, where AI can help, and what single challenge unlocks everything else. That diagnostic rigor is what separates strategy from wishful thinking.

With your crux identified, you're ready for the next step: finding your sources of AI power. Chapter 4 moves from diagnosis—understanding what matters—to strategy—understanding how you'll win. Your crux tells you what battle you're fighting. Your guiding policy will tell you how to fight it.

 

CHAPTER FOUR

Sources of AI Power

A mid-size logistics company made a transformative decision: invest $12 million in artificial intelligence across eight different initiatives. The leadership team was convinced that AI, applied broadly across their operations, would unlock new efficiencies and competitive advantages. Within eighteen months, they had deployed sophisticated machine learning models, integrated them with legacy systems, and trained hundreds of employees to work with these new tools. Yet when the dust settled and the results were tallied, only one initiative—route optimization—had fundamentally transformed their economics. The other seven initiatives, despite consuming 85% of the investment and despite equally impressive technical execution, had yielded modest improvements that any well-resourced competitor could replicate within months.

Why? The difference lay not in the quality of the AI implementations themselves, but in whether each initiative tapped into what strategy theorist Richard Rumelt calls a "source of power." Route optimization worked because it amplified the company's existing strength in asset management, eliminated a critical bottleneck in their operations, and positioned them ahead of a clear industry dynamic—customers demanding faster, more reliable delivery. The other seven initiatives were purely efficiency plays, the equivalent of automating work that was already being done reasonably well. They improved metrics but didn't create durable advantage. In strategic terms, they were noise masquerading as signal.

This chapter introduces one of the most practical frameworks for evaluating where AI can create lasting advantage and where it will merely create temporary efficiency gains. It comes from Richard Rumelt's seminal work "Good Strategy Bad Strategy," and it is essential reading for anyone responsible for allocating AI investment. The framework asks a deceptively simple question: What is the source of power in this initiative? Is it leverage? Is it the removal of a bottleneck? Is it riding a wave of industry dynamics? Or is it simply focus—the concentration of resources that compounds competitive advantage? Without clear answers to these questions, AI investments become lottery tickets masquerading as strategy.

• • •

Understanding the Four Sources of Power

Strategy works by harnessing sources of power. Rumelt identified several mechanisms by which organizations create durable competitive advantage, and understanding these mechanisms is essential for any leader evaluating where AI can create real advantage versus where it merely creates short-term efficiency. Too many organizations treat strategy as though it were aspirational—a statement of ambition that competes less against reality and more against what we wish to be true. Real strategy, Rumelt argues, is fundamentally about power: identifying and harnessing mechanisms that create real competitive asymmetry.

Four sources of power are particularly relevant for AI-driven strategy. The first is leverage: the ability to gain disproportionate results from a focused investment. The second is the removal of constraint—Rumelt's concept of chain-link systems, where organizations are only as strong as their weakest link. The third is dynamics: the ability to recognize and position for waves of change in your industry. The fourth is focus: the concentrated application of resources that compounds advantage over time. None of these sources of power is novel. What is novel is applying this framework specifically to AI investment decisions, where the temptation to spread resources thinly across many initiatives—because AI "seems like it could help everywhere"—is nearly irresistible.

• • •

Leverage: Amplifying Your Strengths

Leverage occurs when a small investment yields disproportionate results. In the context of AI, that happens when the technology amplifies an existing strength or removes a bottleneck that has been constraining the whole system. The key insight is subtle but crucial: the leverage is not in the AI model itself. The power is in what the model releases.

Consider a retail organization with a sophisticated supply chain and an exceptionally capable distribution network. They invest in AI-powered demand forecasting. The model itself is technically excellent—it reduces forecast error by 15%. But that is not the source of leverage. The source of leverage is that this 15% improvement in forecast accuracy unlocks the full potential of their distribution network. Previously, distribution was constrained by forecast uncertainty; they had to stock safety inventory to hedge against being wrong. The improved forecast sharply reduces that need. They can run leaner, ship faster, and respond more dynamically to actual customer demand. The distribution network goes from 70% utilized to 94% utilized. That is leverage. The AI amplified an existing strength.

This is fundamentally different from most efficiency plays. When a company uses AI to automate data entry, it saves the cost of data entry. That is not leverage; that is simply cost reduction. When a company uses AI to detect anomalies in a process that was already reasonably controlled, they catch maybe 5% more anomalies. That is not leverage; that is incremental improvement. Leverage occurs when AI addresses an input to a system that was constraining the whole system's output.

Identifying leverage opportunities requires a different kind of analysis. Begin by mapping your organization's value chain. Where is demand highest? Where is the constraint most binding? Which existing strength would compound if you removed the constraint that keeps it from operating at full capacity? These are the places where AI leverage might exist. The technical question of whether AI can solve the constraint is secondary. The strategic question of whether solving the constraint will unlock existing strength is primary.

• • •

Chain-Link Systems: Fixing the Weakest Link

Rumelt's concept of chain-link systems captures a harsh reality: a system is only as strong as its weakest link. A supply chain is constrained not by how competent procurement or inventory management is on average, but by whichever of those links (or any other) is weakest. A manufacturing operation is limited not by its best capabilities in production quality or equipment, but by the link in the chain that is least capable. Strategy is, among other things, the process of identifying weak links and strengthening them. AI can be a tool for strengthening weak links, but only if you've identified them correctly.

Here is where many organizations go fundamentally astray. They apply AI not to weak links but to strong links, because strong links are where the data is cleanest and the wins are easiest to demonstrate. A manufacturer's quality control process might be already quite good—they catch 99.2% of defects. The data is clean. The problem is easy to model. So they build an AI system to detect defects, improve the catch rate to 99.7%, publish the success, and feel excellent about their AI investment. Meanwhile, the actual constraint on their system—supplier reliability—remains unaddressed. No amount of inspection AI fixes bad inputs. The weak link was never quality control; it was upstream.

The temptation toward strong-link AI is understandable. Strong links generate easily measurable wins. Weak links are often organizational or process-based, hard to quantify, and politically complicated. Addressing a weak supplier relationship or fixing a fragmented forecasting process lacks the clean technical elegance of building a machine learning model. But strategy is not about elegance. It is about power. And power, in a chain-link system, comes from addressing the constraint.

Evaluating AI investments through a chain-link lens requires asking a harder set of questions. What is actually constraining our system? Not "what problem would AI be good at solving," but "what constraint, if removed, would unlock disproportionate value?" Once you've identified the actual constraint, the secondary question is whether AI can help address it. Often, it can. But sometimes the constraint is not technical—it is organizational or structural—and AI is the wrong tool entirely.

• • •

Dynamics: Positioning for Waves of Change

Industries are not static. They move in waves. Sometimes these waves are driven by technology shifts (the internet, mobile, cloud). Sometimes they are driven by customer preference changes (shift to remote work, demand for sustainability). Sometimes they are driven by competitive dynamics (the rise of platform businesses, the consolidation of markets). Strategy is partly the art of reading these waves and positioning your organization to ride them rather than fight them.

AI has created several distinct dynamics that are reshaping industries in real time. The first is the shift from human judgment to data-driven decisions in domains where judgment was previously the primary mode. This dynamic is running through consulting, financial services, recruiting, and dozens of other sectors. Organizations that position AI investment to ride this wave—building capability in data-driven decision-making before it becomes a table stakes expectation—gain advantage. Organizations that resist the wave until it is irresistible pay the cost of late adoption.

The second dynamic is the customer expectation shift around personalization and speed. Customers increasingly expect organizations to know their preferences, anticipate their needs, and respond with customized solutions at unprecedented speed. This dynamic is running through retail, media, financial services, and healthcare. AI is the primary enabling technology for this dynamic. Organizations that invest in AI-driven personalization early gain data advantages and learning advantages that compound. Late movers inherit a customer base trained to expect personalization, but no data heritage to support it.

The third dynamic is the emergence of new business models that were not economically viable without AI. Predictive maintenance was rarely economical before machine learning because the cost of prediction exceeded its value. Ride-sharing at scale was not viable without algorithmic dynamic pricing and real-time matching. Algorithmic trading in its modern form depends on machine-driven prediction at speeds no human team can match. These business models are now reshaping their respective industries. Organizations that position AI investment to enable these new models position themselves ahead of the dynamic. Organizations that fail to see the dynamic, or see it but move slowly, risk disruption.

The power of positioning for dynamics is that you are not just improving efficiency; you are aligning your organization with the direction of industry change. When you ride a dynamic, initial advantages compound. You attract talent because you are seen as forward-thinking. You attract customers because your solutions feel native to their changing expectations. You attract capital because investors recognize that your strategy is aligned with industry movement. When you fight a dynamic or ignore it, you face headwinds on every front.

• • •

Focusing Effects: The Power of Concentration

The final source of power is focus. This source is distinct from the others because it is not about amplifying a particular advantage, removing a particular constraint, or positioning for a particular wave. It is about the concentrated application of resources and what that concentration enables.

When you focus AI resources on one decisive point instead of spreading them across many initiatives, several compounding effects occur. First, data gets richer. Instead of fragmenting your training data across eight use cases, you concentrate it on one. Richer data means better models. Second, models get better faster because the team is learning about one problem domain in depth rather than learning eight problem domains superficially. Third, organizational learning accelerates because the entire organization is learning to work with AI in one particular context, not trying to assimilate AI across many contexts simultaneously. Fourth, the proving ground for your AI capability is focused, so the early wins are deeper and more visible, which generates organizational momentum.

Scatter creates waste. This is a rule that holds universally in competitive strategy, and it holds doubly true in AI strategy. Every initiative requires initial investment in infrastructure, training, and organizational change management. When you scatter across many initiatives, you are paying this fixed cost many times. Every initiative requires some portion of your scarce AI talent. When you scatter, you are fragmenting talent. Every initiative has to learn independently how to embed AI into your organizational processes. When you scatter, you are relearning the same organizational lessons eight times.

Focus creates power. It concentrates resources, accelerates learning, generates visible wins early, builds organizational capability in one domain that can then be extended to others, and positions you to develop defensibility and true competitive advantage. An organization that is willing to say "we will build an AI-driven route optimization capability and we will make that capability state-of-the-art" can achieve something that an organization trying to dabble in eight initiatives cannot: genuine competitive advantage that competitors cannot replicate quickly.

• • •

The Power Mapping Exercise: From Intuition to Analysis

So how do you operationalize these insights? How do you move from understanding the sources of power to making concrete AI investment decisions? The answer is a structured exercise called power mapping, which converts strategic intuition into analytical discipline.

Begin by listing your top 8-15 AI opportunities. These should be concrete initiatives, not abstract possibilities. "AI for customer experience" is not concrete enough. "AI-powered product recommendations for returning customers" is concrete. Include initiatives that are already on the radar—things your organization is actively considering—as well as initiatives that have been suggested but not yet seriously evaluated.

For each initiative, create a simple scoring matrix. Score each opportunity on its alignment with each of the four sources of power. Use a 1-5 scale, where 1 means "no alignment with this source of power" and 5 means "strong alignment." For leverage, ask: Does this initiative amplify an existing organizational strength or remove a critical bottleneck? For chain-link, ask: Does this initiative address our actual weakest link or does it improve an already-strong area? For dynamics, ask: Does this initiative position us ahead of an industry wave or does it fight existing industry movement? For focus, ask: Can we concentrate our resources on this one initiative and achieve depth, or does this initiative require us to split focus?

Opportunities scoring high (4-5) across multiple sources of power are your strategic priorities. These are where AI can create durable advantage. Opportunities scoring high (4-5) on just one source are secondary priorities—pursue them only if you have the resources and you're willing to sacrifice focus on higher-priority initiatives. Opportunities scoring high on none of the four sources are, in strategic terms, distractions dressed as opportunities. They may be technically interesting. They may be what competitors are doing. They may have champions inside the organization who are passionate about them. But they are not strategic. Starving these initiatives of resources is not a failure of ambition; it is a success of strategy.
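For teams that prefer to see the arithmetic, the sketch below shows one way the scoring and sorting could be wired up. It is illustrative only: the initiative names, the scores, and the classification thresholds are hypothetical, and a spreadsheet would do the same job.

```python
# Illustrative sketch of the power mapping exercise (hypothetical data).
# Each opportunity is scored 1-5 against the four sources of power.

OPPORTUNITIES = {
    "AI-powered product recommendations": {"leverage": 4, "chain_link": 5, "dynamics": 4, "focus": 4},
    "Invoice processing automation":      {"leverage": 2, "chain_link": 2, "dynamics": 2, "focus": 3},
    "Demand forecasting for top SKUs":    {"leverage": 5, "chain_link": 4, "dynamics": 3, "focus": 4},
}

def classify(scores: dict) -> str:
    """Classify an opportunity by how many sources of power it aligns with strongly (4-5)."""
    strong = sum(1 for s in scores.values() if s >= 4)
    if strong >= 2:
        return "strategic priority"
    if strong == 1:
        return "secondary priority"
    return "distraction"

for name, scores in sorted(OPPORTUNITIES.items(),
                           key=lambda kv: sum(kv[1].values()), reverse=True):
    print(f"{name}: total {sum(scores.values())} -> {classify(scores)}")
```

The point is not the code; it is that the ranking falls out mechanically once the four scores have been argued over honestly.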

• • •

From Intuition to Strategy

Return, now, to the logistics company at the beginning of this chapter. Their route optimization initiative scored high on leverage—the optimization amplified their existing fleet and asset advantage. It scored high on chain-link—it addressed the constraint that was preventing their distribution network from operating at full utilization. It scored high on dynamics—customers increasingly demanded faster, more reliable delivery, and route optimization positioned the company ahead of that wave. And it scored high on focus—the organization concentrated its resources, its talent, and its learning on one decisive domain.

The other seven initiatives scored high on one measure and one measure only: "someone thought it seemed cool." That is not a source of power. That is a source of waste. The organization had the discipline to ask harder questions. After the power mapping exercise, the leadership team recognized that most of their AI initiatives were not aligned with any source of power. They were not amplifying strength, removing constraint, riding a wave, or achieving focus. They were spreading resources thinly across nice-to-haves. The decision, then, was clear: concentrate resources on the initiative with genuine strategic power and shelve the others, not because those initiatives were technically inferior, but because they were strategically unfocused.

This is the hard wisdom of strategy. It is mostly about what you don't do. It is about saying no to opportunities that, while technically sound or competitively interesting, are not sources of power. It is about having the discipline to concentrate where you have genuine advantage and to resist the siren song of doing everything at once. The companies that master this discipline—that apply AI with strategic discipline rather than strategic scatter—are the companies that will achieve lasting competitive advantage from AI. The companies that treat AI as a procurement problem, something to be deployed everywhere to chase every efficiency, will find themselves chasing trends rather than creating them.

The final chapter of this section moves from the question of where AI creates strategic power to the question of how you know if you've actually created it. Chapter 5 introduces the Advantage Test—a structured methodology for evaluating whether your AI initiative is actually creating competitive advantage or merely creating efficiency. It is the bridge between strategy and execution, between intention and impact.

 

CHAPTER FIVE

The Advantage Test

The Monthly Approval Machine

The third Tuesday of every month, the AI committee of Consolidated Metro Bank gathers in a glass-walled conference room on the forty-second floor. The agenda is always the same: review AI proposals, score them against a rubric, and decide which ones get funded. The meeting has become remarkably predictable. Almost every proposal that reaches the committee has been vetted by department heads, blessed by the chief technology officer, and blessed again by the finance team. Almost every proposal promises positive return on investment. And almost every proposal gets approved.

This year, the bank approved fourteen AI initiatives. Fourteen. The committee felt good about it. The AI strategy was working. The organization was transforming. Then the CEO, Margaret Chen, noticed something odd: their closest competitors were pulling ahead. Not by a little. By a lot. One competitor had approved fewer AI projects but was capturing market share in small business lending. Another had launched fewer initiatives overall but was winning on customer loyalty metrics. Still another was seeing superior loan portfolio performance despite having a smaller AI team.

"Something is wrong with our approach," Margaret said at the next strategy session. "We're doing more AI than most of these competitors, but they're ahead of us. We're approving everything that makes financial sense, but nothing is giving us an edge." The room fell silent. The chief technology officer started to defend the approval rate. The head of retail banking began explaining quarterly results. But Margaret wasn't satisfied with incremental justifications. The problem, she realized, wasn't that they were doing AI wrong. The problem was that they were doing too much of the wrong AI. They needed a filter—something more rigorous than ROI, something that could tell them not just whether an AI initiative would make money, but whether it would make them harder to beat.

• • •

Why ROI Is Not Enough

Return on investment is a powerful metric. It tells you whether something will make more money than it costs. For a bank evaluating an AI initiative, ROI sounds like the perfect lens. If a machine learning model that predicts default risk costs $2 million to build and deploy and saves $5 million per year in avoided losses, the ROI is clear and positive. Finance teams love it. Boards love it. It feels like a rigorous basis for decision-making.

But ROI has a fatal flaw when it comes to strategic AI: it answers the wrong question. ROI asks, "Will this make money?" It does not ask, "Will this make us harder to beat?" An AI initiative can clear the ROI hurdle and still be a strategic disaster. Consider a few examples. A bank might implement an AI-powered chatbot for customer service that generates a 25 percent cost reduction through automation. Positive ROI. Good business. Except the chatbot can be purchased from a software vendor by any competitor for the same price. The bank has not built a competitive advantage—it has simply adopted a commodity. Or consider an AI initiative to optimize branch staffing patterns. Profitable. Easy to implement. And easily replicated by any competitor with basic data science skills and access to the same historical data. Or a machine learning model that predicts which customers are most likely to churn. Yes, it has positive ROI. Yes, it generates value. But five competitors have built functionally identical models. When everyone can do the same thing, nobody has an advantage.

There is a deeper problem too. ROI is often measured on a single initiative in isolation. An AI investment to automate internal processes might have excellent ROI. But if it diverts a team of twenty data scientists away from a system that could create genuine competitive lock-in, the organization has made a strategic error. ROI does not account for opportunity cost. It does not ask whether approving this initiative prevents you from approving something better. It does not ask whether the resources invested in initiative A could compound more powerfully if invested in initiative B instead. Most dangerously, ROI does not distinguish between static advantages and compounding ones. A static advantage—something you do better today than competitors do—might have fine ROI but will erode over time as competitors catch up. A compounding advantage—something that gets stronger the more you use it—might have less dramatic first-year returns but will create a widening moat that becomes impossible to cross. ROI looks backward and sideways. It does not look forward at the trajectory of advantage.

• • •

The Four Tests of Strategic AI

In Chapter Four, we introduced Richard Rumelt's framework for identifying sources of power—the competitive advantages that actually matter. Those sources were leverage (amplifying an existing strength or removing a binding bottleneck), chain-link effects (strengthening the weakest link in the system), dynamics (positioning ahead of waves of industry change), and focus (concentrating resources where they compound). These insights provide the foundation for the Advantage Test, a four-part framework for evaluating AI initiatives. Unlike ROI, the Advantage Test asks whether an AI investment will genuinely strengthen competitive position. It forces teams to be rigorous about what they fund and, more importantly, what they kill.

The Advantage Test has four dimensions, and every AI initiative must score adequately on all four. If an initiative fails on even one dimension, it should not receive funding. This sounds harsh, but it is precisely this discipline that separates organizations that transform from those that merely tinker. Here are the four tests:

Concentration: Does this initiative address the critical challenge or opportunity you identified in Step One? Does it focus force where it matters most? An AI initiative might be technically brilliant and financially profitable, but if it does not connect to the central strategic challenge facing the organization, it is a distraction. It consumes energy and resources that should be directed toward the crux. Strategy is about choosing. It is about saying yes to a few things and no to everything else. If your critical challenge is that you are losing customers to superior mobile experiences, then an AI initiative to optimize warehouse logistics, no matter how positive its ROI, is not strategic. It is a pleasant sidetrack.

Inimitability: Would competitors struggle to replicate this capability within eighteen months? This is the question that separates advantage from table stakes. Many AI capabilities look impressive at first but collapse under scrutiny. Can your competitors simply license the same software? Can they hire people with similar skills? Do they have access to equivalent data? If the answer to any of these is yes, the capability is not defensible. Easy-to-copy AI is not a competitive advantage—it is the minimum ante to stay in the game. True inimitability usually requires a rare combination of factors: proprietary data that competitors cannot access, organizational knowledge embedded in systems and people, unique relationships or platforms, or network effects that grow stronger over time. When you honestly assess inimitability, most AI initiatives fail this test.

Coherence: Does this initiative function as part of a reinforcing system, or is it a standalone tool? This is the dimension most teams ignore, and it is the most important. A single AI capability, no matter how impressive, tends toward commoditization. It can be copied. It can be licensed. It can be hired away. But a system where multiple AI capabilities reinforce one another, where data flows between them, where insights from one feed intelligence into another, where the organization becomes more intelligent at every connection point—that system becomes harder to replicate. The difference is the difference between an impressive tactical move and strategic transformation. A bank might deploy an impressive loan approval model (standalone tool). But what if that model fed insights into customer relationship management systems, which fed insights into cross-sell targeting, which fed back into loan underwriting, creating a virtuous cycle where the bank's understanding of customers and their needs got better and better? That is coherence. That is a system. That is hard to replicate.

Compounding: Does the advantage grow over time, or does it stay static? Compounding advantages strengthen through data accumulation, organizational learning, customer switching costs, or network effects. A machine learning model that gets more accurate as it sees more data is compounding. A platform that becomes more valuable as more customers use it is compounding. An operational capability that your team learns to execute more efficiently over successive cycles is compounding. Static advantages—where you do something better than competitors do today but the gap does not widen—erode. Competitors catch up. Advantage flattens. The most valuable AI investments are those where advantage compounds: they get better, faster, stronger, more defensible the longer they run. These are the initiatives worth killing other things for.

• • •

Testing Five Insurance Proposals

To illustrate how the Advantage Test works in practice, consider a mid-sized insurance company, Regional Coverage Inc., reviewing five AI proposals in a single funding cycle. All five have positive ROI. All five are technically sound. But when evaluated through the Advantage Test, they tell very different stories.

Proposal One: Claims Processing Automation. The proposal is to implement an AI system that reads claim documents, extracts relevant information, and routes claims to appropriate adjusters. The system costs $800,000 to implement and will save $1.2 million per year in labor costs. ROI is strong. But examine it through the Advantage Test. On concentration, it is middling—claims processing is important but not the strategic crux of the business. On inimitability, it is disastrous. An identical system is available off-the-shelf from a software vendor. Every competitor can buy it. Within 18 months, not only will competitors replicate it; they will have better versions from multiple vendors competing on price. On coherence and compounding, similar issues. It is a standalone tool, not part of a larger system. The advantage (if it exists) is transient. This proposal should not receive funding, despite positive ROI.

Proposal Two: Fraud Detection Enhancement. The company has an existing fraud detection system. The proposal is to enhance it with a newer machine learning model that is 3 percent more accurate at identifying fraudulent claims. Cost: $400,000. Savings: $600,000 per year in prevented fraud losses. Again, positive ROI. Examined through the Advantage Test: concentration is okay (fraud reduction is important), but inimitability is poor. The underlying model uses standard techniques and standard data. Competitors can hire the same talent and build similar models. Coherence is weak—it is an isolated point solution. Compounding is modest. A 3 percent accuracy improvement might last six months before competitors match it. This proposal too should be declined.

Proposal Three: AI-Powered Risk Assessment Platform. This is a different beast. The proposal is to build a proprietary system that integrates underwriting, medical records data, actuarial analysis, and historical claim outcomes to generate superior risk assessments for life insurance underwriting. The system will be trained on 25 years of the company's claims data—a dataset competitors do not have access to. The cost is $2.5 million. The expected savings and improved underwriting margins are $4 million per year. On concentration: strong. Superior underwriting is the crux of insurance company competitive advantage. On inimitability: very strong. The system is trained on proprietary data that competitors cannot access or replicate for many years. On coherence: strong. The system feeds insights into product design, pricing, and claims prediction. On compounding: very strong. The more claims data the company collects, the better the model becomes. Competitors trying to replicate it face a moving target—their own data is years behind. This proposal passes the Advantage Test decisively. It should be funded.

Proposal Four: Customer Lifetime Value Prediction. Using AI to predict which customers will generate the highest lifetime value, enabling targeted retention and service investments. Cost: $600,000. Expected payoff: $900,000 per year in improved retention economics. The ROI is positive. But on the Advantage Test: concentration is weak (customer retention is important but not the core strategic challenge), inimitability is poor (competitors can build identical models), coherence is modest (it improves one dimension of customer management but does not integrate deeply with other systems), and compounding is limited. This is a proposal that looks good on paper but fails to pass the test. It should not be funded.

Proposal Five: Integrated Customer Intelligence System. This proposal is more ambitious. It seeks to build a unified platform that pulls in all customer data—claims history, policy details, agent notes, customer communications, social media signals where available—and uses AI to identify risks, opportunities, and next-best actions for the insurance company's relationship with every customer. The system would feed insights into pricing algorithms, claims handling, product recommendations, and agent coaching. Cost: $4 million. Expected ROI is more difficult to estimate (it is not a cost-reduction play but a revenue and margin improvement system), but conservatively, the company estimates $5 million per year in additional value. On concentration: very strong. Superior customer understanding is increasingly strategic in insurance. On inimitability: strong, though not airtight. Competitors could build something similar, but the company's historical data and agent relationships give it an edge. On coherence: extremely strong. The system is designed to integrate across all customer-facing processes. On compounding: very strong. The more customer data the system processes, the smarter it becomes. This proposal passes the Advantage Test decisively, though with less certainty than Proposal Three. It should be prioritized for funding, though perhaps after Proposal Three.

• • •

How to Score and Prioritize

The Advantage Test becomes operational through a straightforward scoring methodology. For each initiative, score each of the four dimensions—concentration, inimitability, coherence, compounding—on a scale of one to five. A score of one means the initiative fails badly on that dimension. A score of five means it excels. A score of three is adequate. The critical rule is this: an initiative must score at least three on every single dimension. If it scores one or two on any dimension, reject it outright. Do not average across dimensions. An initiative that scores five on concentration, inimitability, and coherence but one on compounding should be rejected, not averaged into a three-point-five rating. The reason is simple. A strategy is only as strong as its weakest element. An initiative that will create an advantage today but erode tomorrow is not worth the investment.

For initiatives that pass (score three or higher on all dimensions), use the total score to prioritize. An initiative that scores five, five, five, four (total nineteen) should be funded before one that scores three, three, three, three (total twelve). The total score tells you not just whether an initiative is worth doing, but how much better it is than other alternatives. The scoring guide below provides rough calibration for what each score level means (a five is simply a stronger version of the level-four description):

Concentration
4 - Directly addresses the central strategic challenge.
3 - Clearly connected to the strategic challenge.
2 - Contributes to the strategic challenge indirectly.
1 - Tangential or unrelated to core challenge.

Inimitability
4 - Competitors would struggle for 3+ years to replicate.
3 - Competitors would need 18-36 months to replicate.
2 - Competitors could replicate in 12-18 months.
1 - Competitors could replicate in months or copy from vendors.

Coherence
4 - Integrates deeply with multiple business functions.
3 - Integrates meaningfully with other initiatives.
2 - Touches multiple functions but not deeply.
1 - Standalone tool with limited integration.

Compounding
4 - Advantage strengthens measurably over time.
3 - Advantage grows, though at a slow rate.
2 - Advantage stable or growing very slowly.
1 - Advantage static or eroding.

This scoring method forces intellectual honesty. It requires teams to ask hard questions and to defend their answers. It prevents the organizational bias toward saying yes. And it surfaces the priorities that will actually create advantage.
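As a worked illustration of the rule just described, the sketch below encodes the two steps: reject any initiative that scores below three on any single dimension, then rank the survivors by total score. The proposal names and scores are hypothetical.

```python
# Minimal sketch of the Advantage Test scoring rule (hypothetical proposals).

THRESHOLD = 3  # an initiative must score at least 3 on every dimension

def evaluate(scores: dict) -> tuple[bool, int]:
    """Return (passes, total). A single weak dimension fails the initiative;
    weak dimensions are never averaged away."""
    passes = all(s >= THRESHOLD for s in scores.values())
    return passes, sum(scores.values())

proposals = {
    "Risk assessment platform":     {"concentration": 5, "inimitability": 5, "coherence": 4, "compounding": 5},
    "Claims processing automation": {"concentration": 3, "inimitability": 1, "coherence": 2, "compounding": 2},
}

funded = []
for name, scores in proposals.items():
    ok, total = evaluate(scores)
    print(f"{name}: {'pass' if ok else 'reject'} (total {total})")
    if ok:
        funded.append((total, name))

# Among the proposals that pass, fund in order of total score.
for total, name in sorted(funded, reverse=True):
    print(f"Priority: {name} ({total})")
```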

• • •

Six Common Mistakes in Applying the Test

The Advantage Test sounds simple in theory. In practice, organizations consistently make the same errors when applying it. Being aware of these mistakes can help you avoid them.

First mistake: Scoring generously because the team is emotionally invested. A team has spent six months designing an AI proposal. They believe in it. They have sold it internally. Now they are asked to score it against the Advantage Test. The natural bias is to score high. To find ways to justify the initiative. To tell themselves that yes, inimitability is actually a four because our implementation is especially clever. This requires vigilance. The evaluation team must include people who have not been involved in designing the proposal. Or better yet, bring in an external facilitator who can ask tough questions without organizational loyalty clouding the assessment.

Second mistake: Confusing technical sophistication with strategic advantage. A team proposes an AI system using advanced techniques—transformers, reinforcement learning, graph neural networks. The technical brilliance is real. But technical brilliance does not equal strategic advantage. A competitor with the same budget and talent can build something equally sophisticated. Unless that sophistication creates a defensible capability that competitors cannot easily match, it is not strategic. Technical excellence is necessary. It is not sufficient.

Third mistake: Ignoring the coherence dimension because standalone tools are easier. A team might reason: "We cannot predict how this will integrate with other initiatives five years from now, so we will just score it as a three and move on." But coherence is not about perfect prediction. It is about intentional design toward systems thinking. Do you design this initiative in isolation, or do you intentionally architect it to feed insights elsewhere? Do you build the connections or assume they will emerge later? Standalone tools rarely evolve into systems. Systems are intentionally built. If an initiative cannot be designed to cohere with the broader strategy, it probably should not be funded.

Fourth mistake: Failing to honestly assess inimitability because "our data is unique." This is perhaps the most common rationalization. A team argues that their data is proprietary, so competitors cannot replicate their model. But do competitors have access to the same raw data sources you do? If the data comes from standard banking systems, claims databases, or public sources (possibly enriched), then competitors can access similar data. They may not have your historical archive, but they can start building from today forward. In five years, if you have not built a compelling compounding advantage, the data gap narrows. Claiming uniqueness of readily-available data is wishful thinking. Real data advantages come from sources competitors genuinely cannot access: unique relationships with customers, proprietary sensors, exclusive partnerships, or network effects. Be skeptical of claims that data alone creates defensible advantage.

Fifth mistake: Scoring concentration too generously because the initiative is important. Yes, improving customer service is important. But is it the crux? The central strategic challenge? If you diagnosed that your critical challenge is superior product innovation and this initiative is about customer service, then concentration is weak, not strong. Importance and strategic centrality are not the same. Many important things should not be strategic priorities. An initiative must address the specific challenge you diagnosed, not merely contribute to a general area.

Sixth mistake: Underestimating how quickly competitors can respond. A team might argue that their inimitability score is high because competitors are not currently doing what they propose to do. But inimitability is about the eighteen-month horizon, not the current moment. Will competitors have hired similar talent by then? Will equivalent tools be available? Will the market have taught everyone your tricks? This requires a realistic assessment of how fast your industry moves and how capable your competitors are. In fast-moving industries like technology, eighteen months is a long time. In slower-moving industries like banking, it might not be. Be honest about your competitive environment.

• • •

The Discipline of Killing Good Ideas

If you apply the Advantage Test rigorously, you will discover something surprising: the most important output is not the list of initiatives you will fund. The most important output is the list of initiatives you will kill. Every initiative that fails the test is consuming resources—data science talent, cloud computing budget, executive attention, project management capacity. These resources are finite. If you do not kill a good-but-not-strategic initiative, you have implicitly decided not to fund a great-and-strategic one. The Advantage Test makes this tradeoff explicit and painful.

This is psychologically difficult. A proposal to be killed typically has advocates. A team spent months designing it. Stakeholders are rooting for it. Finance has penciled in the anticipated savings. The conversation about killing it is uncomfortable. And the benefits of killing it are not immediate. You do not see the magical AI system that would have been built with the freed resources if you are not actually building it. You only see that something people cared about is going away.

But this is where the organizations that transform separate themselves from those that merely tinker. Transforming organizations have the discipline to kill. They understand that strategy is about concentration. They understand that "good" and "strategic" are not the same. They understand that every initiative not funded is a conscious decision to invest more heavily in initiatives that are strategic. And they communicate this clearly.

One organization I worked with had approved nearly thirty AI initiatives across the company when the Advantage Test was introduced. When the test was applied rigorously, only eight passed. Eight out of thirty. The organization faced a choice: admit that the previous approval process had been flawed, or adjust the test downward to justify prior decisions. They chose honesty. The leadership team killed twenty-two initiatives. Most had positive ROI. Many had executive sponsors. But they did not align with the Advantage Test. The message the leadership sent was clear: we are serious about strategic transformation. We are not doing every good idea. We are doing the ideas that will make us harder to beat. Within eighteen months, the impact was visible. The eight initiatives that passed the test achieved better results than the twenty-two combined had achieved in prior years. Resources concentrated on strategy compound faster than resources scattered across initiatives that are merely good.

• • •

From Good Ideas to Strategic Advantage

Remember Margaret Chen and Consolidated Metro Bank? They had fourteen AI initiatives approved in a single year. Every one looked good. Every one had positive ROI. And every one was pulling the organization in different directions.

When Margaret insisted on applying the Advantage Test, the analysis was straightforward. The fourteen initiatives were evaluated against the four dimensions. Six passed. They scored three or higher on concentration, inimitability, coherence, and compounding. The other eight failed. Some failed on multiple dimensions. Eight initiatives that the committee had approved, that stakeholders were expecting, that budgets had been allocated for—had to be killed.

The organization redirected $15 million that had been budgeted for the eight killed initiatives to enhance and accelerate the six that passed. One of those six was particularly strategic: an AI-driven relationship intelligence platform that leveraged the bank's unique, proprietary database on small business customers accumulated over forty years. The system would use machine learning to identify relationships with growth potential, predict their capital needs, and suggest next-best-action conversations for account managers. It integrated with customer relationship management, lending systems, and treasury management. It compounded as more relationships fed into the model. It was eminently strategic.

Within eighteen months, the results were striking. Small business lending volumes grew 40 percent. Customer relationship depth and profitability increased measurably. Credit quality remained stable or improved. Most importantly, competitors could not figure out what was happening. They could not replicate the system because they lacked the historical relationship data. They could not hire the capability because the bank's team had become deeply integrated into the bank's systems. The advantage widened rather than narrowed. The Advantage Test had transformed a company that was doing AI from one that was doing too much AI into one that was doing the right amount of the right AI.

The lesson was not that more AI is bad or that the killed initiatives were worthless. The lesson was that saying yes to everything is the same as saying yes to nothing. Without a filter—without the Advantage Test—an organization will allocate resources to every idea that looks reasonable and achieve less than it would by concentrating ruthlessly on strategy.

Now you have the framework for deciding what to do. You have diagnosed the crux. You have sketched a strategy. You have a filter—the Advantage Test—for separating strategic opportunities from good-but-not-strategic ones. But strategy is not static. It unfolds in time. You cannot implement everything at once. You cannot even implement the six most strategic initiatives simultaneously without drowning in complexity. So what do you do first? How do you sequence? How do you make progress toward transformation without overwhelming the organization? That is the question Chapter Six takes up, through the lens of proximate objectives: how to choose the first step that makes possible the second step, which makes possible the third, in a sequence that compounds.

 

CHAPTER SIX

Proximate Objectives

• • •

The Roadmap That Went Nowhere

The CEO of a Fortune 500 consumer goods company stood before her entire organization on a Tuesday afternoon in March, eyes bright with conviction. Her presentation was polished, her vision clear: an 18-month artificial intelligence transformation roadmap that would fundamentally reshape how the company operated. She described AI-powered demand planning that would revolutionize inventory management, an autonomous supply chain that would respond to disruption in real time, and personalized marketing at scale that would transform customer relationships. The slides showed a confident trajectory, milestones neatly spaced, each one building toward a transformation that would move the company from legacy operations into the future. The executives in the room nodded. The technologists took notes. The investors would certainly like this.

Six months later, nothing had shipped. Not a single deliverable. Not even a pilot that had graduated from prototype to production. The roadmap had been ambitious — too ambitious, as it turned out — and its ambition had become its primary failure mode. Every major milestone was a year or more away; even the closest deliverable was still nine months down the road. No one in the organization could point to a proximate objective, a goal they could reasonably achieve with current capabilities and resources. The engineering team had begun to lose focus, fragmenting into competing initiatives that were supposed to build toward those distant milestones. The business leaders who had applauded the vision in March were now questioning whether AI was real or just another technology fad. And morale across the company had begun to crater, not because the vision was wrong, but because the team could see the destination but had no achievable next step. They could see the mountain but not the path.

This story plays out across hundreds of organizations every year. Not because the leadership is incompetent or the vision is flawed, but because building the roadmap is fundamentally different from building the capability to execute it. The two require different thinking. A roadmap built from an aspiration backward often fails to account for the prerequisites that don't yet exist — the clean data that hasn't been assembled, the AI-literate teams that haven't been hired or trained, the governance structures that haven't been built. What looks logical on a slide deck becomes impossible in practice because the intermediate steps assume capabilities the organization doesn't yet possess. The result is paralysis dressed up as strategy.

• • •

What Rumelt Means by Proximate Objectives

Richard Rumelt, one of the most influential strategic thinkers of our time, introduced the concept of proximate objectives in his book Good Strategy/Bad Strategy. The idea is deceptively simple: a proximate objective is close enough to be feasible. It's not a distant aspiration or a long-term vision — those are important but insufficient for execution. A proximate objective is something the organization can actually accomplish with its current capabilities, resources, and knowledge. It's within reach.

This distinction matters more than it might initially appear. Many organizations confuse vision with strategy, and stretch goals with executable plans. They set targets that are inspirational but impossible. A sales leader announces a goal to triple revenue in two years. A product leader commits to becoming the market leader by next quarter. These goals are framed as motivation, but they often have the opposite effect. They demoralize teams because teams can sense the gap between aspiration and reality. They understand, at some intuitive level, that the path doesn't exist. Stretch goals can work in stable, well-understood domains where you simply need to execute harder. But in transformation — especially in AI transformation — they collapse under the weight of unknowns.

A proximate objective, by contrast, acknowledges both the aspiration and the reality. It says: given where we are and what we know, what is the next meaningful step we can actually take? The power of this framing is that it creates momentum. When a team accomplishes a proximate objective, they learn. They build confidence. They acquire capabilities that make the next objective achievable. Early wins change organizational psychology. Skeptics become believers. Resources flow more easily. Talented people want to join rather than leave. This is not about lowering ambition — it's about sequencing ambition in a way that creates conditions for success.

A proximate objective is ambitious enough to matter, but close enough to be achieved. It's the next meaningful step, not the distant destination.

• • •

Why Most AI Roadmaps Fail

The failure modes of AI roadmaps are surprisingly consistent across organizations. They tend to follow a predictable pattern: the vision is clear, the business case is compelling, but the path from here to there is built on sand. The roadmap assumes capabilities that don't exist. It assumes teams that haven't been assembled. It assumes clean data that hasn't been collected. It assumes organizational maturity that hasn't been built.

Here's how it typically unfolds. A consulting firm or an internal strategist looks at the end state — where the organization wants to be — and works backward to create a roadmap. The end state might be: "An AI-powered supply chain that reduces inventory costs by 20% and responds to disruption in real time." That's a worthy goal. But the path to it depends on prerequisites that often remain implicit. It assumes the organization has clean, unified data across multiple systems. It assumes the supply chain team understands machine learning well enough to specify the problem. It assumes the company has built the governance structures to move a model from development to production. It assumes there's organizational trust in AI decision-making. None of these assumptions are usually stated explicitly, which means they're rarely examined.

The result is a roadmap that looks logical on paper but is impossible in practice. Each milestone depends on capabilities that are supposed to be built in parallel. The organization launches four major initiatives simultaneously, each one assuming that the other three are succeeding. When inevitable delays happen — when data integration takes longer than expected, when hiring is slower than projected — the entire roadmap cascades into failure. Dependencies weren't sequenced. Capabilities weren't built systematically. Resources were spread too thin. And the organization ends up in exactly the position that the CEO at the consumer goods company found herself: looking at a roadmap with nothing to show for six months of effort.

The pattern reveals itself when you ask a simple question: what can we ship in 90 days with current capabilities? If the honest answer is "nothing," the roadmap is broken. It's not a vision problem. It's not a commitment problem. It's a sequencing problem. The organization has confused having a destination with having a path.

• • •

The Three Horizons of AI Investment

The most practical framework for thinking about proximate objectives in AI transformation comes from adapting McKinsey's Three Horizons framework. McKinsey originally developed the model to help organizations balance investment in today's core business against building tomorrow's growth. It works equally well for AI investment because it forces you to think simultaneously about quick wins, strategic bets, and exploratory plays. Rather than treating all initiatives as equally urgent or distant, it creates explicit categories with different success criteria, timelines, and risk profiles.

Horizon 1: Quick Wins (0–90 Days)

Horizon 1 objectives are the closest to home. They're achievable within 90 days using capabilities and assets the organization already has. The goal is to build credibility, generate learning, and create organizational confidence in AI. A Horizon 1 initiative doesn't require new hiring, new data pipelines, or new infrastructure. It takes existing data and existing tools and applies them to a problem that matters. It produces a visible result — not just a plan or a proof of concept, but something that works in production.

Examples of Horizon 1 initiatives might include: predicting customer churn using historical transaction data and a standard machine learning framework, automating routine email classification using NLP models available off the shelf, detecting anomalies in production equipment using sensors that are already instrumented, or prioritizing sales leads based on buying signals in existing CRM data. None of these require revolutionary breakthroughs. All of them require clear business definition, good data, and straightforward execution. The power of Horizon 1 is not innovation — it's validation. You're proving that AI works in your organization. You're building teams that have shipped AI. You're creating a constituency of people who have seen value and want more.
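To make the Horizon 1 idea concrete, here is a minimal sketch of what a churn-prediction quick win might look like in code. It uses synthetic data as a stand-in for your own transaction history, and the feature set named in the comments is hypothetical; the point is that nothing here requires new infrastructure or exotic tooling.

```python
# Minimal sketch of a Horizon 1 churn-prediction quick win.
# Synthetic features stand in for transaction history pulled from existing systems.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder for features such as recency, frequency, spend, and support tickets.
X, y = make_classification(n_samples=5_000, n_features=8, weights=[0.85], random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
churn_risk = model.predict_proba(X_test)[:, 1]

print(f"Holdout AUC: {roc_auc_score(y_test, churn_risk):.2f}")
# In production the output would be a ranked list of at-risk customers
# handed to the retention team, not a metric on a slide.
```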

Horizon 2: Strategic Bets (3–12 Months)

Horizon 2 objectives require investment, but they have a clear path forward. These are initiatives that will build new capabilities and create competitive advantage. A Horizon 2 project might require hiring specialists, investing in new infrastructure, or collecting new data. But the required work is understood. You're not exploring unknown territory — you're building something you've already learned is possible. Horizon 2 initiatives are typically informed by successful Horizon 1 projects. A quick win in demand sensing (H1) might graduate to a more sophisticated demand forecasting system that spans multiple business units and product lines (H2). A proof of concept in anomaly detection might scale to a comprehensive equipment monitoring system (H2).

The difference between Horizon 2 and Horizon 1 is not the technology — it's the scope, the team, and the investment required. Horizon 2 projects take longer because they're more ambitious. They're worth the investment because they build defensible competitive advantage. But they still have enough definition at the start to be feasible. You know roughly what the problem is, what the solution should look like, and what skills you need. This is the sweet spot for strategic AI investment.

Horizon 3: Exploratory Experiments (12+ Months)

Horizon 3 is where you explore emerging technologies, test uncertain approaches, and prepare for future disruption. A Horizon 3 project might involve experimenting with large language models for new applications, testing novel approaches to personalization, or exploring autonomous systems in domains where they haven't been used before. The success criteria are different. You're not expecting all Horizon 3 projects to graduate to production. You're expecting most of them to fail, but you're learning from the failure. Horizon 3 is where you find the occasional surprise that becomes tomorrow's competitive advantage.

The critical thing about Horizon 3 is that it should never be funded at the expense of Horizons 1 and 2. Organizations that over-invest in exploratory experiments while neglecting quick wins and strategic bets tend to have lots of interesting research but no actual AI transformation. Conversely, organizations that get stuck in Horizon 1 efficiency plays forever never develop the more sophisticated capabilities that create defensible advantage. The balance matters.

Portfolio Allocation

In the early stages of AI transformation, a reasonable allocation across the three horizons is approximately 50 percent of resources and attention on Horizon 1 quick wins, 30 percent on Horizon 2 strategic bets, and 20 percent on Horizon 3 exploratory experiments. This allocation front-loads the building of organizational confidence and capability. It ensures that momentum is created through early success. As the organization matures in its AI capability, this allocation can shift. A more mature AI organization might allocate 40 percent to H1 (fewer quick wins needed, but still important for continuous learning), 40 percent to H2 (the bulk of strategic advantage building), and 20 percent to H3 (maintaining awareness of emerging possibilities).
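As a simple illustration of the arithmetic, the sketch below turns the allocation guidance into a small helper. The budget figure and the "early" versus "mature" splits are the illustrative numbers from this section, not a prescription.

```python
# A small sketch of the portfolio allocation described above. The budget figure
# and the "early" versus "mature" splits are illustrative, not prescriptive.
def allocate_portfolio(total_budget: float, stage: str = "early") -> dict:
    splits = {
        "early":  {"H1": 0.50, "H2": 0.30, "H3": 0.20},
        "mature": {"H1": 0.40, "H2": 0.40, "H3": 0.20},
    }
    return {horizon: total_budget * share for horizon, share in splits[stage].items()}

print(allocate_portfolio(10_000_000, "early"))
# e.g. {'H1': 5000000.0, 'H2': 3000000.0, 'H3': 2000000.0}
```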

• • •

Designing Proximate Objectives

Having a framework is useful, but designing actual proximate objectives requires a more rigorous methodology. The following approach has proven effective across many organizations:

Step 1: Start with Advantage

Proximate objectives should only be built around priorities that have passed the Advantage Test from Chapter 5. You're not trying to solve every problem — you're trying to solve problems where AI creates defensible advantage. If an initiative would provide only marginal advantage or if the advantage is easily replicable by competitors, it's not a good candidate for a proximate objective. You want to invest your Horizon 1 effort in problems where AI solves something that's hard for others to copy.

Step 2: Define the Smallest Meaningful Step

For each priority that passed the Advantage Test, ask: what is the smallest step we could take in 90 days that would be genuinely useful? This is different from a pilot or a prototype. Pilots often stay in protected environments. Prototypes prove a concept but don't solve a real problem. A meaningful step produces a result that actual users or business stakeholders can use. It doesn't have to be perfect or comprehensive. But it has to work. It has to deliver value. If you're building an AI system for demand sensing, the smallest meaningful step might be: predict demand for the top 20 SKUs using the last 24 months of sales data and deliver a daily forecast that the demand planning team uses to make inventory decisions. That's specific. That's achievable in 90 days. And it creates genuine business value.

Step 3: Verify the Prerequisites Exist

Before you commit to a proximate objective, verify that the prerequisites exist. Do you have access to the required data? Not perfect data, but usable data. Do you have the people with the right skills? Not necessarily a massive team, but enough to do the work. Do you have access to the decision-makers who can use the output? Do you have the compute infrastructure required, or is it available? This is where many proximate objectives fail — they're designed with an implicit assumption that someone will figure out the prerequisites. They won't. The prerequisites need to be confirmed upfront.

Step 4: Ensure the Learning Transfer

A proximate objective should generate learning that informs the next objective. That means building in reflection and documentation. When the demand forecasting pilot succeeds, what have you learned about data quality? About the gaps between how the organization thinks about demand and how the data reflects it? About the team's appetite for data-driven decision making? The learning from Horizon 1 should directly shape what Horizon 2 looks like. If the learning transfer isn't happening, you're just doing random projects instead of building a transformation.

Let's make this concrete with three examples. The first is from a manufacturing company that chose demand forecasting as their Horizon 1 objective. They had good historical data, a clear business problem, and a demand planning team ready to partner. The 90-day objective was: "Build an ML model to predict weekly demand for our top 50 products using historical sales data and deliver a confidence-ranked forecast that reduces forecast error by at least 15 percent compared to the current method." That's specific. That's measurable. And it's achievable with existing people and infrastructure. The learning from this project — about data quality, about the relationship between historical patterns and actual demand, about how to structure a human-AI workflow — then informed the more ambitious Horizon 2 objective: expand demand forecasting to all products, across all regions, incorporating external signals like weather and promotions.
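For readers who want to see how the success criterion in that objective might actually be checked, here is a hedged sketch that compares forecast error for the new model against the current method. The demand figures are invented for illustration, and mean absolute percentage error is just one reasonable choice of error metric.

```python
# A sketch of how the "reduce forecast error by at least 15 percent" criterion
# might be checked, assuming arrays of actual weekly demand, the current
# method's forecasts, and the model's forecasts for the same weeks.
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast) / actual) * 100

actual       = [120, 95, 140, 110, 160]   # illustrative weekly demand
current_fcst = [100, 110, 120, 130, 140]  # existing planning method
model_fcst   = [115, 100, 135, 115, 150]  # new ML model

baseline, candidate = mape(actual, current_fcst), mape(actual, model_fcst)
improvement = (baseline - candidate) / baseline * 100
print(f"Baseline MAPE {baseline:.1f}%, model MAPE {candidate:.1f}%, "
      f"improvement {improvement:.1f}%")
```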

The second example is from a healthcare provider that chose anomaly detection in patient data as their Horizon 1 objective. The proximate objective was: "Build a system to flag patients with unusual vital sign patterns as a screening tool for a specific high-risk condition, with 95 percent sensitivity and review by clinical staff before any action." The goal wasn't to replace clinical judgment — it was to augment it, to help clinicians catch cases they might otherwise miss. Again, specific, achievable, learnable. The Horizon 2 expansion involved applying similar anomaly detection to other conditions and other data types.
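The sensitivity target in that objective translates into a concrete engineering choice: where to set the alert threshold. The sketch below shows one simple way to pick a threshold on a validation set so that at least 95 percent of known positive cases are flagged for clinician review; the scores and labels are invented for illustration.

```python
# A sketch of the "95 percent sensitivity, human review before action" idea:
# pick the lowest alert threshold on a validation set that still catches at
# least 95 percent of known positive cases, then route flagged patients to
# clinicians rather than acting automatically.
import numpy as np

def threshold_for_sensitivity(scores, labels, target=0.95):
    scores, labels = np.asarray(scores), np.asarray(labels)
    positives = np.sort(scores[labels == 1])
    # Flag everything at or above this score to catch >= target of positives.
    return positives[int(np.floor((1 - target) * len(positives)))]

# Illustrative validation data: anomaly scores and confirmed outcomes.
val_scores = np.array([0.12, 0.80, 0.45, 0.91, 0.30, 0.77, 0.55, 0.95])
val_labels = np.array([0,    1,    0,    1,    0,    1,    0,    1])

t = threshold_for_sensitivity(val_scores, val_labels)
flagged = val_scores >= t
print(f"threshold={t:.2f}, flagged {flagged.sum()} of {len(val_scores)} for clinical review")
```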

The third example is from a retail company that chose recommendation optimization as their Horizon 1 objective. The specific goal was: "Improve email campaign click-through rates by 10 percent using a collaborative filtering model trained on customer browsing history." Not revolutionizing personalization — not yet. Just one channel, one model, one clear metric. Once it succeeded, it led to a Horizon 2 expansion into website personalization, mobile app personalization, and more sophisticated models.
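For the technically curious, a collaborative filtering model of the kind described here can start very simply. The sketch below computes item-to-item similarity from a small, invented browsing matrix and uses it to choose which products to feature in a customer's next email; a production system would obviously work at far larger scale, but the logic is the same.

```python
# A compact sketch of the collaborative-filtering idea: item-item cosine
# similarity on a customer x product browsing matrix, used to pick which
# products to feature in a customer's next email. The data is illustrative.
import numpy as np

# Rows = customers, columns = products; 1 means the customer browsed the product.
browsed = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
])

norms = np.linalg.norm(browsed, axis=0)
item_sim = (browsed.T @ browsed) / np.outer(norms, norms)
np.fill_diagonal(item_sim, 0)  # don't recommend an item because of itself

def recommend(customer_idx, top_n=2):
    scores = browsed[customer_idx] @ item_sim      # affinity to each product
    scores[browsed[customer_idx] == 1] = -np.inf   # skip already-browsed items
    return np.argsort(scores)[::-1][:top_n]

print("Feature products", recommend(0), "in customer 0's next email")
```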

• • •

The Momentum Paradox

Here's a counterintuitive truth about AI transformation: the biggest mistake most organizations make is not going too slow — it's trying to go too fast. They launch eight Horizon 1 initiatives simultaneously. They promise the board aggressive timelines. They commit resources to projects that are still being defined. And they end up with nothing shipped, morale collapsing, and the whole transformation in question. The path to moving faster is to deliberately slow down and think about sequencing.

The paradox emerges because of how organizations build momentum. One completed initiative creates more momentum than five half-finished ones. When demand forecasting launches, the model moves into production, and the demand planning team starts using it and seeing value, something shifts in the organization. The skeptics who said "AI doesn't work here" have to rethink. The budget holders who were skeptical become believers. The technologists who doubted whether anyone cared suddenly see a clear path to impact. And — this is the key — the next initiative gets easier. There's a template. There's a team that's shipped. There's evidence that it's possible. The second project that would have taken 120 days when it was theoretical now takes 80 days because the path is clearer.

Contrast this with the organization that tries to launch five projects simultaneously. Resources get spread thin. Dependencies multiply. Each team is waiting for another team. When inevitable delays happen — and they will — the cascading impact is devastating. The organization hasn't proven anything. There's no evidence of success. There's only evidence of overcommitment. The skepticism deepens. The next round of funding becomes harder. The talent starts to leave because the project feels dysfunctional. The paradox resolves itself when you understand that speed is not a function of ambition — it's a function of completed, working projects. The way to accelerate is to ruthlessly sequence, to do fewer things, to complete them, and to use the learning and momentum from completion to inform the next wave.

One shipped initiative creates more momentum than five half-finished ones. Counterintuitively, the way to go faster is to go slower and sequence deliberately.

This is also why early wins matter so much for organizational psychology. The first successful AI project changes how people think about what's possible. The demand forecasting system that ships six weeks early, works better than expected, and starts delivering value immediately sends a message: this is real. This works. We can do this. That message is worth more than a perfectly optimized roadmap that never ships anything. It's worth more than ambitious goals that are still theoretical nine months later. It's the psychological foundation that allows the organization to take on bigger challenges in Horizon 2.

• • •

Governing the Portfolio Across Horizons

Once you've defined proximate objectives across the three horizons, the question becomes how to manage them as a portfolio. This requires explicit governance that most organizations don't have in place. Governance in this context doesn't mean bureaucracy — it means clarity about how decisions get made and how initiatives move between horizons.

The first governance principle is that Horizon 1 success should feed Horizon 2 planning. When a quick win is successfully completed, there should be a formal review to ask: does this reveal something that justifies a larger investment? If demand forecasting works better than expected, should we expand it? What would Horizon 2 expansion look like? What would it cost? How would it change competitive positioning? This sounds obvious, but most organizations don't do this systematically. They complete a project, declare success, and then scatter to new objectives instead of building on what worked.

The second principle is that Horizon 3 experiments should remain genuinely experimental. Don't fund them by cutting corners on Horizon 1 and 2. Don't pressure them to prove business value when their value is learning. Create space for exploration, but protect that space with clarity about what success looks like. A Horizon 3 experiment succeeds if it generates learning, even if the product itself doesn't ship. This is hard for many organizations to accept because they're used to holding everything to a commercial success criterion. But exploratory work requires a different mindset.

The third principle is active portfolio rebalancing. If Horizon 1 initiatives are all being captured by operational work, if everything is maintenance and no new capability is being built, the portfolio is out of balance. If Horizon 3 experiments are stalling because they lack focus, kill them. If Horizon 2 projects are slipping because the prerequisites didn't materialize, go back to Horizon 1 to build them. The portfolio should be actively managed, with quarterly reviews to assess health, to ask whether the mix is right, and to redirect resources as needed.

• • •

The Payoff of Sequencing

Remember the consumer goods company CEO whose 18-month roadmap had delivered nothing after six months? Her organization eventually figured this out. They didn't abandon the vision. They reframed it. They replaced the ambitious 18-month roadmap with a series of 90-day sprints, each one building on the last. The first sprint focused on a single, proximate objective: AI-assisted demand sensing for the top 20 SKUs using existing sales data. It was narrow. It was achievable. And it worked. Six weeks ahead of schedule, actually. The demand planning team started using the model. It wasn't perfect, but it was useful. It reduced forecast error by 12 percent — not huge, but real.

That success unlocked everything else. The second sprint expanded demand sensing to the top 100 SKUs. The third sprint added external signals. The fourth sprint moved into territory that had seemed impossible months earlier — autonomous supply chain adjustments, where the system didn't just predict demand but could automatically adjust orders and shipments within pre-approved parameters. Eighteen months later, the company had achieved more than the original roadmap promised. Not because they moved faster, but because each step made the next one possible. The learning from the first objective informed the second. The team that shipped demand sensing became the foundation for the autonomous supply chain work. The organizational confidence that came from an early win created space for bigger bets.

This is the power of proximate objectives. They're not about lowering ambition. They're about sequencing ambition in a way that creates conditions for success. They're about understanding that transformation is not a sprint — it's a series of sprints, each one building toward something larger. And they're about recognizing that the best way to reach an ambitious destination is not to stare at it from a distance, but to take one achievable step forward, complete it, and then let what you've learned shape the next step.

With priorities tested for advantage and objectives sequenced from proximate to strategic, the next question becomes: how do you build the capability to execute? Should you hire AI specialists, acquire capabilities from vendors, or partner with experts? Should you build in-house or outsource? These questions shape everything that comes next. And they're the focus of Chapter 7.

 

CHAPTER SEVEN

Build, Buy, or Partner

• • •

A fintech startup spent eighteen months building a custom fraud detection model. A competitor bought an off-the-shelf solution and launched in three months. But two years later, the startup's model — trained on their unique transaction data — outperformed the competitor's by 40 percent. The competitor was stuck with a commodity tool that every other fintech also had. The question isn't which approach is faster. It's which approach is more strategic.

This opening scenario reveals something profound about AI capability decisions that most organizations still get wrong. They frame the build-versus-buy choice as a false binary: either invest heavily in custom solutions or accept the speed and simplicity of off-the-shelf tools. Neither framing captures the strategic reality. The fintech didn't succeed because building is inherently better than buying, and the competitor didn't fail because buying is inherently worse. Each outcome made sense in its own context: the startup succeeded because it built something that mattered, something connected to its crux. The competitor failed because it bought something that was table stakes and expected it to be a source of differentiation. This chapter explores how to think about this decision correctly.

• • •

1. The Build Temptation

Building custom AI is seductive. It promises differentiation, ownership, and control. When you build, the capability stays inside the four walls of your organization. You own the intellectual property. You shape the roadmap. You're not dependent on a vendor's priorities or a competitor's ability to buy the same tool. For executives who remember when building core systems was the only path to competitive advantage, the build instinct is powerful and deeply familiar.

But building is expensive, slow, and dependent on talent most companies can't attract or retain. A small machine learning team can easily cost $1–2 million per year in fully loaded compensation. A research scientist costs even more. Development cycles are long — expect 18–36 months to build a production system that actually works, not just a prototype that looks impressive in a demo. And the attrition risk is real. Your best ML engineers are constantly recruited by well-funded startups and hyperscalers offering equity and prestige. You build a capability, and two years later, the person who understands it best leaves for a role at OpenAI. Now you own the model, but you don't own the knowledge to improve it.

The decision to build must therefore rest on strategic criteria, not just control preferences. Build custom AI when three conditions converge. First, the AI capability sits at the heart of your competitive advantage — not a nice-to-have, but a must-have, something that changes the core of how you compete. For a logistics company, custom route optimization that learns from real-world performance data is a build candidate. For a law firm, custom contract analysis that understands the firm's precedents and negotiating preferences is a build candidate. Second, you have (or can get) proprietary data that makes a custom model fundamentally better than what's available off the shelf. This is critical. A model is only better if it has data that others don't. The fintech startup had 18 months of their own transaction data. Their competitor didn't. Third, you have or can recruit the talent to execute. This means not just hiring ML engineers today, but building a retention strategy to keep them engaged and growing for years. If any of these three conditions is weak, building becomes increasingly risky.

When NOT to build is equally important. Don't build when the capability is table stakes — when it's foundational but not differentiating. Email filtering is no longer a build candidate for any company. No one invests in building their own spam detection from scratch anymore. The capability is solved. Building replicates work that's already been done well elsewhere. Don't build when off-the-shelf is 90% as good. This is the hard threshold. If a vendor solution gets you to 90% of the value at 20% of the cost, the 10% gap may not justify the investment, especially if that 10% doesn't connect to your crux. Don't build when your data isn't actually differentiated. Some companies believe their data is proprietary when it's not. A health system that assumes its patient data will yield better ML models than academic medical centers with vastly larger datasets is probably fooling itself. Don't build when you can't retain the talent. If your market can't compete with the Bay Area on compensation and prestige, building creates a dependency on people you can't keep.

• • •

2. The Buy Trap

Buying AI tools is fast and low-risk. You negotiate a contract, implement the software, and in weeks (not years) you have a working system. Your CFO is happy because capital expenditure is lower than a multi-year build. Your board is happy because you're moving fast. Your team is happy because they're not managing the complexity of ML infrastructure. But buying creates a different danger, one that sneaks up slowly: dependency on vendors who control the capability, and commoditization because your competitors can buy the same tool.

The vendor lock-in risk is real and often underestimated. Once you've integrated a vendor's AI model into your operations, switching costs become enormous. Your processes are built around the tool's output format. Your people are trained on the tool's interface. Your data pipeline connects to the vendor's API. If the vendor raises prices 50 percent at contract renewal, what do you do? Rip out the system and rebuild? That costs more than accepting the price increase. The vendor knows this. Over time, their pricing power grows and your negotiating position weakens. This dynamic is why so many organizations feel trapped by the enterprise software they bought decades ago — each migration, from mainframe to client-server to cloud, has simply traded one vendor's lock-in for another's.

The commodity risk is subtler but more strategically dangerous. If you can buy the fraud detection tool, so can your competitors. If you can buy the demand forecasting tool, your supply chain rival can too. Buying a capability that competitors can also buy is only strategically sound if that capability isn't part of your differentiation strategy. It's table-stakes work, not edge-work.

When to buy is therefore clear: for non-strategic functions, for proven use cases where differentiation isn't the goal, and when speed to market matters more than uniqueness. If your company competes on design, not on invoice processing, buy your invoice automation. If you compete on brand, not on email marketing automation, buy your email platform. Apply the chain-link lens from Chapter 6: if this capability isn't a strategic link in your chain, buy it and move on. Free up your build capacity for the things that matter. This is not a failure. This is strategy. Strategy is about choosing what NOT to do as much as what to do.

But many organizations get this backwards. They buy expensive, complex AI systems for strategic functions and then spend millions trying to customize them. They invest in a vendor's black-box fraud detection tool but keep a team of data scientists trying to reverse-engineer it and tweak it. This wastes money and creates resentment. Either buy truly off-the-shelf tools for truly non-strategic capabilities, or commit to building or partnering for the things that matter. Half-building a bought system is the worst of both worlds — you get the limitations of the vendor's roadmap and the costs of building.

• • •

3. The Partnership Middle Ground

Partnerships offer a middle path between full build and full buy. You might partner with an AI startup that has built a specialized model in your domain. You might partner with a university lab to gain access to cutting-edge research. You might partner with an ecosystem player — a consulting firm, a platform company, or a systems integrator — to access capability you don't have internally. Partnerships promise access to capability without the full build cost and without the vendor lock-in of a pure buy.

But partnerships have hidden costs that often make them deceptively expensive. Misaligned incentives are the first problem. The AI startup you partner with might be growing quickly and come to see you as a small customer. A year into the partnership, they get acquisition interest from a hyperscaler. Your partnership terms suddenly don't look like a priority. The university lab might pivot to different research when funding dries up. The consulting firm you partnered with learns your business intimately, and then bids against you on a new contract. These aren't failures of individual partners — they're structural features of partnerships. Everyone has their own agenda, and alignment is temporary.

IP ambiguity is the second hidden cost. Who owns the model? If you fund the development, do you own the IP, or does the partner? If the partner owns it, can they license it to your competitors? If you own it, what happens when the partnership ends? Many partnerships stumble on IP terms because neither party anticipated who would own what. A financial services company once partnered with an AI firm to build a lending model. The agreement was vague on IP. When the partnership ended, both parties claimed ownership. Neither could move forward without legal action. Three years of work sat unused while lawyers fought.

Dependency risk is the third hidden cost. By partnering rather than building or buying, you create an organizational dependency on people outside your control. If the partnership is going well, your team gets lazy about understanding the capability — they rely on the partner to maintain it. If the partnership ends, you suddenly need to rebuild internally or switch to a vendor, and your team is out of practice. This is particularly dangerous for Horizon 2 work, where you're building new capabilities that might become strategic later. A partnership that provides today's capability but leaves you dependent and unskilled is strategically riskier than it looks.

The danger that your partner becomes your competitor is real and should not be ignored. Consulting firms build deep expertise in your business, and some of that expertise might be applied to help your competitors. AI startups learn what matters in your industry and might build products that disrupt your business. Universities publish research, and your competitors read it too. Partnerships are not the same as M&A or full acquisition. You're not in control, and you should plan accordingly.

When to partner is therefore specific: when you need capability you can't build but shouldn't just buy — particularly for Horizon 2 and 3 initiatives. If you're exploring a new market or a new capability that might become strategic later, partnering lets you learn and validate without full commitment. If you're in a domain where specialized expertise is rare (rare disease diagnosis, for example), partnering with the research institutions that have the data and expertise might be your only path. But structure partnerships carefully. Insist on clear IP agreements that anticipate what happens if the relationship ends. Include sunset clauses — specific dates when the partnership obligation expires and both parties can move on. Require knowledge transfer plans so that your team learns what the partner knows. Pay attention to the learning, not just the output. The goal is that when the partnership ends, your organization is richer in capability than before.

• • •

4. The Decision Framework

A structured decision tree can guide build-buy-partner choices and reduce the temptation to let emotions or politics drive the decision. Start with the first question: "Is this AI capability at the core of our competitive advantage?" This is the foundation. If yes, you are in build or build-like territory. The capability matters too much to outsource completely. If no, move to the second question.

The second question is: "Is differentiation important at all?" If the answer is yes — meaning this capability matters for competitive advantage but is not at the core — then partner or partner-to-build. You need some degree of control and ownership, but full build isn't necessary. If the answer is no — meaning the capability doesn't matter for differentiation — then buy. It's table stakes. Get it done quickly and cheaply.

Then overlay practical constraints. Talent availability: Can you recruit and retain the people needed to build? Timeline pressure: Do you need results in months or years? Data readiness: Do you have proprietary data, or do you need the vendor's data? Budget: Can you afford a multi-year build, or do you need speed? These constraints might modify the decision. A capability that looks like a "build" might become a "partner" if you can't retain talent. A capability that looks like a "buy" might become a "partner" if buying requires a long implementation timeline and you need learning and customization.
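Because the decision tree is easy to state and easy to fudge, some teams find it useful to write the logic down explicitly. The sketch below encodes the two questions and the practical overrides as a small function; the field names are hypothetical, and the point is the structure of the argument, not the code itself.

```python
# A sketch of the build-buy-partner decision tree as code, so the logic can be
# argued over explicitly. The questions and overrides mirror the framework
# described above; the field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Capability:
    name: str
    core_to_advantage: bool      # Question 1: at the core of competitive advantage?
    differentiating: bool        # Question 2: does differentiation matter at all?
    can_retain_talent: bool      # practical constraint
    has_proprietary_data: bool   # practical constraint

def build_buy_partner(c: Capability) -> str:
    if c.core_to_advantage:
        decision = "build"
    elif c.differentiating:
        decision = "partner"
    else:
        return "buy"  # table stakes: get it done quickly and cheaply
    # Overlay the constraints: a 'build' without retainable talent or
    # proprietary data drifts toward 'partner'.
    if decision == "build" and not (c.can_retain_talent and c.has_proprietary_data):
        decision = "partner"
    return decision

print(build_buy_partner(Capability("fraud detection", True, True, True, True)))       # build
print(build_buy_partner(Capability("content moderation", False, False, True, False)))  # buy
```

Used this way, the constraints are not an afterthought; they are a recorded part of the decision.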

Example 1: A manufacturing company decides it needs demand forecasting. This isn't at the core of their competitive advantage — supply chains are, but forecasting is upstream. It's not differentiation work. But they have detailed historical sales data that might improve forecast accuracy. The decision: buy an established forecasting tool (fast, proven, low-risk), but build a custom layer that incorporates their proprietary data into the vendor tool's output. They get speed, but also edge.

Example 2: A healthcare system decides it needs diagnostic support for a rare disease. This IS at the core of their competitive advantage in that specialty. But they have limited patient data compared to academic medical centers. The decision: partner with a research institution that has the data and expertise. The partnership includes knowledge transfer, so the health system's radiologists become better trained. If the partnership ends, the organization is better skilled.

Example 3: A consumer software company decides it needs content moderation. This is important, but not differentiation work — everyone in the industry needs content moderation. The decision: buy a content moderation platform. Don't build. Don't partner. Buy from a vendor that has scale and can iterate on the problem faster than you can alone.

• • •

5. Hidden Costs Nobody Talks About

The true cost of each approach includes things that don't appear on the initial business case, the ROI spreadsheet, or the proposal presentation. These hidden costs accumulate over years and often exceed the initial investment.

For build approaches, the hidden costs include ongoing maintenance and model drift. A model trained today on this year's data will degrade in accuracy next year. Markets shift, customer behavior shifts, and the patterns the model learned are no longer true. Monitoring systems need constant care. When the model degrades, you need resources to understand why and rebuild it. Talent retention is another hidden cost. The person who trained the model becomes indispensable. If they leave, you lose institutional knowledge. Opportunity cost is the hardest to measure but the most real. Every engineer working on a build that might have been a buy is an engineer not working on something more strategic. The longer the build cycle, the more real the opportunity cost becomes.

For buy approaches, the hidden costs include vendor lock-in and pricing power over time. What costs $100,000 in year one might cost $500,000 in year five, because you're locked in. Customization limitations become more painful over time. As you learn what works, you want to adapt the tool, but it's designed for a standard workflow, not your workflow. Upgrade dependency means you're forced to upgrade when the vendor upgrades, which introduces risk. Data portability is the flip side — if you want to leave the vendor, extracting your data and your models is difficult and expensive. Many vendor agreements make it intentionally difficult.

For partnership approaches, the hidden costs include coordination overhead. Meetings with the partner. Decisions made slowly because alignment is required. Change control processes that take longer because you need partner sign-off. IP disputes become expensive. If the partnership ends badly, you might face litigation. Dependency on the partner's roadmap means your timeline is not your own. If the partner decides to deprioritize something, you're stuck. Strategic misalignment can emerge slowly. The partner's business model might start to conflict with yours. A partner might decide they'd rather acquire you than keep partnering with you. These dynamics aren't failures — they're features of partnership. Account for them.

The lesson is simple: get the true cost for all three options before deciding. Include the hidden costs. Talk to other organizations that have chosen each path. Ask them about the surprises, the costs they didn't anticipate, and what they'd do differently. Then make the decision with full information.

• • •

6. Portfolio Coherence

Most companies end up with a mix of build, buy, and partner — and that's fine. What matters is coherence. A technology portfolio is like a portfolio of investments. Some diversification is healthy. Some concentrated bets are appropriate. But if you have ten random investments with no connection to each other, you have a poor portfolio. The same is true with AI capabilities.

Does your mix of build, buy, and partner decisions reinforce the guiding policy from Chapter 6? Or does it create a Frankenstein architecture where nothing integrates and every system speaks a different language? A manufacturing company might decide to build a custom supply chain optimizer (strategic advantage) but buy inventory management (table stakes) and partner with a logistics AI startup (learning and development). That's coherent. The builds address what matters most. The buys handle table-stakes efficiently. The partnerships create options for future capability.

A different manufacturing company might decide to build inventory management, buy supply chain optimization, and partner with a logistics AI startup. The pattern is inverted and no longer coherent. They're building what others have solved and buying what should be a strategic advantage. The builds and buys work at cross-purposes. Apply Rumelt's coherence principle: every capability decision should be tested against the guiding policy. If the decision contradicts the policy, revisit the decision. If the policy doesn't guide the decision, revisit the policy.

Portfolio coherence also applies across time. Your Horizon 1 strategy might call for buying commodity tools and moving fast. Your Horizon 2 strategy might call for building differentiated capabilities. This is fine — they're different time horizons with different economics. But make sure it's intentional, not accidental. Are you buying because you don't have time to build in Horizon 1? Or because the capability isn't important? If it's the former, what's your plan to transition to building in Horizon 2 if the capability becomes strategic? If you don't have a plan, you're creating a future problem.

• • •

7. Conclusion: Strategic Clarity

Return to the fintech story with which this chapter opened. The startup's custom fraud detection model was superior not because custom is always better than bought. It was superior because the startup built something in the right place — the place where custom AI connected to their crux, the source of their competitive advantage. They had proprietary data that made the model better. They had the talent to build it. They understood that fraud detection was not just a feature, but a fundamental part of how they would win.

The competitor could have succeeded with a different strategy. They could have bought the fraud detection tool and built something custom in a different area — perhaps in real-time account analytics or in personalized lending offers, the areas where they could create differentiation. They didn't fail because buying is inherently inferior to building. They failed because they made a build-buy-partner decision without a strategic framework. They bought fraud detection and then expected it to be a source of differentiation, when it was always going to be table stakes. They wanted competitive advantage from a commodity.

The framework in this chapter — from the decision tree to the hidden cost analysis to the portfolio coherence test — is designed to prevent this error. Use it to ask the hard questions before committing to build or buy or partner. Is this capability at your crux? Do you have proprietary data? Can you retain talent? Will competitors eventually buy the same thing? If the answers don't support the decision, don't make it. The build-buy-partner choice is too important to get wrong, and the consequences compound over years.

This chapter concludes Part II: Strategy. You now have a complete framework for the what (prioritize, from Chapter 5), the when (sequence, from Chapter 6), and the how (build-buy-partner, from this chapter). With the strategic decisions made, Part III turns to the hardest part — making it work inside a real organization. Strategy is just paper until people make it happen. Chapter 8 starts with the human side: how to organize, how to hire, how to develop talent, and how to make sure the strategy doesn't just stay on the slide deck. The next chapter is where the work becomes real.

 

CHAPTER EIGHT

The Human Side of AI

• • •

When a global professional services firm announced its AI strategy at a town hall meeting, the message seemed clear and compelling. The Chief Executive Officer invoked the promise of "efficiency" repeatedly—fourteen times in a single speech. The presentation was polished. The business case was solid. The technology roadmap was realistic. Then something unexpected happened: within a single week, three of the firm's top performers resigned.

These departures were not random. The departing consultants were senior enough to read between the lines. "Efficiency," they understood, meant fewer people. It meant that the firm intended to accomplish the same work with a smaller headcount. These were experienced professionals who had weathered downturns and restructurings before. They understood what an AI transformation meant for their career trajectories, their job security, and their colleagues. They chose to leave before the cuts came.

The paradox was painful in its simplicity: the firm's AI strategy was technically sound. The operating model was well thought out. The implementation plan was realistic. But its people strategy was nonexistent. This is the central lesson of Part III of this book—AI transformation succeeds or fails primarily on the human side. The technology is the easier part. The people are everything.

• • •

The Workforce Paradox

AI creates a fundamental paradox in the modern workforce, one that many organizations fail to acknowledge until it's too late. The people you most need for your AI transformation are precisely the people most threatened by it.

Consider the commercial underwriter at an insurance company, the regulatory analyst at a financial institution, or the consultant who diagnoses client problems. These are knowledge workers with deep process understanding. They know the edge cases that break simple rules. They understand why certain procedures exist and which ones can be safely modified. They have built relationships with customers, colleagues, and stakeholders over years. They know where the landmines are—the places where a seemingly logical process fails in practice.

These are exactly the people you need to design an AI system that actually works. They need to help define what data the AI should learn from. They need to explain the rules and heuristics that have accumulated over years of experience. They need to identify edge cases and exceptions that training data might miss. They need to help validate whether the AI's decisions make sense. Without these subject matter experts actively engaged in AI design, you end up with systems that technically work but miss critical nuances, that produce recommendations that seem plausible but ring false to experienced humans, that handle normal cases well but fail catastrophically on edge cases.

But here is the paradox: these same people fear being automated away. They see the AI system and reasonably ask: "If the AI can do what I do, why does the company need me?" This fear is not irrational. It is often justified. In many organizations' AI roadmaps, the answer is actually "they don't need you anymore."

Companies that fail to resolve this paradox—that fail to clearly articulate why they need these people and what their role will be—end up with one of two outcomes, both destructive. The first is passive resistance. Employees comply with the AI transformation because they have no choice. They go to the training sessions. They use the tools. But they don't bring their best thinking. They don't contribute their expertise to making the AI better. They are emotionally disengaged. They're updating their resumes. The AI system suffers because it doesn't get the benefit of deep human expertise in its design and validation.

The second outcome is active sabotage—subtle, and therefore harder to detect and address. A subject matter expert who fears for their job can find dozens of ways to undermine an AI initiative without being overtly insubordinate. They can consistently find flaws in the training data ("I've seen cases that this doesn't account for"). They can cherry-pick exceptions that make the AI look bad. They can withhold the context and expertise that would make the system better. They can subtly discourage their colleagues from adopting it. They can plant seeds of doubt: "The data scientists didn't understand our actual business; this will never work." This kind of resistance is corrosive because it's hard to prove and address directly.

• • •

Reskilling Is Not a Strategy

Open any major company's AI transformation plan and you will find "reskilling" listed as a key initiative. It usually appears in the human capital or talent section. It typically has a budget allocated and a senior leader assigned. And in most cases, it is treated as a checkbox exercise rather than a serious strategic intervention.

The standard approach to reskilling in most organizations looks something like this: "We're going to make AI and machine learning training available to all employees through an online learning platform. We'll partner with LinkedIn Learning or Coursera. We'll send out monthly emails encouraging people to take a course on machine learning fundamentals. We'll track completion rates and celebrate when 50% of the organization has completed the training."

This approach misses the mark entirely, for a simple reason: most employees don't need to understand how AI works. They need to understand how their role changes, what new skills they need for the changed role, and what career path exists for them in the transformed organization. A customer service representative taking a course on neural networks and training data is not being reskilled. A customer service representative learning how to work effectively with an AI chatbot—how to handle the cases it can't, how to escalate appropriately, how to provide feedback that improves the system—is being reskilled.

Effective reskilling must be role-specific, connected to real and anticipated job changes, and ongoing rather than a one-time event. It must answer concrete questions: "What will I actually do differently starting next month? What new skills will that require? How will I learn them? Who will teach me? How will I know if I'm doing it well? What happens if I can't adapt?" Reskilling must also address the economic reality: if a role is genuinely being eliminated, reskilling programs that merely train people for equivalent roles elsewhere in the organization are incomplete. Outplacement support, bridge employment, and honest conversations about severance must accompany that reskilling.

The companies that handle reskilling effectively treat it not as a talent development program but as a core part of the business transformation. They invest heavily. They assign it to their Chief Human Resources Officer or their Chief Learning Officer, with executive visibility and accountability. They customize curricula by role. They tie it to promotion and compensation decisions. They create mentorship between people doing the old work and people doing the new work. They measure success not by course completion rates but by adoption rates, engagement scores, and career progression.

• • •

Role Redesign, Not Role Elimination

The wrong strategic question to ask about AI and the workforce is: "Which jobs can we eliminate?" The right question is: "How do we redesign roles to combine human strengths with AI strengths?"

This distinction matters because it leads to fundamentally different decisions and outcomes. The elimination question starts from a cost-cutting mindset. It assumes that if a machine can do the work, the human should be removed. The redesign question starts from a capability-building mindset. It assumes that the human and the machine working together can accomplish things neither could do alone.

Where are humans superior to AI? Humans excel at judgment calls in ambiguous situations. They bring empathy and emotional intelligence. They create novel connections across disparate domains—creative thinking, in other words. They navigate complex interpersonal situations and communicate nuance. They adapt to unexpected changes and handle genuine novelty. They build and maintain relationships. They provide meaning and vision.

Where is AI superior to humans? AI excels at pattern recognition at scale. It can process millions of data points and find subtle patterns that humans would miss. AI is consistent and doesn't get tired or distracted. AI can process massive volumes of information very quickly. AI can be deployed across thousands of tasks simultaneously. AI doesn't need breaks or vacation or benefits.

The magic happens when you redesign roles so that each party does what they're best at. Consider a commercial underwriter at an insurance company—a concrete example that makes this tangible.

The Traditional Commercial Underwriter

An underwriter spends their day gathering data. They request financial statements from applicants. They verify information through third-party sources. They input data into systems. They run standard analyses. They check ratios and metrics against benchmarks. They review comparable transactions. They compile all of this into a recommendation memo. The memo typically takes 40% of their time. The remaining 60% is data gathering and analysis.

The Redesigned Role

Now add an AI system trained on decades of underwriting decisions. The AI system handles data gathering. It pulls information directly from applicants' systems via APIs. It verifies data through third-party data feeds. It runs all standard analyses and benchmarking. It compiles all of this into a preliminary recommendation. It does in 30 seconds what used to take the underwriter two days.

But the human underwriter's job doesn't disappear. Instead, it transforms. Now they spend their time on what matters: risk judgment, customer relationships, and deal structuring. They review the AI's preliminary recommendation and ask: "Is this right? Do we understand the customer's true risk profile? Are there edge cases or unusual circumstances this analysis might miss? Is there context in the relationship that changes our assessment? Should we take this deal? If so, on what terms?" These are judgment calls that require expertise, intuition built on years of experience, and understanding of the business context. These are the conversations that actually matter.

The underwriter is now more productive, because they're not wasting time on routine analysis. They have higher job satisfaction because they're doing the work that requires judgment and relationship skills—the work that feels meaningful. The company takes on more deals with the same number of underwriters. Risk quality might actually improve because the human judgment is more informed by comprehensive AI analysis. This is what good role redesign looks like.

• • •

The Coherence Principle Applied to People

Recall from earlier chapters that Rumelt's coherence principle states that all strategic actions must reinforce each other. They must point in the same direction. A company that announces a shift toward digital distribution but continues to invest primarily in physical retail locations is incoherent. A company that commits to serving the mass market while building a luxury brand is incoherent. Incoherence creates confusion, cynicism, and poor execution.

This principle applies powerfully to the human side of AI transformation. Your workforce strategy, your incentive structures, your communication, your hiring criteria, and your organizational design must all point in the same direction. If they don't, you send incoherent signals that breed cynicism.

Consider what incoherence looks like: You announce an AI transformation where roles will be redesigned and people will work alongside AI systems. You commit publicly to "no layoffs in the AI transformation." This is your people strategy. But then, in your talent management decisions, you continue promoting people for the skills that are about to be made obsolete. You measure performance using pre-AI metrics that no longer align with the redesigned roles. You hire for capabilities that will matter less in the transformed environment. You continue compensating people for the old work. You create organizational structures that made sense before but don't fit the new AI-enabled ways of working.

Employees see this incoherence immediately. They read between the lines: the words are inspiring, but the actual decisions all point toward "we're trying to get smaller." They become cynical. They don't believe the public messaging. They're suspicious that the no-layoff promise is temporary. They resist participation in the transformation because they suspect its true purpose is to eliminate their jobs. Performance and engagement decline. The AI transformation stalls.

Coherence requires alignment across multiple levers. Your hiring criteria must shift to emphasize the capabilities that matter in the AI-enabled future. Your promotion decisions must reward people for the new skills—working with AI, being comfortable with experimentation, showing adaptability. Your performance metrics must be updated to measure what matters now, not what mattered before. Your compensation structure must recognize that AI collaboration is valuable and compensate accordingly. Your organizational design must reflect the new roles and workflows. Your communication must consistently reinforce the message that people have a future in the transformed organization, and that the company is investing in them.

• • •

Communication That Actually Works

What should you actually say to employees about AI transformation? And perhaps more importantly, what should you not say?

What Not to Say: Don't use "efficiency" as a euphemism for headcount reduction. Employees understand what you mean. You might think you're being diplomatic, but you're being dishonest. Dishonesty erodes trust more than hard truths. Don't promise "no one will lose their job" unless that promise is true and you're certain you can keep it. If some roles will be eliminated, say so. Employees will eventually figure it out, and broken promises are corrosive to trust. Don't communicate AI strategy primarily through press releases aimed at investors and Wall Street. Your employees will rightly feel that you're managing your stock price more carefully than you're managing their futures. Don't rely on big town halls and speeches as your primary communication mechanism. That's broadcasting, not dialogue. Employees have questions, and they need to feel heard.

What to Do: Be specific about how roles will change. Don't say "your role will evolve." Say: "You currently spend 60% of your time on data gathering and analysis. That work will be handled by an AI system. Your job will change to focus on client relationship management, risk judgment, and deal structuring. Those conversations are higher-value work, and they're where we need your expertise." This specificity helps people understand what's actually happening to their job.

Share the timeline honestly. Ambiguity breeds anxiety. Give people a clear timeline for when roles will change. If you don't know yet, say so: "We expect this change to roll out over the next twelve months, and we'll provide more specific timelines for your area by next quarter." This is more helpful than pretending you have more certainty than you do.

Acknowledge uncertainty. You won't have all the answers. The AI field is evolving rapidly. Your competitive environment is changing. Some of what you plan will work; some won't. Say this out loud: "We have a strategy, but we're also learning as we go. We're going to make mistakes. We're going to adjust. We're counting on you to help us figure out what works." This honesty makes people more willing to engage.

Involve employees in designing their new roles. Don't announce the redesigned role as a done deal. Instead, involve the people doing the work in shaping what the new role will look like: "We know that AI will handle the data gathering and analysis. We're not sure yet exactly how you'll spend your time. We need your input on what the job should look like." This involvement increases buy-in and generates better role designs.

Create visible proof points early. Find one team that can adopt the new AI-enabled way of working early. Support them heavily. Make their success visible. Tell their story: "Sarah's team started using our AI assistant three months ago. Skeptics predicted it would never work. Instead, they've increased throughput 40% while improving quality. More importantly, Sarah tells us she has more time to do the work she actually enjoys—the strategic client work. And she's still with us, still growing, still developing." Visible proof points are more persuasive than speeches.

• • •

Building an AI-Capable Culture

Culture change is difficult. You can't mandate it with a policy or a memo. But you can cultivate it through consistent actions, example-setting, reinforcement, and patience. The target culture for AI transformation has three key characteristics: data literacy, experimentation tolerance, and human-AI fluency.

Data literacy means everyone in your organization can read and challenge data-driven insights. It doesn't mean everyone needs to be a data scientist. It means that when someone presents an analysis, others can ask good questions: "What data is this based on? How recent is it? Are there known limitations? What assumptions went into this analysis? What would change if we used different assumptions?" Data literacy is the bulwark against both blind trust in data (treating every number as gospel) and dismissive cynicism (rejecting data-driven insights out of hand).

Experimentation tolerance means that people are comfortable running controlled tests, learning from failures, and iterating. In many organizations, failure is shameful. A pilot project that doesn't work is seen as a waste of resources and a mark against the team that led it. In an AI-capable culture, a failed experiment is valued learning. It's data about what doesn't work, which is progress. This doesn't mean being reckless. It means setting appropriate scope and stakes for experiments, documenting what you learn, and building organizational knowledge from failures.

Human-AI fluency means people naturally think about which parts of their work are best done by humans and which by AI. It's second nature. It's embedded in how they approach problems. Instead of "how do I do this?" they ask "how should this work be divided between humans and AI?" They understand the strengths and limitations of both. They're comfortable with AI as a collaborator rather than seeing it as a threat.

How do you cultivate these three cultural characteristics? Through education and training that's ongoing, not a one-time event. Through hiring decisions that prioritize people with these traits. Through modeling—leaders publicly demonstrating data literacy, comfort with experimentation, and human-AI thinking. Through rewards and promotion—making it clear that people who exhibit these characteristics advance. Through storytelling—celebrating examples and making them part of organizational lore. Through governance and decision-making processes that embody these values. If your budget approval process still requires ironclad certainty and punishes failed experiments, you won't cultivate an experimentation culture no matter how many training programs you run.

• • •

The Difference It Makes

Return to the professional services firm where we started this chapter. The firm that announced an AI strategy centered on "efficiency" and watched its top talent walk out the door within a week. What happened next?

The departures were painful. They forced a reckoning. The board and senior leadership realized that they had made a significant strategic mistake. They had focused entirely on the technology and the cost structure while ignoring the human dimension. They brought in a new Chief People Officer who had successfully navigated transformations at other companies. This new leader had a simple conviction: you build AI transformation around people, not around cost.

The firm completely reframed its AI narrative. Instead of leading with efficiency, they led with role evolution. They met with consulting teams and asked: "What work do you find boring and repetitive? What do you actually enjoy about your job?" The answers were consistent. Consultants hated gathering data and compiling analyses. They loved the client interaction, the problem diagnosis, and the strategic advice. The new narrative was simple: "AI will handle the boring work. You'll spend more time on the work you actually enjoy."

They invested heavily in reskilling, focused specifically on how to work with AI systems. They updated their promotion criteria to reward consultants who could effectively collaborate with AI. They changed their performance metrics to measure the quality of client relationships and strategic advice, not the volume of analyses produced. They hired new consultants with the profile of people who were comfortable with AI and excited by the possibility of focusing more on client work.

The results were striking. Attrition dropped back to normal levels. Engagement scores, which had plummeted after the initial AI announcement, rebounded. And—this was the most important outcome—AI adoption accelerated. Consultants used the tools because they wanted to. They contributed their expertise to making the AI better. They collaborated with data scientists to improve the models. They became advocates for the technology rather than resisters.

This firm learned the hard way that AI transformation is ultimately about people. The technology is the enabler, but people are the foundation. Get the people side right—resolve the paradox, address reskilling seriously, redesign roles thoughtfully, maintain coherence, communicate honestly, and cultivate a capable culture—and the technology will flourish. Get it wrong, and you'll have the best AI systems in the world that no one wants to use.

With the human foundation in place, the question that remains is structural: How do you redesign the way your organization makes decisions? How do you build decision-making processes that incorporate both human judgment and AI-driven insights? That is the subject of Chapter Nine.

 

CHAPTER NINE

Redesigning Decisions

• • •

A major airline automated its overbooking decisions with AI. The model was technically excellent — it maximized revenue per seat with precision that would have impressed any revenue manager. But it also created a PR disaster when it bumped a family of four traveling to a funeral during a holiday weekend, because the model couldn't weigh reputational risk against marginal revenue. The problem wasn't the AI itself. It wasn't that the machine learning engineers had made a technical mistake. The problem was that nobody redesigned the decision system around the AI — nobody defined where human judgment needed to remain. Nobody asked the fundamental question: just because we can automate this decision, should we?

This is the story of AI transformation at the decision level. It's where abstract concepts like "augmentation" and "automation" meet the real world — where families miss funerals, customers feel treated unfairly, and organizations face the consequences of machines making decisions they were never designed to make. But it's also where the highest value of AI lies: not in replacing human decision-makers, but in fundamentally reimagining how decisions get made.

• • •

Decisions Are the Unit of Transformation

Most discussions about AI in business focus on processes and tasks. We talk about "automating customer service" or "optimizing supply chain operations" or "streamlining loan approvals." These framings aren't wrong, exactly, but they're incomplete. They focus on the activities themselves rather than the critical outcome: the decisions those activities produce.

The real impact of AI is on decisions. Every business runs on thousands of decisions, made constantly, across every function. Pricing decisions determine whether a customer buys or walks away. Hiring decisions shape organizational culture and capability. Inventory decisions determine profitability or excess waste. Credit decisions affect whether families can buy homes. Routing decisions determine delivery times and customer satisfaction. Scheduling decisions determine whether employees work reasonable hours. These aren't abstract management concepts — they're the economic heartbeat of the organization.

AI's power is in augmenting these decisions: making them faster, more consistent, better informed by patterns in data. AI can spot what's likely to happen next before humans see it coming. AI can weigh hundreds of variables simultaneously in ways human brains simply cannot. AI can make the same decision the same way every time, eliminating inconsistency. But "augmenting" is not the same as "automating," and confusing the two is where organizations get into trouble.

When you think about transformation at the decision level, your question changes from "How can we use AI to do what we're already doing faster?" to "How should we redesign this decision so that it combines human judgment and machine intelligence in the optimal way?" That shift — from automation thinking to augmentation thinking — is where real transformation begins.

• • •

The Decision Spectrum: From Fully Human to Fully Automated

Not all decisions are created equal. Some are routine and low-stakes. Some are complex and high-stakes. Some involve ethical dimensions that machines cannot and should not navigate alone. Thinking about how to integrate AI into your decision-making requires a framework that acknowledges these differences.

Consider a decision spectrum that runs from fully human at one end to fully automated at the other. Between those poles sit four distinct tiers, each with different characteristics, requirements, and risks.

One caveat before walking through the tiers: your legal counsel should be consulted regularly on the legal and ethical dimensions of AI and automation, including security and data privacy. None of the tiers that follow removes that obligation.

Tier 1: Fully Automated. AI decides, no human involvement. Appropriate only for high-volume, low-stakes, well-defined decisions where the outcome space is narrow and the cost of error is low. Example: an e-commerce site deciding which item recommendation to show in a personalization slot. If the AI picks the wrong recommendation, the customer sees a product they might not want — an inconvenience, not a catastrophe.

Tier 2: AI Recommends, Human Approves. The AI provides a recommendation based on its analysis, but a human must approve before action is taken. This works for medium-stakes decisions where AI can provide excellent analysis but where the decision is consequential enough to warrant human review. Example: a bank's loan underwriting system flags a mortgage application as approved, but a loan officer reviews it before final approval, with authority to override. The AI does the heavy analytical lifting; the human applies judgment and accountability.

Tier 3: AI Informs, Human Decides. The AI provides data, patterns, and analysis, but the human makes the final decision. This is appropriate for high-stakes and complex decisions where the context matters enormously, where values are at play, or where unprecedented situations might arise. Example: a hiring committee uses AI analysis to highlight patterns in candidate qualifications, background, and interview performance, but the humans on the committee make the final decision, weighing culture fit, team needs, and judgment that no model can capture.

Tier 4: Fully Human. AI plays no role. These are decisions about values, organizational direction, how to handle relationships, or unprecedented situations. Example: how does a company respond when an AI system makes a harmful decision? That requires human deliberation about values, not machine analysis.

The key insight is this: not every decision should move to full automation just because AI makes it possible. Strategic organizations deliberately classify their key decisions across this spectrum. They ask: for this specific decision, what tier makes sense? The answer depends on the stakes, the complexity, the cost of error, and the degree to which context and judgment matter.

Start with your ten most important decisions. Where on this spectrum does each one belong? If you've been pushing everything toward Tier 1 (full automation) because the technology is available, you've probably misclassified some. The airline hadn't asked this question. It had automated a Tier 2 decision (AI recommends, human approves) all the way to Tier 1, removing human judgment from a situation where the cost of error was high.
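
To make the classification exercise concrete, here is a minimal sketch, in Python, of how a team might encode the spectrum while cataloging its decisions. The attributes, thresholds, and rules are illustrative assumptions rather than a formula; the value is in forcing an explicit, recorded answer for each decision instead of letting the available technology choose by default.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    FULLY_AUTOMATED = 1   # AI decides, no human involvement
    AI_RECOMMENDS = 2     # AI recommends, human approves
    AI_INFORMS = 3        # AI informs, human decides
    FULLY_HUMAN = 4       # AI plays no role

@dataclass
class Decision:
    name: str
    error_cost: str          # "low", "medium", or "high": the cost of a wrong call
    values_at_stake: bool    # does it touch ethics, values, or relationships?
    context_sensitive: bool  # do unusual circumstances change the right answer?

def classify(d: Decision) -> Tier:
    """Illustrative rules; every organization will tune its own."""
    if d.values_at_stake:
        return Tier.FULLY_HUMAN
    if d.error_cost == "high":
        return Tier.AI_INFORMS
    if d.error_cost == "medium" or d.context_sensitive:
        return Tier.AI_RECOMMENDS
    return Tier.FULLY_AUTOMATED

# The airline's overbooking call: a modest cost per routine error, but highly
# context-sensitive. Rules like these keep it at Tier 2 rather than Tier 1.
overbooking = Decision("overbooking", error_cost="medium",
                       values_at_stake=False, context_sensitive=True)
print(classify(overbooking))  # Tier.AI_RECOMMENDS
```

The point of writing it down is not to automate the classification itself; it is that the criteria become explicit and debatable rather than implicit in whoever built the model.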

• • •

The Chain-Link Problem in Decision Systems

Strategy scholar Richard Rumelt introduced the concept of "chain-link" systems: organizational capabilities that are only as strong as their weakest link. Replace one weak link with an excellent one, and performance barely improves. But strengthen every link in the chain, and the system becomes genuinely competitive. This concept applies powerfully to AI-augmented decision systems.

A decision system is a chain. One link is the AI model itself — does it generate accurate, unbiased predictions? That's important. But the model is not the only link. Other links include the data feeding the model (is it current, accurate, and representative?); the governance process (can we act on the AI's recommendation quickly enough for it to matter?); the escalation paths (what happens when the AI's recommendation seems obviously wrong?); the feedback loop (do human overrides improve the AI for next time, or do they disappear into a black hole?); the monitoring system (are we tracking whether the AI is behaving consistently in the field, or only during validation?); and the human decision-makers (do they understand AI's capabilities and limitations well enough to use it appropriately?).

Organizations often excel at one or two links. An AI team builds a brilliant model (strong link). But then the governance process takes three weeks to act on the model's recommendations, by which time the opportunity has vanished (weak link). Or an organization automates routine decisions beautifully (strong link) but fails to monitor for edge cases or model drift (weak link). The airline had a strong link in its overbooking model but a weak link in its governance structure — nobody was explicitly asking whether certain high-risk situations should be escalated to humans.

Redesigning decisions means strengthening the entire chain, not just the AI link. It means treating the decision system as an integrated whole. When you build decision intelligence, you're building a capability that spans model quality, data quality, process speed, escalation governance, feedback mechanisms, monitoring, and human capability. Neglect any of these, and the system fails.

• • •

Governance for the AI Era

Most organizations' decision governance was designed for a world where humans made all the decisions. It didn't anticipate machines. That governance structure doesn't work when AI is part of the loop.

Traditional governance asks: who decides? With AI in the loop, the question becomes more complex: who decides what the AI should do? Who decides when to override the AI? Who decides whether an AI decision was good or bad? Who is accountable when the AI makes a harmful decision? Who is responsible for ensuring the AI isn't biased? Who monitors whether the AI is still performing well weeks or months after it was deployed?

Robust AI-era governance requires five key structures. First: clear escalation paths. When does the AI's recommendation trigger a human review? For the airline, the answer should have been: any overbooking situation involving bereavement fares, holiday weekend travel, or families with young children gets escalated to a human. The AI provides its recommendation, but a human makes the call. Second: feedback loops that improve the AI. When a human overrides the AI, does that override data flow back to the data science team? Can it be used to retrain the model? If not, the human feedback is wasted, and the AI never learns.

Third: regular audits of automated decisions. If a decision is fully automated (Tier 1), you must audit it regularly for bias, drift, and unintended consequences. Are certain demographic groups being treated differently? Is the model still making good decisions, or has the real world changed such that the model's patterns no longer apply? Fourth: defined accountability. When an AI makes a bad decision, who is responsible? The data scientist who built the model? The business leader who approved its deployment? The executive who decided to automate that particular decision? Without clarity on accountability, nothing improves.

Fifth: explicit decision guidelines. For Tier 2 and Tier 3 decisions, humans need clear guidance on how to interpret the AI's recommendation and when to override it. A loan officer shouldn't have to guess at what conditions justify rejecting a "recommended approval" from the AI. The organization should articulate this upfront: in these situations, you have authority to override.
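
The first two structures, escalation paths and feedback loops, are concrete enough to sketch. The fragment below, in Python, shows one possible shape: an escalation check whose human overrides are logged so they can later feed audits and retraining. The field names and rules are illustrative assumptions drawn from the airline example, not a production design.

```python
from dataclasses import dataclass

@dataclass
class BumpCandidate:
    flight: str
    bereavement_fare: bool
    holiday_weekend: bool
    children_in_party: int

@dataclass
class DecisionRecord:
    candidate: BumpCandidate
    ai_recommendation: str   # e.g. "bump" or "hold"
    final_decision: str
    overridden: bool

override_log: list[DecisionRecord] = []   # feeds audits and model retraining

def needs_human_review(c: BumpCandidate) -> bool:
    """Escalation rules of the kind the airline later adopted (illustrative)."""
    return c.bereavement_fare or c.holiday_weekend or c.children_in_party > 0

def decide(c: BumpCandidate, ai_recommendation: str, ask_human) -> str:
    """ask_human stands in for whatever interface puts the call before a gate agent."""
    if not needs_human_review(c):
        return ai_recommendation
    human_call = ask_human(c, ai_recommendation)
    override_log.append(DecisionRecord(c, ai_recommendation, human_call,
                                       overridden=(human_call != ai_recommendation)))
    return human_call
```

The design choice worth noticing is that an override is never a dead end: it is recorded, attributable, and available to whoever audits the automated tier or retrains the model.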

• • •

The Over-Automation Trap: The Cost of Removing Humans

The temptation to automate is powerful. If AI can make a decision, why wouldn't you automate it? The reasons are strategic, not just ethical. Over-automation creates brittleness.

A brittle system works beautifully in normal conditions. It processes decisions fast, consistently, and cheaply. But in unusual conditions — edge cases, unprecedented situations, black swan events — it fails catastrophically. The airline's overbooking system worked perfectly 98% of the time. It was the 2% of situations involving sensitive human circumstances where the lack of human judgment became a liability.

Strategic organizations deliberately keep humans in certain loops not because the AI can't do the task, but because the cost of the AI being wrong — in terms of customer relationships, brand reputation, legal liability, or employee morale — is too high. This is not a failure of the AI or the organization. It's a strategic choice. For a bank's mortgage underwriting, removing all human judgment might reduce processing time by two days and reduce costs by 10%. But it also removes the human ability to exercise discretion in unusual situations, to catch edge cases, to make judgment calls that protect the bank's long-term interests. The cost of occasional AI error in mortgage decisions — both in terms of risk and in terms of customer relationships — may well justify keeping humans in the loop.

The organizations getting the most value from AI aren't removing humans from decisions. They're positioning humans in the right place in the decision process. Humans become more productive and better informed. AI becomes more effective because humans provide oversight. The combination outperforms either in isolation.

• • •

Building Decision Intelligence

The goal of AI transformation isn't just to add AI to existing decision processes. It's to build a new organizational capability: decision intelligence. This is the systematic ability to make better decisions by intentionally combining human judgment with AI analysis. It's a capability that can be built, improved, and sustained across the organization.

Building decision intelligence requires four steps. First: catalog your key decisions. Don't try to transform everything at once. Identify the twenty or thirty decisions that matter most to your business — the ones that happen frequently, affect revenue or customer satisfaction significantly, or have strategic importance. Don't include every tiny decision. Focus on the ones where better decisions drive real value.

Second: classify each decision on the spectrum from Tier 4 (fully human) to Tier 1 (fully automated). For each key decision, ask: where should this decision sit? Is the cost of error low enough to justify full automation? Does this decision require human judgment about context, values, or ethics? Is this a situation where AI can provide excellent recommendations that humans should then approve? Be honest about where each decision belongs based on its characteristics, not just where the technology allows.

Third: design the right human-AI interaction for each tier. For a Tier 1 (fully automated) decision, build robust monitoring and escalation. For a Tier 2 decision (AI recommends, human approves), design a workflow where the AI's recommendation is front-and-center but humans have clear authority and guidelines for override. For a Tier 3 decision (AI informs, human decides), ensure the AI provides interpretable analysis that humans can actually understand and act on. Fourth: continuously improve both the AI models and the human decision-makers. Use feedback loops to improve the AI. Use training and coaching to improve the judgment of the humans in the loop.
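
As a small illustration of the first two steps, the sketch below shows what a cataloged and classified portfolio might look like once those arguments are settled. The decisions and tier assignments are hypothetical; the signal to watch for is a portfolio that skews entirely toward full automation.

```python
from collections import Counter

# Hypothetical output of steps one and two: each key decision carries an
# explicit, agreed tier (1 = fully automated ... 4 = fully human).
decision_catalog = {
    "personalization slot ranking": 1,
    "overbooking and bumping": 2,
    "mortgage final approval": 2,
    "dynamic pricing": 2,
    "hiring offer": 3,
    "supplier selection": 3,
    "response to a harmful AI decision": 4,
}

counts = Counter(decision_catalog.values())
print(dict(sorted(counts.items())))   # {1: 1, 2: 3, 3: 2, 4: 1}
# A portfolio dominated by tier 1 usually signals misclassification,
# not unusually low-stakes decisions.
```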

Decision intelligence becomes a source of competitive advantage. Organizations that think about decisions at this level — systematically, deliberately, across the whole portfolio — make better decisions faster. Their decision-making scales as the organization grows. Their decisions incorporate both machine precision and human wisdom. Their AI systems produce value because they're embedded in governance systems designed for them.

• • •

The Airline's Second Act: When Redesign Works

After the PR disaster, the airline decided to redesign its overbooking decision system. It didn't abandon AI. Revenue per seat optimization was still a crucial business goal. The airline's leadership asked a different question: how can we optimize revenue while keeping humans in the loop where it matters?

They reclassified overbooking as a Tier 2 decision: AI recommends, human approves. The AI still analyzed all the same variables and provided its revenue-optimizing recommendation. But now, certain flights automatically escalated to a human decision-maker: any flight on a holiday weekend, any flight involving bereavement fares (where customers had explicitly stated travel was for a funeral), any flight where the system detected a family with young children being bumped.

The human gate agents received clear guidance: if the AI recommends bumping a family traveling to a funeral, you have authority to override and offer alternative solutions — rebooking on later flights, offering compensation to encourage voluntary bumping, or simply allowing the family to travel. This wasn't left to gut feeling. The organization articulated the principle: certain human circumstances override pure revenue optimization.

The result? Customer satisfaction scores on overbooking situations improved significantly. Customers felt the airline was treating them as humans, not just revenue units. And remarkably, revenue per seat barely changed. The escalations to human decision-makers triggered on less than 2% of flights. For the vast majority of overbooking situations — routine ones without sensitive human circumstances — the AI's recommendations still improved revenue. For the 2% where it mattered most to the customer experience, humans applied wisdom.

This is the lesson of AI transformation at the decision level. The best AI decision systems aren't the ones that remove humans. They're the ones that put humans in exactly the right place — removed from routine, repetitive decisions where machine consistency is better, but present in decisions where judgment, context, and human values matter. That's not a failure of AI. That's the mature deployment of AI.

• • •

Throughout this chapter, we've focused on decision redesign — on the system-level question of how to integrate AI and human judgment. But decisions run on data. The quality of your decisions is bounded by the quality of the information informing them. The final chapter in Part III addresses the data foundation. Without it, no amount of AI sophistication in decision-making creates lasting value.

 

CHAPTER TEN

The Data Foundation

A CPG company spent $40 million on a "data lake" that was supposed to be the foundation for AI. Three years later, data scientists call it the "data swamp." It contains petabytes of data that nobody trusts, that few can easily access, and that doesn't connect across business units. Meanwhile, a smaller competitor with a fraction of the data — but better data governance — is running circles around them with AI. More data is not the answer. Better data strategy is.

This story repeats itself across industries. Companies invest billions in data infrastructure, cloud platforms, and analytics tools without first asking a fundamental question: What decisions do we need to improve? What data would actually help us improve them? The result is a familiar pattern: massive investments, disappointing returns, and mounting frustration with the gap between data promise and AI reality.

The error is understandable. Data is essential to AI. Without data, machine learning models have nothing to learn from. But the sufficiency of data does not follow from its necessity. Just because you need good data doesn't mean you need lots of data, or that you should build your entire data infrastructure before you have a clear strategy for using it.

This chapter examines what constitutes an effective data foundation for AI. Not a perfect foundation — perfect is the enemy of done. But a foundation sufficient to the strategic crux you've identified, aligned with your guiding policy, and capable of supporting your early wins. The question is not 'How do we build the biggest data lake?' The question is 'What is the minimum viable data foundation for our strategy, and how do we build it without drowning in complexity?'

Data Strategy Must Serve the Guiding Policy

The sequence matters. Strategy first. Infrastructure second.

The most common data mistake is precisely inverted: companies build the infrastructure before they have a strategy. They invest in massive data warehouses, data lakes, data platforms, and modern cloud infrastructure without ever answering the question: 'What decisions does our guiding policy need us to improve? What data do those decisions require?' The result is a capable infrastructure in search of a purpose.

This happens because data infrastructure is visible and tangible. You can point to it, measure its capacity, track its implementation. A data strategy is abstract and harder to defend in a budgeting process. It's easier to green-light a $10 million data platform than a $2 million project to answer hard questions about what data you actually need.

But this creates a perverse sequence. Once the platform is built, there's institutional pressure to use it — to justify the investment by finding use cases for all that data and processing power, rather than to identify the most valuable use cases and acquire the data needed to serve them. You end up with a solution looking for problems instead of problems searching for solutions.

The correct sequence is brutally simple: Data strategy should be derived from AI strategy, not the reverse. Start with your guiding policy. Identify the key decisions you need to improve. Work backward to the data those decisions require. Work backward again to determine whether you have that data, whether you can access it, and whether it meets quality standards for the use case. Only then do you design your data infrastructure.

This approach almost always reveals that you need less data infrastructure than you thought, but you need it to be better organized, more trustworthy, and more accessible. You're not optimizing for scale. You're optimizing for relevance.

• • •

The Minimum Viable Data Foundation

Recall the concept of the minimum viable product from Chapter 3. The same logic applies to data. You don't need perfect data to start. You need sufficient data for your proximate objectives.

Define the MVDF: the minimum viable data foundation. This is the smallest data foundation that enables your Horizon 1 quick wins — the ones you identified in your guiding policy. For most companies, this typically means three components:

First, reliable data for two to three priority use cases. Not all use cases. Not perfect data. But data reliable enough that you and your stakeholders trust it. If you're building demand sensing models, it means clean historical demand data and relevant promotional data. If you're optimizing supply chain logistics, it means vehicle locations, delivery windows, and traffic patterns. If you're personalizing customer recommendations, it means preference data, purchase history, and product attributes. You're not trying to solve every possible use case. You're trying to solve your two or three highest-impact use cases with sufficient data quality.

Second, basic governance. Someone owns each critical data element. There are documented quality standards for each data set — not abstract policies but specific, measurable definitions of what constitutes "good" data for each use case. There are access controls so that teams who need the data can get it without requiring seventeen approval processes. There are documented lineages so that people using the data understand where it comes from and how it's been transformed.

Third, integration between the two or three systems that matter most. Most organizations run data across disparate silos — sales, marketing, operations, finance. The MVDF doesn't require integrating everything. It requires integrating the specific systems needed for your priority use cases. If demand sensing requires sales data, promotional data, and weather data, you integrate those. If inventory optimization requires supplier data and warehouse location data, you integrate those. Everything else can come later.
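
To show how small this foundation can be in practice, here is a sketch of what an MVDF definition for a single priority use case might look like. Everything in it (the use case, the field names, the thresholds, the owner) is an illustrative assumption; the point is that quality is defined per use case and checked mechanically rather than asserted.

```python
# Minimal sketch of an MVDF definition for one priority use case.
mvdf = {
    "use_case": "demand_sensing",
    "owner": "jane.doe@example.com",   # one named owner per critical data set
    "datasets": {
        "historical_demand": {
            "source_system": "erp",
            "required_fields": ["sku", "week", "units_sold"],
            "max_missing_rate": 0.02,   # "good enough" is defined per use case
        },
        "promotions": {
            "source_system": "trade_promo_tool",
            "required_fields": ["sku", "start_date", "end_date", "discount_pct"],
            "max_missing_rate": 0.05,
        },
    },
}

def passes_quality_bar(records: list[dict], spec: dict) -> bool:
    """Check one dataset against its use-case-specific quality standard."""
    if not records:
        return False
    missing = sum(
        any(r.get(f) is None for f in spec["required_fields"]) for r in records
    )
    return missing / len(records) <= spec["max_missing_rate"]

sample = [{"sku": "A1", "week": "2024-W12", "units_sold": 40},
          {"sku": "A2", "week": "2024-W12", "units_sold": None}]
print(passes_quality_bar(sample, mvdf["datasets"]["historical_demand"]))  # False
```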

This is profoundly different from the typical data platform roadmap, which often starts with a complete enterprise data warehouse, integration of all systems, policies covering all data, and teams working through hundreds of use cases before a single model goes into production. By that point, you've often spent two or three years and tens of millions of dollars before you've proven any value. More often, the project stalls, the team turns over, and you never deliver.

The MVDF approach delivers value within months. It creates early wins that build momentum. It generates real feedback from real users about what data is actually useful, what quality standards actually matter, and what integration patterns actually work. It creates justification for investment in later phases. And it dramatically reduces the risk that you'll build the wrong thing at massive scale.

• • •

The Chain-Link Lens on Data

Richard Rumelt's work on strategy includes a concept he calls the "chain-link model" of competitive advantage, the same idea we applied to decision systems in the previous chapter. The insight is simple but powerful: a strategy is a system of interdependent parts. Your advantage is only as strong as its weakest link. If one critical element fails, the entire advantage collapses.

Data systems are quintessential chain-link systems. Your AI system is only as good as its weakest data link. Brilliant machine learning models trained on dirty data produce confident garbage — statistically significant predictions that are completely wrong because the underlying data was corrupt. Perfect data in systems nobody can access is wasted capability; it might as well not exist. Clean data from one system disconnected from data in another produces models that fail the moment they encounter the real world.

The implication is clear: don't optimize for the strongest link. Identify the weakest link in your data chain and fix that first.

What are common weak links? Consider these patterns:

Data quality: The raw data is corrupt, incomplete, or unreliable. You have transaction records with missing fields. You have sensor data that drifts over time. You have customer records with duplicates and inconsistent identifiers. This is the original "garbage in, garbage out" problem. No amount of sophisticated modeling fixes it.

Data integration: Information is siloed across systems. Sales has one view of the customer. Marketing has another. Finance has a third. Your models can't see the full picture because the underlying data systems don't talk to each other. You end up building models that work on historical silos but break when reality requires a unified view.

Data access: Data exists but teams can't get to it. It's locked behind technical barriers that require a data engineer to extract. It's protected by permissions that are so restrictive that people give up trying. It exists in legacy systems that are so difficult to query that the access time exceeds the business value. Data in a vault you can't open is data you don't have.

Data governance: Nobody owns the data. Nobody knows where it comes from or how it's been transformed. Nobody trusts it because there's no accountability. Someone ran a SQL query on it four years ago and got one answer. Someone else ran a different query yesterday and got a different answer. There's no mechanism to reconcile them, no one to call if something is wrong, and no documentation of the assumptions behind the numbers.

Each of these weak links can doom a data initiative. Your job is to identify which one is actually weakest for your specific use case and fix that first. Usually, it's not all of them. Usually, it's one or two. Once you fix those, the next weakest links become visible, and you fix those. But if you try to fix all of them simultaneously, you're back to boiling the ocean.

• • •

Data Governance Without Bureaucracy

Data governance sounds important. It is. Most companies implement it wrong.

A typical data governance program looks like this: Create a data governance committee. Define data governance policies. Establish approval processes for data access, data changes, and data usage. Create compliance requirements. Establish audit trails. Build a governance technology platform to manage all of this. The result is a governance infrastructure that is comprehensive, well-intentioned, and extraordinarily bureaucratic.

Data scientists spend half their time filling out forms requesting access to data. Business analysts wait weeks for approvals. Engineers are blocked waiting for policy clarification. The governance program intended to improve data reliability ends up slowing down the organization and generating resentment. Over time, people work around it — creating shadow data, storing information in personal devices, extracting data outside official channels — which defeats the purpose entirely.

Effective data governance is lightweight and decision-oriented. It's governance in service of decisions, not governance for its own sake. The key principles are these:

First, assign clear ownership. Every critical data element has one owner. Not a committee. One person. That person is responsible for the accuracy of the data, the documentation of the data, and the decision about who can access the data. If something is wrong with the data, you know exactly who to call. If you need the data for a new use case, you know exactly who to ask. Ownership is the antidote to bureaucracy.

Second, define quality standards by use case, not in the abstract. Most data governance policies define data quality in the abstract: 'Data must be accurate and complete.' That's not actionable. Different use cases have different quality requirements. Data for real-time inventory management might need 99.5% accuracy and 15-minute latency. Data for quarterly business reviews might need 95% accuracy and 1-week latency. Data for executive dashboards might need 90% accuracy and 1-month latency. Define what 'good enough' means for each specific use case. Then measure against that standard.

Third, create fast paths for the priority use cases identified in your strategy. If demand sensing is your top priority, make data access and governance for demand sensing signals as frictionless as possible. If supply chain optimization is priority two, do the same. For everything else, standard governance applies. You're not creating different standards. You're creating faster approval and access for the use cases that matter most to your strategy.
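
The second of these principles, defining quality by use case, can be written down as configuration rather than policy prose. The sketch below uses the illustrative thresholds from that principle, with latency expressed in hours; the use case names and figures are assumptions for the example.

```python
# Use-case-specific quality standards rather than one abstract policy.
quality_standards = {
    "real_time_inventory":  {"min_accuracy": 0.995, "max_latency_hours": 0.25},
    "quarterly_reviews":    {"min_accuracy": 0.95,  "max_latency_hours": 168},
    "executive_dashboards": {"min_accuracy": 0.90,  "max_latency_hours": 720},
}

def meets_standard(use_case: str, accuracy: float, latency_hours: float) -> bool:
    """Measure a dataset against the standard for the decision it serves."""
    s = quality_standards[use_case]
    return accuracy >= s["min_accuracy"] and latency_hours <= s["max_latency_hours"]

print(meets_standard("real_time_inventory", accuracy=0.997, latency_hours=0.2))  # True
```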

This approach generates governance that actually works. Clear ownership scales without bureaucracy. Use-case-driven quality standards are measurable and defensible. Fast paths for priority use cases keep the organization moving. And the governance infrastructure itself remains lightweight — the complexity is in the decisions, not in the process.

The metaphor is medicine. You don't give every patient every test. You use clinical judgment to decide which tests matter for this patient's presenting condition. You order those tests quickly and act on the results. Governance should work the same way. It should be proportional to the decision at stake, fast for the decisions that matter, and light-touch for everything else.

• • •

The "Our Data Is Unique" Delusion

Many companies believe their competitive advantage in AI will come from unique data. Sometimes this is true. Sometimes it's a delusion that prevents you from making honest decisions about data strategy.

There are genuine cases where proprietary data is a real moat. Companies with comprehensive transaction histories have data that reflects actual customer behavior at scale. Companies with sensor networks embedded in physical infrastructure have data that competitors can't easily replicate. Companies with decades of customer behavioral data have longitudinal information about preferences and patterns that new entrants can't immediately match. For these companies, data is a genuine source of competitive advantage.

But for many companies, the "our data is unique" belief is more aspiration than reality. Most companies' data is less unique than they think. Everyone in an industry has similar data — sales data, customer data, product data, financial data. The differences are in quality and integration, not in having data competitors don't.

Moreover, even when data is somewhat unique, it's often harder to use than companies hope. Unique data is frequently messy, poorly documented, and locked in legacy systems. The very fact that it's proprietary often means it's not integrated with external data that would make it more valuable. A large trove of unique but poorly integrated data is often less valuable than a small amount of unique data well-integrated with commodity data.

And even when data is unique and valuable, it's less of a moat than it seems. Data moats erode. Competitors build their own data capabilities. Regulatory changes limit what you can do with proprietary data. The more successful you are with your data advantage, the more competitors are incentivized to either replicate your data or obsolete it with different data sources.

This doesn't mean data isn't strategically important. It does mean you should be honest about how unique your data actually is before committing to a data-centric strategy. Ask the hard questions: Do competitors have essentially the same data? If so, the advantage will come from integration and governance, not uniqueness. Is our data genuinely different? If so, is it better integrated than competitors' data? Can we protect it legally or operationally? Or will it be replicated quickly?

The honest assessment often reveals that your advantage won't come from having data competitors don't. It will come from using the data you do have more intelligently, integrating it more effectively, trusting it more, moving faster with it, and building better decision-making systems around it. That's less sexy than having proprietary data. It's also more achievable and more defensible.

• • •

Building Data as a Strategic Asset

For companies where data IS genuinely a source of power, the strategy shifts. You move from defensive (clean up what we have) to generative (deliberately create data advantages).

This involves three practices:

First, design products and processes to generate valuable data as a byproduct. This is different from just collecting more data. Many companies gather large amounts of data that nobody uses. The question is what data would actually be valuable if you had it, and can you design your product or process to generate it? If you operate a marketplace, can you structure transactions to capture preference data? If you operate a logistics network, can you design routes to generate location and traffic data? If you offer a service, can you instrument it to capture usage patterns? The goal is to make data generation a natural output of your business operations, not an ancillary effort.

Second, create feedback loops where AI usage generates data that improves AI performance. This is the power of deployed AI systems. The moment you deploy a recommendation model or a demand forecast, you have new data: What did the customer actually do? What actually happened in the market? Use that data to retrain your models. Build systems that capture the outcomes of AI decisions and feed them back as training data. You start with initial data sufficient for a baseline model. Each deployment generates new data that improves the next iteration. Over time, the models get better, which drives better business outcomes, which generates better training data for the next improvement cycle. A minimal sketch of this loop appears at the end of this section.

Third, build data network effects where more users create more data which creates better AI which attracts more users. This is the dynamic that powers companies like Amazon, Netflix, and Spotify. More users generate more transaction data. Better algorithms operating on that data create better recommendations and better service. Better service attracts more users. The network effect becomes defensible because it's hard to replicate — the data advantage of the incumbent is difficult to overcome.

These practices don't guarantee competitive advantage. But they do create conditions where data becomes increasingly valuable over time and increasingly difficult for competitors to replicate. They shift from a static view of data (we have this data, competitors don't) to a dynamic view (we can generate and use data faster than competitors can).
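
The second of these practices, the feedback loop, is mechanical enough to sketch. The fragment below assumes nothing about your stack; the file names, fields, and join logic are illustrative. What matters is the pattern: log every deployed prediction, capture the observed outcome, and join the two into training pairs for the next model version.

```python
import json
from datetime import datetime, timezone

def log_prediction(decision_id, features, prediction, path="predictions.jsonl"):
    """Record what the model predicted at the moment of the decision."""
    with open(path, "a") as f:
        f.write(json.dumps({"id": decision_id, "features": features,
                            "prediction": prediction,
                            "at": datetime.now(timezone.utc).isoformat()}) + "\n")

def log_outcome(decision_id, observed, path="outcomes.jsonl"):
    """Record what actually happened, once it is known."""
    with open(path, "a") as f:
        f.write(json.dumps({"id": decision_id, "observed": observed}) + "\n")

def build_training_rows(pred_path="predictions.jsonl", out_path="outcomes.jsonl"):
    """Join predictions to observed outcomes; the pairs retrain the next model."""
    with open(out_path) as f:
        outcomes = {r["id"]: r["observed"] for r in map(json.loads, f)}
    with open(pred_path) as f:
        return [(r["features"], outcomes[r["id"]])
                for r in map(json.loads, f) if r["id"] in outcomes]
```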

• • •

Return to the CPG company that started this chapter. After the data swamp disaster, they took a radically different approach. Instead of trying to clean up the swamp, they stepped back and asked: What are the three decisions most critical to our AI strategy? The answer came back: demand sensing (predicting what consumers will buy), promotional response (understanding what promotional tactics work), and retailer collaboration (coordinating inventory between us and our retail partners).

Those three decisions required three data sets: consumer purchase patterns, promotional effectiveness data, and retailer inventory signals. They weren't unique data sets. Competitors had similar data. But they took a minimum viable approach. They identified the specific data needed for each use case. They cleaned and integrated those three data sets only — ruthlessly ignoring everything else. They built governance focused on the quality standards those specific use cases required. They created fast paths for data access.

Within six months, they had deployed their first demand sensing models. Within twelve months, they had improved forecast accuracy by 15%. Within two years, their demand sensing accuracy surpassed that of the larger competitor with its bigger data lake.

The lesson from both versions of the story is the same: data strategy is not about volume. It's not about the biggest lake or the most comprehensive platform. It's about relevance, quality, and connection to the strategic crux. It's about answering a specific decision question with specific data, organized in a way you can trust, accessible to the people who need it.

With diagnosis complete, guiding policy set, and coherent actions in motion, we move to Part IV. The long game. Because the crux you've identified won't stay stable. The market will shift. Your competitive position will change. Competitors will respond. Regulation will evolve. Your job as a leader is to detect those shifts and adapt your guiding policy in response. That requires measurement systems that tell you what's actually working. It requires organizational structures that can evolve. It requires the ability to learn, not just from external data but from your own execution. Part IV addresses that challenge.

 

Part IV: Sustaining Advantage

CHAPTER ELEVEN

Measuring What Matters

• • •

The quarterly AI review had the feel of every other corporate meeting—clean slides, impressive numbers, and the kind of dashboard that looked like it belonged in a Silicon Valley war room. The Chief AI Officer scrolled through the metrics with visible pride: twenty-three machine learning models in production, four point two million API calls per day, ninety-seven point three percent average model accuracy, twelve million dollars in projected annual cost savings. The presentation was polished, the data was real, and the investments in AI infrastructure had clearly borne fruit. Everyone around the conference table nodded appreciatively at the breadth and scale of the technical achievement.

The CEO listened politely through the entire presentation, nodded at the conclusion, and then asked a single question: "Are we harder to beat than we were a year ago?" The room fell silent. Nobody had anticipated the question. The Chief AI Officer looked at the dashboard—the one that tracked everything about their AI operations—and realized it answered almost nothing about strategic advantage. They were measuring activity with impressive precision while remaining almost entirely blind to the only thing that mattered: whether their AI was actually creating competitive advantage. The dashboard was a technical monument to their AI capabilities. The answer to the CEO's question remained unknown.

This scene plays out in countless companies. They build impressive AI systems. They deploy them to production. They monitor them obsessively. And they remain fundamentally uncertain about whether any of it matters for competitive positioning. This is the measurement problem that haunts modern AI strategies: most organizations are excellent at answering whether their AI is working, and that is the wrong question. They measure operational excellence while remaining blind to strategic advantage.

• • •

The Vanity Metrics Epidemic

The measurement frameworks that dominate AI strategy are almost always built by technical teams, and they measure technical things. Model accuracy. Latency. Uptime. Number of models deployed. Processing speed. Cost per prediction. Training time. These are operational metrics—they are designed to answer a single, important question: "Is the AI working?" But they are catastrophically silent on the question that matters: "Is this creating strategic value?"

A metaphor makes the problem vivid: imagine measuring the health of a car engine with meticulous precision. You check the compression ratio, the combustion temperature, the efficiency of the fuel injection system, the responsiveness of the ignition timing. All of these metrics are legitimate. They tell you the engine is functioning properly. But they tell you nothing about the only thing that matters to the driver: whether the car is going in the right direction. An engine can be perfectly optimized and still be attached to a vehicle pointed toward a cliff. The metrics miss the point entirely.

The epidemic is widespread. Walk into quarterly board meetings at technology companies across industries and you will see AI metrics that sound impressive but answer the wrong question. "Our models have improved accuracy by three percent." "We've increased throughput by forty percent." "We've deployed AI to twelve new customer segments." These are all legitimate achievements. None of them answer whether the company's competitive position has improved. The metrics have become vanity metrics—they look good in a presentation, they show that the AI team has been productive, and they are almost entirely disconnected from strategic advantage. A company could have every operational metric trending in the right direction and still be losing competitive ground to a competitor with better judgment about where AI creates leverage.

• • •

What Good AI Measurement Looks Like

Good measurement is anchored not to the technology but to the strategic framework built in the earlier steps of this process. It is measurement that connects directly back to the guiding policy—the fundamental assertion about how your AI changes the basis of competition in your industry. This connection is the difference between measurement that guides strategy and measurement that merely documents operational activities.

Good AI measurement answers three foundational questions. First: Are we making progress on the crux? The crux is the core strategic problem that AI is meant to solve. Without AI, the organization cannot move forward on its strategy. The metrics that matter are those that reveal whether the AI capability is actually unlocking progress on that problem. Second: Are our sources of power getting stronger? Sources of power are the mechanisms through which AI creates competitive advantage—data accumulation, capability building, switching costs, network effects, or economies of scale. Good measurement tracks whether these sources are actually strengthening. Are we accumulating proprietary data faster than competitors? Is our AI improving at a rate that widens our capability gap? Are we creating lock-in effects that make us harder to switch from? Third: Is our competitive position improving? This is the ultimate question. Everything else is derivative. Are we winning market share? Are we attracting better talent? Are we expanding into segments we previously couldn't serve? Are customers spending more with us? These three questions should anchor every metric that matters.

This is not a call to ignore operational metrics. Model accuracy matters. Uptime matters. Latency matters. They are necessary. But they are not sufficient, because they do not connect to strategy. They are the cost of entry to the game, not the measure of whether you're winning. The measurement framework that matters is one that sits atop the operational metrics and asks: given that our AI is functioning well, are we gaining advantage from it?

• • •

The Three Tiers of AI Metrics

A practical framework for thinking about AI measurement consists of three tiers, each answering a progressively more strategic question. These tiers are not separate measurement systems—they are connected layers of a single measurement architecture.

Tier 1: Operational Metrics (Is the AI Working?)

Operational metrics answer the technical question: Is the AI functioning as designed? This tier includes model accuracy, precision, recall, F1 scores, latency, throughput, uptime, error rates, data freshness, and all the other measures that an AI team uses to monitor the health of deployed models. These are necessary metrics. No one should deploy a model with poor accuracy or unacceptable latency. But these metrics are the equivalent of checking vital signs. They tell you the patient is breathing and the heart is beating. They do not tell you whether the patient is healthy or fit.

The risk with operational metrics is that they can be all green while the strategy is failing. A company could have deployed ten machine learning models with ninety-nine percent uptime and excellent accuracy across all of them, and still be losing competitive position because the models were focused on the wrong problems. The operational metrics become a distraction—they look like success while the actual strategy fails quietly.

Tier 2: Impact Metrics (Is the AI Creating Business Value?)

Impact metrics move closer to business outcomes. This tier includes revenue influenced by AI, cost reduction achieved through automation, time savings in key processes, error rate improvements that reduce downstream costs, customer satisfaction scores, and retention metrics. These metrics attempt to connect the AI to actual business value. They are better than operational metrics because they move closer to something that matters beyond the AI team.

But impact metrics still have a critical limitation: they can be positive even when AI is not creating strategic advantage. A company might save five million dollars through AI-driven cost reduction without changing its competitive position. Savings are valuable, but they are not the same as advantage. A company could be extracting value from AI while simultaneously losing competitive ground to a competitor building a more defensible moat. Impact metrics answer whether AI is generating value. They do not answer whether that value is creating competitive distance.

Tier 3: Strategic Metrics (Is the AI Making Us Harder to Beat?)

Strategic metrics are where measurement gets real. This tier answers the question that matters: Is our competitive position improving? Is the gap between us and our competitors widening or narrowing? Strategic metrics include shifts in market share or market position in the segment where AI creates advantage, capability gaps with the closest competitors, data advantage growth (accumulation of proprietary datasets that competitors cannot easily replicate), customer lock-in effects (the cost or difficulty of switching from your AI-enabled solution to a competitor's), and the rate at which your learning curves compound. These are the metrics that connect directly to the guiding policy. They measure whether the sources of power that you identified in Step 2 are actually becoming stronger.

Most companies stop at Tier 1. A smaller number reach Tier 2. Almost none systematically measure Tier 3. This creates a perverse inversion of measurement priority. The metrics that are easiest to measure get the most attention. The metrics that are hardest to measure—and most important strategically—get the least. The strategic metrics are harder to measure because they move slowly, they require judgment about competitive positioning, they cannot be automated into a dashboard, and they require quarterly or annual reviews instead of real-time monitoring. But these are exactly the reasons why they matter. The easy metrics are usually lagging. The hard metrics are leading indicators of whether the strategy is actually working.

• • •

Designing Your AI Scorecard

A practical way to implement the three-tier framework is through an AI scorecard. For each priority initiative that passed the Advantage Test in Chapter 5—each AI capability that is critical to your strategy—you should define one metric from each tier. The scorecard becomes the unified view of whether that AI capability is working operationally, creating business value, and building strategic advantage. The cadence of review matters: Tier 1 metrics should be reviewed monthly or even weekly, Tier 2 metrics quarterly, and Tier 3 metrics annually, with a quarterly review focused on directional progress.

Consider a concrete example: a financial services company has built an AI-driven credit underwriting capability that is central to their strategy. The guiding policy is that AI will allow them to approve creditworthy borrowers that traditional underwriting rules reject, expanding their addressable market and creating a sustainable competitive moat through data accumulation and learning curve effects.

For this capability, the scorecard might look like this: the Tier 1 operational metrics are model accuracy on the holdout test set and model accuracy in production (measured through actual loan performance). The Tier 2 impact metrics are the percentage of loan originations that use the AI-driven underwriting system and the percentage of approved loans that default. The Tier 3 strategic metrics are market share in the target segment (creditworthy borrowers rejected by traditional underwriting), the rate of data accumulation in this segment, and the accuracy gap between the company's model and the best available alternative (the competitor that is closest to matching their model performance). These three tiers of metrics sit together on the scorecard. When you review monthly, you're looking at Tier 1: Is the model performing as expected in production? When you review quarterly, you're looking at Tier 2: Are we originating loans through this capability, and are they performing as expected? When you review annually, you're looking at Tier 3: Are we gaining market share, building proprietary data moats, and widening our capability gap?

This structure transforms measurement from a technical activity into a strategic activity. You are not measuring AI for its own sake. You are measuring whether the AI is executing the strategy. If Tier 1 metrics are green and Tier 2 is declining, you have a problem with adoption or with how the model is being used. If Tier 2 is strong and Tier 3 is flat, you are generating value but not building advantage. If all three tiers are strong, you have evidence that the strategic bet is paying off.
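
One way to keep a scorecard honest is to write it down as a structure rather than a slide. The sketch below encodes the underwriting example in Python; the metric names, targets, and cadences are illustrative assumptions, not the canonical set.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Metric:
    name: str
    tier: int              # 1 = operational, 2 = impact, 3 = strategic
    review_cadence: str    # "monthly", "quarterly", or "annual"
    target: Optional[float] = None
    current: Optional[float] = None

# One scorecard per capability that passed the Advantage Test; names and
# targets here follow the underwriting example and are illustrative.
credit_underwriting_scorecard = [
    Metric("holdout vs. production model accuracy", tier=1,
           review_cadence="monthly", target=0.90),
    Metric("share of originations using AI underwriting", tier=2,
           review_cadence="quarterly", target=0.60),
    Metric("default rate on AI-approved loans", tier=2,
           review_cadence="quarterly", target=0.03),
    Metric("market share in the traditionally rejected segment", tier=3,
           review_cadence="annual"),
    Metric("accuracy gap vs. the closest competitor model", tier=3,
           review_cadence="annual"),
]

def metrics_due(scorecard, cadence: str):
    """Pull the slice of the scorecard that belongs in a given review."""
    return [m for m in scorecard if m.review_cadence == cadence]

print([m.name for m in metrics_due(credit_underwriting_scorecard, "annual")])
```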

• • •

Leading vs. Lagging Indicators

Most AI metrics are lagging indicators. They tell you what already happened. By the time a lagging metric shows that something is wrong, the damage is often done. A company might realize that their model accuracy has declined only after customer complaints spike and revenue begins to drop. A company might discover that data quality has degraded only after looking back and realizing that model performance has been slowly sliding for weeks. Lagging indicators are retrospective. They confirm what you should have known earlier.

Strategic measurement requires leading indicators—signals that predict whether the strategy is working before the results are fully visible. Leading indicators are predictive. They answer the question: Given what I see today, am I on a trajectory toward success? For an AI strategy to work, certain things need to happen before the advantage becomes obvious. Before competitive advantage materializes, you should see signals that the foundations of that advantage are being built.

Consider three types of leading indicators. First, the rate of data accumulation in strategic areas is a leading indicator of compounding advantage. If your guiding policy is that data accumulation creates a moat, then the leading indicator is how quickly you are accumulating proprietary data in your core competitive domain. If you are accumulating data faster than you were last year, and faster than your closest competitors, you should be confident that advantage will compound over time. Second, the speed of model improvement cycles is a leading indicator of capability building. If your strategy depends on progressively building AI capabilities that competitors cannot match, then the leading indicator is how quickly you can move from one model version to the next, and whether each iteration is meaningfully better than the last. A team that improves their model ten percent per quarter is on a faster trajectory than a team improving five percent. Third, employee AI fluency scores are a leading indicator of organizational capability. The ability to sustain AI advantage depends on organizational capability—the ability of teams throughout the company to understand, deploy, and iterate on AI systems. If you measure AI literacy across your organization (through surveys, training completion, or certifications), you are measuring a leading indicator of whether you will be able to execute the strategy at scale.

To design leading indicators for your specific AI strategy, work backward from your Tier 3 strategic metrics. If your strategic goal is to increase market share, what leading indicators would predict that you are on track to achieve it? If your strategic goal is to build a data moat, what leading indicators would show that the foundations of that moat are being laid? The best leading indicators are ones that are measurable today, update frequently enough to inform decisions, and clearly predict the strategic outcome you care about. They should be reviewed monthly or quarterly. By the time your annual Tier 3 review happens, your leading indicators should have already told you whether you are on track.
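
As one concrete example, the first leading indicator above, the rate of proprietary data accumulation, reduces to a few lines of arithmetic. The figures and the competitor estimate in this sketch are invented for illustration; what matters is that the indicator is computed the same way each quarter and compared against an explicit benchmark.

```python
def accumulation_rate(records_by_quarter: list[int]) -> float:
    """Quarter-over-quarter growth in proprietary records, most recent quarter."""
    prev, last = records_by_quarter[-2], records_by_quarter[-1]
    return (last - prev) / prev

our_records = [1_200_000, 1_380_000, 1_660_000, 2_080_000]   # illustrative counts
our_rate = accumulation_rate(our_records)                     # about 0.25 per quarter
estimated_competitor_rate = 0.10                              # an external estimate

on_track = our_rate > estimated_competitor_rate
print(f"Data accumulation growing {our_rate:.0%} per quarter; on track: {on_track}")
```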

• • •

Common Measurement Mistakes

Even with a strong framework, measurement systems fail in predictable ways. Understanding these common mistakes can help you avoid them.

First mistake: measuring what's easy instead of what's important. It is always tempting to measure operational metrics that are readily available—model accuracy, latency, uptime. These are the metrics that teams have instrumentation for, that update automatically, that can be displayed on a dashboard. But they are often the wrong metrics for answering strategic questions. It's easy to measure model accuracy. It's hard to measure whether that accuracy is creating competitive advantage. Too many organizations optimize for the wrong metric simply because it is easy to measure, and that choice cascades through the organization. Engineers are incentivized to improve accuracy even when accuracy is not the constraint on competitive advantage. Resources are allocated to accuracy improvements even when a different capability would create more advantage. All of this happens because of a simple measurement mistake: optimizing for what is easy to measure instead of what matters.

Second mistake: moving the goalposts when targets aren't met. This is a particular risk with leading indicators. When a leading indicator shows early warning signs that the strategy might not work, the temptation is often to redefine the metric or reset expectations. A team might realize halfway through the year that they will not hit their target for data accumulation, and rather than treating this as real information about the strategy, they adjust the target. This destroys the integrity of measurement. If metrics are revised whenever they show results you don't like, they cease to be measures of anything real. They become a political tool for justifying past decisions. The discipline to keep metrics consistent—even when the results are uncomfortable—is essential.

Third mistake: celebrating pilot success without measuring production impact. Pilots are useful for validating that an AI approach is technically feasible. But pilots operate in controlled environments. They have smaller scale, more engaged users, and more oversight than production systems. A pilot can look extraordinarily successful while failing spectacularly when deployed at scale. The measurement mistake is treating the pilot success as evidence of strategic value without measuring what actually happens when the capability goes live. The metric that matters is not pilot accuracy but production accuracy. Not pilot adoption but production adoption. The transition from pilot to production is where many AI initiatives lose credibility and value.

Fourth mistake: ignoring negative results instead of learning from them. When measurements show that an initiative is not delivering the expected impact, the response is often to minimize the findings, reframe the narrative, or simply stop measuring that metric. This throws away the most valuable information. Negative results are more informative than positive results because they reveal where your assumptions were wrong. If you measured that model accuracy was excellent but business impact was weak, this is extremely valuable information—it tells you that accuracy was not the bottleneck, and that you need to look elsewhere for the constraint on value creation. A measurement system that only celebrates success and ignores failure is not providing strategic guidance. It is providing reassurance.

Fifth mistake: measuring AI in isolation instead of measuring the system it's part of. AI is almost never deployed in isolation. It is part of a larger business process or customer experience. Measuring whether the AI model is performing well is different from measuring whether the system that includes the AI is delivering value. An AI model might be ninety-five percent accurate, but if the humans using it act on only half of its recommendations, the combined system captures only about forty-seven and a half percent of the potential value (ninety-five percent accuracy times fifty percent adoption). Or if the model outputs are delivered to end users in a confusing way, adoption might be poor despite high model performance. Measurement has to be integrated with how the AI is actually being used in the real system, not just how the AI is performing in isolation.

Sixth mistake: over-indexing on ROI while ignoring strategic positioning. Quarterly ROI is a useful metric for operational decisions. But it can push organizations toward short-term optimization at the expense of long-term advantage building. A company might choose not to invest in data accumulation that would create a long-term moat because it does not show positive ROI in the current quarter. They might optimize for cost reduction over capability building. They might pursue revenue opportunities that are easy and immediate over opportunities that build a defensible position. Over-indexing on ROI creates a measurement system that optimizes for the visible benefit and ignores the invisible long-term advantage. The discipline to measure both—current value and future positioning—is essential.

• • •

The Quarterly Strategic Review

A measurement system is only useful if it is actually used. The way to make AI measurement matter is to institutionalize it through a specific meeting cadence and format. I recommend a quarterly strategic review—a structured conversation where the leadership team reviews the health and direction of the AI strategy.

This is not a dashboard review. It is not a presentation of operational metrics. The quarterly strategic review is a conversation structured around four questions that matter.

First: What did we learn this quarter about our crux? The crux is the fundamental strategic problem. What progress did we make on the core problem AI is meant to solve? What did we learn about whether our approach to solving the crux is working? What surprised us? What did we get wrong?

Second: Are our proximate objectives being achieved? In Step 3, you defined proximate objectives—the specific, measurable milestones toward solving the crux. Are we on track to hit them? If not, what is the barrier?

Third: Are our strategic metrics trending in the right direction? This is where Tier 3 comes in. Pull out the metrics that measure whether your competitive position is actually improving. Are you building the sources of power you identified? Is the moat getting deeper? Are the learning curves compounding?

Fourth: Has anything changed in the competitive landscape that challenges our guiding policy? Markets change. Competitors move. Technology shifts. Is the fundamental assumption behind your AI strategy—your guiding policy—still valid? Do you need to revisit it?

The discipline of this quarterly conversation is that it is about the strategy, not the technology. It is about whether the big bet is paying off, not whether the model accuracy improved. It is about competitive positioning, not operational metrics. If you find that your quarterly review is spending time on Tier 1 metrics, you have slipped back into a technical review. The quarterly strategic review is for the leadership team. It requires judgment about market positioning. It requires real understanding of the competitive landscape. It requires the kind of thinking that cannot be automated into a dashboard. This is why it happens quarterly rather than continuously—it is meant to be a disciplined conversation, not a surveillance system.

• • •

Measuring What Matters in Practice

Return to the technology company from the opening of this chapter. After the CEO's question—"Are we harder to beat than we were a year ago?"—created visible discomfort, the Chief AI Officer committed to redesigning their measurement approach. They kept the operational dashboard. Tier 1 metrics still mattered. But they added a quarterly strategic review anchored to three Tier 3 metrics: market share in their target segment (the part of the market where their AI created the most advantage), the accuracy gap between their models and the best publicly available alternative (the capability moat), and the rate at which their proprietary training data was growing (compounding advantage).

Within three quarters, these metrics told a clearer story than any operational dashboard. Market share in the target segment had grown from seven percent to eleven percent. The accuracy gap over the best publicly available alternative had widened from two percentage points to four and a half. The rate of proprietary data accumulation had accelerated by thirty percent. These were not spectacular numbers in isolation. But together, they told the story the CEO had been asking about: they were pulling ahead in their core market, the gap was widening, and the foundations of that advantage were getting stronger. That is what measuring what matters looks like.

Measurement is not the end. It is a tool for telling you where you stand and whether you are moving in the right direction. Chapter 12, the final chapter of this book, addresses what to do when the ground shifts beneath you—when measurement reveals that your assumptions no longer hold, or when the competitive landscape moves in unexpected ways. Measurement creates the opening for that conversation. But finding the next crux, the next source of advantage, the next way to stay ahead—that is the work of sustaining advantage over time.

 

CHAPTER TWELVE

The Next Crux

Two years have passed since the boardroom on the forty-second floor first fell quiet with the weight of a critical decision. NovaCom's transformation has delivered beyond what even its optimistic architects imagined. The AI-driven proposal system that once seemed revolutionary now generates a win rate that would have felt impossible to the company's previous leadership. Revenue is up thirty-seven percent. The board is delighted. Wall Street has noticed. And yet, sitting alone in that same boardroom, NovaCom's CEO feels a familiar discomfort creeping back in—the same restlessness that once kept him awake at night, wondering if the company could survive the future.

The feeling has a specific trigger. Three of NovaCom's largest competitors announced last month that they now have AI-powered proposal capabilities. One hired a team of machine learning engineers away from Google. Another licensed a platform from a defense contractor. And just last week, a startup presented at an industry conference offering AI proposal tools as a service—not perfect, but good enough, and available for a fraction of what it cost NovaCom to build its own. The advantage that felt decisive eighteen months ago, the kind of advantage that rewrote competitive dynamics, is starting to erode.

The CEO picks up the phone and calls his Chief Strategy Officer. The conversation is brief.

"I think we need to find the next crux."

It is a sentence that captures everything this book has been building toward.

• • •

The Impermanence of Advantage

No strategic advantage lasts forever. This is Richard Rumelt's concept of dynamics—the waves of change that continuously reshape competitive landscapes, revealing new weaknesses and new opportunities, pushing yesterday's leaders into obscurity and elevating new competitors to dominance. Kodak invented the digital camera and still went bankrupt. Blockbuster had more than nine thousand stores at its peak and still couldn't compete with Netflix. The advantage trap is real: the more successful a company becomes with a particular strategy, the less likely its leaders are to see that the world is changing underneath them.

In AI, the dynamics are particularly fast. What was cutting-edge eighteen months ago is a commodity today. The models improve at a pace that renders previous breakthroughs quaint. The tools proliferate—what required millions in R&D investment can now be licensed, purchased, or hired as a service. The competitors catch up, often faster than anyone expects. The question isn't whether your AI advantage will erode. The question is when. And the real question beneath that one is whether you'll find the next crux before it does.

This is what separates the companies that sustain competitive advantage from those that merely enjoy a brief window of success. The sustainable advantage doesn't come from being first. It comes from understanding that being first is only the beginning of the real work—the work of continuous rediscovery, of asking the hardest diagnostic question not once but again and again, each time from a different vantage point, each time with more knowledge, each time with higher stakes.

• • •

Reading the Dynamics

Rumelt argues that great strategists are great readers of dynamics. They don't just react to change—they see the waves before they break. They understand the underlying currents that will shift competitive position in the months and years ahead. For leaders managing AI transformation, this means developing the ability to systematically observe four distinct types of dynamics:

Technology Dynamics

New models emerge with new capabilities. New architectures make possible what was previously intractable. Multimodal systems create new use cases. Open-source models democratize what once required proprietary investment. The strategic question: Are our AI advantages built on a specific model or architecture that could be obsoleted? Or are we building capabilities that will remain valuable regardless of which underlying models the market settles on?

Market Dynamics

Customer expectations change. What customers will pay for shifts. New business models become viable. Distribution channels evolve. The strategic question: Is the advantage we've built aligned with where the market is going, or are we optimizing for yesterday's customer needs?

Competitive Dynamics

Competitors build capabilities. Startups disrupt from below. Incumbents from adjacent industries move in. The strategic question: How long until the capability we've built is table stakes rather than differentiation?

Regulatory Dynamics

New rules emerge. New constraints reshape what's possible. New compliance requirements create barriers to entry. New rights (privacy, ownership, transparency) rewrite the rules of engagement. The strategic question: Are there regulatory waves we should anticipate, and do they favor our position or threaten it?

The practice of reading dynamics isn't about keeping up with AI news or attending every industry conference. It's about building a structured, quarterly scanning process. Assemble a small team—your strategy lead, your chief data officer, your head of product, one person from marketing, one from sales, one from operations. Spend four hours each quarter asking: What technology waves are we tracking? Which market expectations are shifting? Who is building what, and what does it mean for our advantage? Where is regulation moving? Then: How do these dynamics change our diagnosis? Does our crux still hold, or do we need to find the next one?

• • •

The Re-Diagnosis Cycle

Finding the next crux means returning to Step 1 of the framework—the diagnostic question—but from an entirely different position. The first time your organization asked "Where is the critical challenge?" you were likely starting from a position of AI inexperience, limited data, and nascent capabilities. Now, if you've executed well, you have AI capabilities. You have data assets that didn't exist before. You have organizational learning embedded in people and processes. You have strategic clarity about which problems matter most.

This is your advantage. And now it becomes the foundation for the next diagnosis.

NovaCom's re-diagnosis unfolded over three months. The team started by asking: What has changed in our business and our market since we built the proposal AI? They found something unexpected. Yes, their proposal win rate had improved. But the data generated by the proposal system—which proposal language performed best, which themes resonated with which customer segments, which timing patterns predicted decision velocity—was now their richest source of business intelligence. This data, combined with their market data, could answer questions that had haunted their strategic planning process for years: Which markets and segments will grow fastest? Which early signals are we missing?

The next crux, they realized, wasn't in proposals anymore. That was becoming table stakes—valuable, but not unique. The next crux was in strategic foresight. The company that could predict where the market was moving faster and more accurately than competitors could win not just individual deals but entire market segments. They could allocate sales resources more effectively. They could develop new offerings ahead of demand. They could position for shifts competitors didn't see coming.

This is how the re-diagnosis cycle works. It's not abandoning what you've built. It's recognizing that building something valuable creates new data, new insights, new competitive positions that weren't possible before. The diagnostic question evolves. But the discipline of asking it—of reading the landscape, of naming the critical challenge, of making strategic choices—that discipline becomes a permanent feature of how the organization operates.

• • •

The Adaptive Organization

The companies that sustain AI advantage aren't the ones with the best current AI. They're the ones with the best ability to continuously find and respond to the next crux. This requires building what we might call an adaptive organization—one structured not to execute a single strategy perfectly, but to continuously diagnose, design, and execute new strategies as conditions change.

Three characteristics define the adaptive organization:

Strategic Sensing

The ability to detect changes in technology, market, competition, and regulation early. This means institutionalizing the quarterly scanning process we described above. It means creating channels through which signals from frontline employees, customers, and partners bubble up to decision-makers. It means building a library of leading indicators you track continuously—not just financial metrics, but signals about competitive moves, customer sentiment, regulatory activity, and technology breakthroughs.

Rapid Re-diagnosis

The ability to quickly reassess the crux when conditions change. This doesn't mean flip-flopping on every signal. It means having a process—a cadence, usually quarterly, sometimes more frequent in periods of rapid change—in which leadership explicitly asks: Does our diagnosis still hold? If the diagnosis has changed, what's the new crux? And if the crux has changed, what does that mean for our strategic priorities?

Execution Agility

The ability to reallocate resources and pivot priorities without organizational paralysis. This is the hardest of the three. It requires a culture in which changing course is seen as wisdom rather than wavering. It requires having enough slack in the organization—people and budget that aren't completely consumed by core operations—so that you can redirect resources when the diagnosis changes. It requires leaders who can have difficult conversations about killing or deprioritizing initiatives that no longer serve the crux.

Notice what this adaptive organization is not. It's not about being reactive, constantly shifting with every market tremor. It's not about strategic chaos disguised as agility. It's the opposite: it's about being disciplined—running the strategic framework continuously rather than once. It's about making changes through the same systematic process that got you to your first advantage.

• • •

The 90-Day Starting Playbook

For readers finishing this book and ready to move from understanding to action, here is a concrete 90-day plan. This isn't the whole transformation. It's the first turn of the wheel. If you execute these ninety days with discipline, you'll have a diagnosis, a guiding policy, and your first priority initiative. The framework is designed to be repeated—each cycle builds on the last. By the end of year one, you'll have run the cycle three times, each time going deeper, each time getting closer to the sustained advantage that outlasts market cycles.

Days 1–15: Diagnosis Sprint

Assemble a small team: five to seven people, cross-functional, representing strategy, data, product, operations, and domain expertise. Your goal: conduct the diagnosis process from Step 1. What are the critical challenges facing the business? Where is AI creating new opportunity or threat? Through interviews with frontline employees, customers, and leaders, through analysis of your data, through study of competitive and market dynamics, identify three to five candidate cruxes. By day fifteen, name the crux. Write it down. Present it to the executive team for debate and validation.

Days 16–30: Advantage Assessment

Apply the Advantage Test from Step 2 to every current AI initiative and every proposed one. For each, ask: Does this build competitive advantage in a defensible way? Or is it table stakes? Or is it a distraction? Build your kill list—the initiatives that don't connect to the crux or that work against your coherence. Get leadership alignment that you'll actually kill these things, or at least deprioritize them severely.

Days 31–45: Guiding Policy Draft

Write a one-page guiding policy. It should answer: Given our crux, here's what we'll do. Here's what we won't do. Here's our build/buy/partner strategy for the top three priorities. Here are the constraints we're operating under. A one-page document. Not a hundred-page strategy. One page. Get executive sign-off. This becomes your north star.

Days 46–75: First Proximate Objective

Launch your first Horizon 1 initiative—a quick win that connects directly to the crux. Staff it properly. Give it resources. Clear the obstacles that would slow it down. This isn't just a project; it's a learning engine. By the time you reach day seventy-five, this initiative should be generating early wins, validating the diagnosis, and building organizational momentum around the framework.

Days 76–90: Measure and Learn

Establish the three-tier measurement framework from Step 4. What are you measuring for the first initiative? What are the leading indicators of success? Conduct your first strategic review. Review the diagnosis, the guiding policy, the initiative results. What's working? What's not? What have you learned about the market, your organization, or the crux? Adjust based on what you've learned. This is where discipline meets reality, and discipline wins if you listen carefully to what reality is telling you.

• • •

What This Book Has Argued

From the boardroom where a CEO first acknowledged that transformation was necessary, through the diagnosis that revealed the critical challenge, through the sources of power that could address it, through the guiding policy that kept the response coherent, through the execution that turned strategy into results—this book has made a single, foundational argument:

AI transformation fails when companies treat it as a technology project. It succeeds when leaders treat it as a strategic discipline.

Technology projects have deadlines, budgets, and deliverables. Once deployed, they're done. Strategy is different. It's a continuous practice—diagnosing the landscape, making choices about where to compete and how, allocating resources accordingly, executing with discipline, and then asking the question again, informed by what you've learned.

The five-step framework has provided structure to this discipline. Step 1: Diagnosis. What's the critical challenge? Step 2: Advantage. How can AI create defensible advantage? Step 3: Guiding Policy. What's our coherent choice about how to respond? Step 4: Execution. How do we build capabilities and sustain momentum? Step 5: Adaptation. How do we learn and find the next crux?

The framework is simple. The work is hard. But the path is clear. And organizations that follow it—that ask the hard diagnostic questions, that have the discipline to focus on defensible advantage rather than scattered AI pilots, that coherently design a response, that execute with excellence, that adapt systematically—these organizations will thrive in an AI-powered future while others wonder what went wrong.

• • •

The View from the Forty-Second Floor

The CEO is in the boardroom again. But this time there is no spreadsheet of forty-seven disconnected pilots that couldn't be killed because someone's budget was attached to each one. In its place is a one-page document. Clean. Clear. A guiding policy with three priorities, each connected to the newly diagnosed crux. Next to it is a project plan for the first initiative. Next to that, a measurement framework. On the wall is a chart showing the quarterly scanning process, with the next re-diagnosis scheduled three months out.

The view from the forty-second floor hasn't changed. The city still sprawls below. The competitors still move in their patterns. The market still shifts and flows.

But the way he sees it has transformed. He's no longer looking for the next AI tool to buy, the next vendor to bring in, the next artificial shortcut to strategic clarity. He's looking at the competitive landscape with the clear eyes of someone who knows where the value is and what it takes to build and sustain it. He's asking the question that separates leaders who transform from those who merely tinker:

"What's the next crux—and are we ready?"

That question, asked with discipline and answered with strategic clarity, is the essence of AI transformation made simple.