From AI-Enabled to AI-Centered – Reimagining How Enterprises Operate

Enterprises around the world are racing to deploy generative AI. Yet most remain stuck in the pilot trap: experimenting with copilots and narrow use cases while legacy operating models, data silos, and governance structures stay intact. The results are incremental: efficiency gains without strategic reinvention.

With rapidly developing context-aware AI, we can chart a different course — making AI not an add-on, but the center of how the enterprise thinks, decides, and operates. This shift, captured powerfully in The AI-Centered Enterprise (ACE) by Ram Bala, Natarajan Balasubramanian, and Amit Joshi (IMD), signals the next evolution in business design: from AI-enabled to AI-centered.

The premise is bold. Instead of humans using AI tools to perform discrete tasks, the enterprise itself becomes an intelligent system, continuously sensing context, understanding intent, and orchestrating action through networks of people and AI agents. This is the next-generation operating model for the age of context-aware intelligence, and it will separate tomorrow's leaders from those merely experimenting today.


What an AI-Centered Enterprise Is

At its core, an AI-centered enterprise is built around Context-Aware AI (CAI): systems that understand not only content (what is being said) but also intent (why it is being said). These systems operate across three layers:

  • Interaction layer: where humans and AI collaborate through natural conversation, document exchange, or digital workflow (ACE).
  • Execution layer: where tasks and processes are performed by autonomous or semi-autonomous agents.
  • Governance layer: where policies, accountability, and ethical guardrails are embedded into the AI fabric.

The book introduces the idea of the “unshackled enterprise” — one no longer bound by rigid hierarchies and manual coordination. Instead, work flows dynamically through AI-mediated interactions that connect needs with capabilities across the organization. The result is a company that can learn, decide, and act at digital speed — not by scaling headcount, but by scaling intelligence.

This is a profound departure from current “AI-enabled” organizations, which mostly deploy AI as assistants within traditional structures. In an AI-centered enterprise, AI becomes the organizing principle, the invisible infrastructure that drives how value is created, decisions are made, and work is executed.


How It Differs from Today’s Experiments

Today's enterprise AI landscape is dominated by point pilots and embedded copilots: productivity boosters bolted onto existing processes. They enhance efficiency but rarely transform the logic of value creation.

An AI-centered enterprise, by contrast, rebuilds the transaction system of the organization around intelligence. Key differences include:

  • From tools to infrastructure: AI doesn't just automate isolated tasks; it coordinates entire workflows: from matching expertise to demand, to ensuring compliance, to optimizing outcomes.
  • From structured data to unstructured cognition: Traditional analytics rely on structured databases. AI-centered systems start with unstructured information (emails, documents, chats), extracting relationships and meaning through knowledge graphs and retrieval-augmented reasoning (see the sketch after this list).
  • From pilots to internal marketplaces: Instead of predefined processes, AI mediates dynamic marketplaces where supply and demand for skills, resources, and data meet in real time, guided by the enterprise’s goals and policies.
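
To make the knowledge-graph-and-retrieval bullet concrete, here is a minimal, illustrative sketch in Python. It is not the ACE authors' implementation: the documents, relations, and names are hypothetical, and a real system would use an LLM and a proper graph store rather than a regular expression.

```python
# Illustrative sketch only: extract simple relationships from unstructured
# text into a toy knowledge graph, then retrieve context for a query.
# All documents, relations, and names below are hypothetical.
import re
from collections import defaultdict

documents = [
    "Procurement owns the supplier onboarding process.",
    "The supplier onboarding process depends on the compliance checklist.",
    "Legal maintains the compliance checklist.",
]

# Naive relation extraction; a real system would use an LLM or NLP pipeline.
PATTERN = re.compile(r"^(.+?) (owns|depends on|maintains) (.+?)\.$")

graph = defaultdict(list)  # subject -> [(relation, object), ...]
for doc in documents:
    match = PATTERN.match(doc)
    if match:
        subject, relation, obj = match.groups()
        graph[subject.lower()].append((relation, obj.lower()))

def retrieve_context(query: str, hops: int = 2) -> list:
    """Collect facts reachable from entities mentioned in the query."""
    facts = []
    frontier = [entity for entity in graph if entity in query.lower()]
    for _ in range(hops):
        next_frontier = []
        for entity in frontier:
            for relation, obj in graph.get(entity, []):
                facts.append(f"{entity} --{relation}--> {obj}")
                next_frontier.append(obj)
        frontier = next_frontier
    return facts

# The retrieved facts would be handed to a language model as grounded context.
print(retrieve_context("Who owns procurement's supplier onboarding?"))
```

The point is the pattern, not the code: meaning is extracted once from unstructured sources and then reused as structured context wherever a decision needs it.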

The result is a shift from human-managed bureaucracy to AI-coordinated agility. Decision speed increases, friction falls, and collaboration scales naturally across boundaries.


What It Takes: The Capability and Governance Stack

The authors of The AI-Centered Enterprise propose a pragmatic framework for this transformation, the 3Cs: Calibrate, Clarify, and Channelize.

  1. Calibrate – Understand the types of AI your business requires. What decisions depend on structured vs. unstructured data? What precision or control is needed? This step ensures technology choices fit business context.
  2. Clarify – Map your value creation network: where do decisions happen, and how could context-aware intelligence change them? This phase surfaces where AI can augment, automate, or orchestrate work for tangible impact.
  3. Channelize – Move from experimentation to scaled execution. Build a repeatable path for deployment, governance, and continuous improvement. Focus on high-readiness, high-impact areas first to build credibility and momentum.

Underneath the 3Cs lies a capability stack that blends data engineering, knowledge representation, model orchestration, and responsible governance.

  • Context capture: unify data, documents, and interactions into a living knowledge graph.
  • Agentic orchestration: deploy systems of task, dialogue, and decision agents that coordinate across domains.
  • Policy and observability: embed transparency, traceability, and human oversight into every layer.
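
As a thought experiment for the policy and observability bullet, the sketch below shows one way such a layer could look in code: every agent action passes a policy gate and leaves an audit record. The rule, agent names, and data structures are invented for illustration and are not taken from the book.

```python
# Illustrative only: a policy gate plus an audit trail around agent actions.
# The policy rule, agents, and actions are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, agent: str, action: str, allowed: bool, reason: str) -> None:
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "allowed": allowed,
            "reason": reason,
        })

def policy_check(action: str) -> tuple:
    """Hypothetical policy: anything touching personal data needs a human."""
    if "personal_data" in action:
        return False, "requires human approval"
    return True, "within autonomous mandate"

def governed_execute(agent: str, action: str, log: AuditLog) -> str:
    allowed, reason = policy_check(action)
    log.record(agent, action, allowed, reason)   # every decision is traceable
    if not allowed:
        return f"ESCALATED to human reviewer: {action} ({reason})"
    return f"EXECUTED: {action}"

log = AuditLog()
print(governed_execute("pricing-agent", "update_price_list", log))
print(governed_execute("hr-agent", "export_personal_data", log))
print(len(log.entries), "actions audited")
```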

Organizationally, the AI-centered journey requires anchored agility — a balance between central guardrails (architecture, ethics, security) and federated innovation (business-owned use cases). As with digital transformations before it, success depends as much on leadership and learning as on technology.


Comparative Perspectives — and Where the Field Is Heading

The ideas in The AI-Centered Enterprise align with a broader shift seen across leading research and consulting work: a convergence toward AI as the enterprise operating system.

McKinsey: The Rise of the Agentic Organization

McKinsey describes the next evolution as the agentic enterprise: organizations where humans work alongside fleets of intelligent agents embedded throughout workflows. Early adopters are already redesigning decision rights, funding models, and incentives to harness this new form of distributed intelligence.
Their State of AI 2025 shows that firms capturing the most value have moved beyond pilots to process rewiring and AI governance, embedding AI directly into operations, not as a service layer.

BCG: From Pilots to “Future-Built” Firms

BCG's September 2025 research finds that only about 5% of companies currently realize sustainable AI value at scale. Those that do are “future-built”, treating AI as a capability, not a project. These leaders productize internal platforms, reuse components across business lines, and dedicate investment to AI agents, which BCG estimates already generate 17% of enterprise AI value, projected to reach nearly 30% by 2028.
This mirrors the book’s view of context-aware intelligence and marketplaces as the next sources of competitive advantage.

Harvard Business Review: Strategy and Human-AI Collaboration

HBR provides the strategic frame. In Competing in the Age of AI, Iansiti and Lakhani show how AI removes the traditional constraints of scale, scope, and learning, allowing organizations to grow exponentially without structural drag. Wilson and Daugherty’s Collaborative Intelligence adds the human dimension, redefining roles so that humans shift from operators to orchestrators of intelligent systems.

Convergence – A New Operating System for the Enterprise

Across these perspectives, the trajectory is clear:

  • AI is moving from standalone tools to system-wide coordination capabilities.
  • Work will increasingly flow through context-aware agents that understand intent and execute autonomously.
  • Leadership attention is shifting from proof-of-concept to operating-model redesign: governance, role architecture, and capability building.
  • The competitive gap will widen between firms that use AI to automate tasks and those that rebuild the logic of their enterprise around intelligence.

In short, the AI-centered enterprise is not a future vision — it is the direction of travel for every organization serious about reinvention in the next five years.


The AI-Centered Enterprise – A Refined Summary

The AI-Centered Enterprise (Bala, Balasubramanian & Joshi, 2025) offers one of the clearest playbooks yet for this new organizational architecture. The authors begin by defining the limitations of today's AI adoption — fragmented pilots, a narrow reliance on structured data, and an overreliance on human intermediaries to bridge data, systems, and decisions.

They introduce Context-Aware AI (CAI) as the breakthrough: AI that understands not just information but the intent and context behind it, enabling meaning to flow seamlessly across functions. CAI underpins an “unshackled enterprise,” where collaboration, decision-making, and execution happen fluidly across digital boundaries.

The book outlines three core principles:

  1. Perceive context: Use knowledge graphs and natural language understanding to derive meaning from unstructured information — the true foundation of enterprise knowledge.
  2. Act with intent: Deploy AI agents that can interpret business objectives, not just execute instructions.
  3. Continuously calibrate: Maintain a human-in-the-loop approach to governance, ensuring AI decisions stay aligned with strategy and ethics.

Implementation follows the 3C framework — Calibrate, Clarify, Channelize — enabling leaders to progress from experimentation to embedded capability.

The authors conclude that the real frontier of AI is not smarter tools but smarter enterprises: organizations designed to sense, reason, and act as coherent systems of intelligence.


Closing Reflection

For executives navigating transformation, The AI-Centered Enterprise reframes the challenge. The question is no longer how to deploy AI efficiently, but how to redesign the enterprise so intelligence becomes its organizing logic.

Those who start now, building context-aware foundations, adopting agentic operating models, and redefining how humans and machines collaborate, will not just harness AI. They will become AI-centered enterprises: adaptive, scalable, and truly intelligent by design.

The AI Strategy Imperative: Why Act Now

Two weeks ago, I completed IMD’s AI Strategy & Implementation program. It made the “act now” imperative unmistakable. In this newsletter I share the overarching insights I took away; in upcoming issues I’ll go deeper into specific topics and tools we used.


AI is no longer a tooling choice. It’s a shift in distribution, decision-making, and work design that will create new winners and losers. Leaders who move now—anchoring execution in clear problems, strong data foundations, and human–AI teaming—will compound advantage while others get trapped in pilots and platform dependency.


1) Why act now: the competitive reality

Distribution is changing. AI assistants and agentic workflows increasingly mediate buying journeys. If your brand isn’t represented in answers and automations, you forfeit visibility, traffic, and margin. This is a channel economics shift: AI determines which brands are surfaced—and which are invisible.

Platforms are consolidating power. Hyperscalers are embedding AI across their offerings. You’ll benefit from their acceleration, but your defensibility won’t come from platforms your competitors can also buy. The durable moat is your proprietary data, decision logic, and learning loops you control—not a longer vendor list.

Agents are getting real. Think of agents as “an algorithm that applies algorithms.” They decompose work into steps, call tools/APIs, and complete tasks with minimal supervision. Agent architectures will reshape processes, controls, and talent—pushing leaders to design for human–AI teams rather than bolt‑on copilots.
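
To illustrate the "algorithm that applies algorithms" idea, here is a deliberately simple agent loop in Python. Everything in it is a stand-in: in practice the plan would come from a language model and the tools would be real APIs, so treat it as a sketch of the control flow, not a working agent.

```python
# Sketch of an agent loop: decompose a goal into steps, call tools, chain results.
# The planner and tools are hypothetical stand-ins.

TOOLS = {
    "fetch_order":  lambda order_id: {"order_id": order_id, "status": "delayed"},
    "check_policy": lambda order: "refund" if order["status"] == "delayed" else "no_action",
    "send_update":  lambda decision: f"Customer notified: {decision}",
}

def plan(goal: str) -> list:
    """Stand-in planner: in a real agent an LLM would produce these steps."""
    return [("fetch_order", "A-1042"), ("check_policy", None), ("send_update", None)]

def run_agent(goal: str) -> str:
    result = None
    for tool_name, argument in plan(goal):
        tool = TOOLS[tool_name]
        result = tool(argument if argument is not None else result)  # chain outputs to inputs
        print(f"step: {tool_name} -> {result}")
    return result

run_agent("Resolve the late-delivery complaint for order A-1042")
```

The controls discussed later (guardrails, human escalation, audit trails) wrap around exactly this kind of loop.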


2) The paradox: move fast and build right

The cost of waiting. Competitors pairing people with AI deliver faster at lower cost and start absorbing activities you still outsource. As internal production costs fall faster than coordination costs, vertical integration becomes attractive—accelerated by automation. Late movers face margin pressure and share erosion.

The risk of rushing. Many efforts stall because they "build castles on quicksand"—shiny proofs-of-concept on weak data and process foundations. Value doesn't materialize, trust erodes, and budgets freeze. Urgency must be paired with disciplined follow-up so that speed creates compounding learning.


3) A durable path to value: the 5‑Box Implementation Framework

A simple path from strategy deck to shipped value:

  1. Problem. Define a single business problem tied to P&L or experience outcomes. Write the metric up front; make the use case narrow enough to ship quickly.
  2. Data. Map sources, quality, access, and ownership. Decide what you must own versus can borrow; invest early in clean, governed data because it is the most sustainable differentiator.
  3. Tools. Choose the lightest viable model/agent and the minimum integration needed to achieve the outcome; keep it simple.
  4. People. Form cross‑functional teams (domain expertise + data + engineering + change) with one accountable owner. Team design—not individual heroics—drives performance.
  5. Feedback loops. Instrument production to compare predicted vs. actual outcomes. The delta gives valuable insights and becomes new training data.
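
For box 5, here is a minimal sketch of what "instrument production" can mean in practice: log each prediction next to the observed outcome, measure the delta, and keep the labeled records for retraining. All numbers and field names are invented for illustration.

```python
# Illustrative feedback loop: compare predicted vs. actual, reuse the records.
# All values and field names are hypothetical.
import statistics

production_records = [
    {"features": {"segment": "SMB"}, "predicted": 0.80, "actual": 1.0},
    {"features": {"segment": "ENT"}, "predicted": 0.30, "actual": 0.0},
    {"features": {"segment": "SMB"}, "predicted": 0.60, "actual": 0.0},
]

# The delta shows where the model drifts from reality...
errors = [r["actual"] - r["predicted"] for r in production_records]
print("mean error:", round(statistics.mean(errors), 3))
print("mean absolute error:", round(statistics.mean(abs(e) for e in errors), 3))

# ...and the observed outcomes become labeled data for the next training run.
new_training_data = [(r["features"], r["actual"]) for r in production_records]
print(len(new_training_data), "new labeled examples")
```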

Your defensive moat is data + people + decisions + learning loops, not your vendor list.


4) Moving the Human Workforce to More Complex Tasks

While AI absorbs simple and complicated work (routine tasks, prediction, pattern recognition), the human edge shifts decisively to complex and chaotic problems—where cause and effect are only clear in retrospect or not at all. This economic reality forces immediate investment in people as internal work is increasingly handled by AI–human teams.

The immediate talent pivot. Leaders must signal—and codify—new “complexity competencies”: adaptive problem‑solving, systems thinking, comfort with ambiguity, and AI product‑ownership (defining use cases, data needs, acceptance criteria, and evaluation).

Organizational design for learning.

  • Security: Build psychological safety so smart experiments are rewarded and failures fuel learning, not blame.
  • Convenience: Make adoption of new AI tools easy—frictionless access, clear guidance, and default enablement.
  • Process: A weak human with a tool and a better process will outperform a strong human with a tool and a worse process. Define roles, handoffs, and measurement so teams learn in the loop.

5) Where ROI shows up first

There is a lot of discussion about where AI really shows its benefits; four areas come up consistently in the reporting:

Content. Marketing and knowledge operations see immediate throughput gains and more consistent quality. Treat this as a production system: govern sources, version prompts/flows, and measure impact.

Code. Assistance, testing, and remediation compress cycle time and reduce defects. Success depends on clear guardrails, reproducible evaluation, and tight feedback from production incidents into your patterns.

Customer. Service and sales enablement benefit from faster resolution and personalization at scale. Start with narrow intents, then expand coverage as accuracy and routing improve.

Creative. Design, research, and planning benefit from rapid exploration and option value. Use agentic research assistants with human review to widen the solution space before you converge.


6) Organize to scale without chaos

Govern the reality, not the slide. Shadow AI already exists. Enable it safely with approved toolkits, lightweight guardrails, and clear data rules—so exploration happens inside the tent, not outside it.

CoE vs. federation. Avoid the “cost‑center CoE” trap. Stand up a small enablement core (standards, evaluation, patterns), but push delivery into business‑owned pods that share libraries and reviews. This balances consistency with throughput.

Human + AI teams. Process design beats heroics. Make handoffs explicit, instrument outcomes, and build psychological safety so teams learn in the loop. A weak human with a machine and a better process will outperform a strong human with a machine and a worse process.


What this means for leaders

  • Move talent to handle complexity. Codify new competencies (adaptive problem‑solving, systems thinking, comfort with ambiguity, AI product‑ownership) and design organizational systems that accelerate learning (security, convenience, process).
  • Your moat is data + people + decisions + learning loops. Platforms accelerate you, but they’re available to everyone. Proprietary, well‑governed data feeding instrumented processes is what compounds.
  • Ship value early; strengthen foundations as you scale. Start where ROI is proven (content, code, customer, creative), then use that momentum to fund data quality and governance.
  • Design for agents and teams now. Architect processes assuming agents will do steps of work and humans will supervise, escalate, and improve the system. That’s how you create repeatable outcomes.

Lifelong Learning in the Age of AI – My Playbook

In September 2025, I received two diplomas: IMD's AI Strategy & Implementation and Nyenrode University's Corporate Governance for Supervisory Boards. I am proud of both—more importantly, they cap off a period in which I have deliberately rebuilt how I learn.

With AI accelerating change and putting top-tier knowledge at everyone’s fingertips, the edge goes to leaders who learn—and apply—faster than the market moves. In this issue I am not writing theory; I am sharing my learning journey of the past six months—what I did, what worked, and the routine I will keep using. If you are a leader, I hope this helps you design a learning system that fits a busy executive life.


My Learning System – 3 pillars

1) Structured learning

This helped me to gain the required depth:

  • IMD — AI Strategy & Implementation. I connected strategy to execution: where AI creates value across the business, and how to move from pilots to scaled outcomes. In upcoming newsletters, I will share insights on specific topics we went deep on in this course.
  • Nyenrode — Corporate Governance for Supervisory Boards. I deepened my view on board-level oversight—roles and duties, risk/compliance, performance monitoring, and strategic oversight. I authored my final paper on how to close the digital gap in supervisory boards (see also my earlier article).
  • Google/Kaggle's 5-day Generative AI Intensive. Hands-on labs demystified how large language models work: what is under the hood, why prompt quality matters, where workflows can break, and how to evaluate outputs against business goals. It gave me an understanding of how to improve my use of these models.

2) Curated sources

This extended the breadth of my understanding of the use of AI.

2a. Books

Below I give a few examples; more book summaries and reviews can be found on my website: www.bestofdigitaltransformation.com/digital-ai-insights.

  • Co-Intelligence: a pragmatic mindset for working with AI—experiment, reflect, iterate.
  • Human + Machine: how to redesign processes around human–AI teaming rather than bolt AI onto old workflows.
  • The AI-Savvy Leader: what executives need to know to steer outcomes without needing to code.

2b. Research & articles
I built a personal information base with research from: HBR, MIT, IMD, Gartner, plus selected pieces from McKinsey, BCG, Strategy&, Deloitte, and EY. This keeps me grounded in capability shifts, operating-model implications, and the evolving landscape.

2c. Podcasts & newsletters
Two that stuck: AI Daily Brief and Everyday AI. Short, practical audio overviews with companion newsletters so I can find and revisit sources. They give me a quick daily pulse without drowning in feeds.

3) AI as my tutor

I am using AI to get personalized learning support.

3a. Explain concepts. I use AI to clarify ideas, contrast approaches, and test solutions using examples from my context.
3b. Create learning plans. I ask for step-by-step learning journeys with milestones and practice tasks tailored to current projects.
3c. Drive my understanding. I use different models to create learning content, provide assignments, and quiz me on my understanding.


How my journey unfolded

Here is how it played out.

1) Started experimenting with ChatGPT.
I was not an early adopter; I joined when GPT-4 was already strong. Like many, I did not fully trust it at first. I began with simple questions and asked the model to show how it interpreted my prompts. That built confidence without creating risks/frustration.

2) Built foundations with books.
I read books like Co-Intelligence, Human + Machine, and The AI-Savvy Leader. These created a common understanding of where AI helps (and does not), how to pair humans and machines, and how to organize for impact. For each book I wrote a review to anchor my learnings and share them on my website.

3) Added research and articles.
I set up a repository with research across HBR/MIT/IMD/Gartner and selected consulting research. This kept me anchored in evidence and applications, and helped me track the operational implications for strategy, data, and governance.

4) Tried additional models (Gemini and Claude).
Rather than picking a “winner,” I used them side by side on real tasks. The value was in contrast—seeing how different models frame the same question, then improving the final answer by combining perspectives. Letting models critique each other surfaced blind spots.

5) Went deep with Google + Kaggle.
The 5-day intensive course clarified what is under the hood: tokens/vectors, why prompts behave the way they do, where workflows tend to break, and how to evaluate outputs beyond “sounds plausible.” The exercises translated directly into better prompt design and started my understanding of how agents work.

6) Used NotebookLM for focused learning.
For my Nyenrode paper, I uploaded the key articles and interacted only with that corpus. NotebookLM generated grounded summaries, surfaced insights I might have missed, and reduced the risk of invented citations (by sticking to the uploaded resources). The auto-generated “podcast” is one of the coolest features I experienced and really helps to learn about the content.

7) Added daily podcasts/newsletters to stay current.
The news volume on AI is impossible to track end-to-end. AI Daily Brief and Everyday AI give me a quick scan each morning and links worth saving for later deep dives. This makes the difference between staying aware and constantly feeling behind.

8) Learned new tools and patterns at IMD.

  • DeepSeek helped me debug complex requests by showing how the model with reasoning interpreted my prompt—a fantastic way to unravel complex problems.
  • Agentic models like Manus showed the next step: chaining actions and tools to complete tasks end-to-end.
  • CustomGPTs (within today’s LLMs) let me encode my context, tone, and recurring workflows, boosting consistency and speed across repeated tasks.

Bringing it together with a realistic cadence

Leaders do not need another to-do list; they need a routine that works. Here is the rhythm I am using now:

Daily

  • Skim one high-signal newsletter or listen to a podcast.
  • Capture questions to explore later.
  • Learn by doing with the various tools.

Weekly

  • Learn: read one or more papers/articles on various AI-related topics.
  • Apply: use one idea on a live problem; interact with AI to go deeper.
  • Share: create my weekly newsletter based on my learnings.

Monthly

  • Pick one learning topic and read a number of primary sources, not just summaries.
  • Draft an experiment with goal, scope, success metric, risks, and data needs, using AI to pressure-test assumptions.
  • Review with thought leaders or colleagues for challenge and alignment.

Quarterly

  • Read at least one book that expands my mental models.
  • Create a summary for my network. Teaching others cements my own understanding.

(Semi-)Annually

  • Add a structured program or certificate to go deep and to benefit from peer debate.

Closing

The AI era compresses the shelf life of knowledge. Waiting for a single course is no longer enough. What works is a learning system: structured learning for depth, curated sources for breadth, and AI as your tutor for speed. That has been my last six months, and it is a routine I will continue.

GAINing Clarity – Demystifying and Implementing GenAI

Herewith my final summer reading book review as part of my newsletter series.
GAIN – Demystifying GenAI for Office and Home by Michael Wade and Amit Joshi offers clarity in a world filled with AI hype. Written by two respected IMD professors, this book is an accessible, structured, and balanced guide to Generative AI (GenAI), designed for a broad audience—executives, professionals, and curious individuals alike.

What makes GAIN especially valuable for leaders is its practical approach. It focuses on GenAI’s real-world relevance: what it is, what it can do, where it can go wrong, and how individuals and organizations can integrate it effectively into daily workflows and long-term strategies.

What’s especially nice is that Michael and Amit have invited several other thought and business leaders to contribute their perspectives and examples to the framework provided. (I especially liked the contribution of Didier Bonnet.)

The GAIN Framework

The book is structured into eight chapters, each forming a step in a logical journey—from understanding GenAI to preparing for its future impact. Below is a summary of each chapter’s key concepts.


Chapter 1 – EXPLAIN: What Makes GenAI Different

This chapter distinguishes GenAI from earlier AI and digital innovations. It highlights GenAI’s ability to generate original content, respond to natural-language prompts, and adapt across tasks with minimal input. Key concepts include zero-shot learning, democratized content creation, and rapid adoption. The authors stress that misunderstanding GenAI’s unique characteristics can undermine effective leadership and strategy.


Chapter 2 – OBTAIN: Unlocking GenAI Value

Wade and Joshi explore how GenAI delivers value at individual, organizational, and societal levels. It’s accessible and doesn’t require deep technical expertise to drive impact. The chapter emphasizes GenAI’s role in boosting productivity, enhancing creativity, and aiding decision-making—especially in domains like marketing, HR, and education—framing it as a powerful augmentation tool.


Chapter 3 – DERAIL: Navigating GenAI’s Risks

This chapter outlines key GenAI risks: hallucinations, privacy breaches, IP misuse, and embedded bias. The authors warn that GenAI systems are inherently probabilistic, and that outputs must be questioned and validated. They introduce the concept of “failure by design,” reminding readers that creativity and unpredictability often go hand in hand.


Chapter 4 – PREVAIL: Creating a Responsible AI Environment

Here, the focus turns to managing risks through responsible use. The authors advocate for transparency, human oversight, and well-structured usage policies. By embedding ethics and review mechanisms into workflows, organizations can scale GenAI while minimizing harm. Ultimately, it’s how GenAI is used—not just the tech itself—that defines its impact.


Chapter 5 – ATTAIN: Scaling with Anchored Agility

This chapter presents “anchored agility” as a strategy to scale GenAI responsibly. It encourages experimentation, but within a framework of clear KPIs and light-touch governance. The authors promote an adaptive, cross-functional approach where teams are empowered, and successful pilots evolve into embedded capabilities.

One of the most actionable frameworks in GAIN is the Digital and AI Transformation Journey, which outlines how organizations typically mature in their use of GenAI:

  • Silo – Individual experimentation, no shared visibility or coordination.
  • Chaos – Widespread, unregulated use. High potential but rising risk.
  • Bureaucracy – Management clamps down. Risk is reduced, but innovation stalls.
  • Anchored Agility – The desired state: innovation at scale, supported by light governance, shared learning, and role clarity.

This model is especially relevant for transformation leaders. It mirrors the organizational reality many face—not only with AI, but with broader digital initiatives. It gives leaders a language to assess their current state and a vision for where to evolve.


Chapter 6 – CONTAIN: Designing for Trust and Capability

Focusing on organizational readiness, this chapter explores structures like AI boards and CoEs. It also addresses workforce trust, re-skilling, and role evolution. Rather than replacing jobs, GenAI changes how work gets done—requiring new hybrid roles and cultural adaptation. Containment is about enabling growth, not restricting it.


Chapter 7 – MAINTAIN: Ensuring Adaptability Over Time

GenAI adoption is not static. This chapter emphasizes the need for feedback loops, continuous learning, and responsive processes. Maintenance involves both technical tasks—like tuning models—and organizational updates to governance and team roles. The authors frame GenAI maturity as an ongoing journey.


Chapter 8 – AWAIT: Preparing for the Future

The book closes with a pragmatic look ahead. It touches on near-term shifts like emerging GenAI roles, evolving regulations, and tool commoditization. Rather than speculate, the authors urge leaders to stay informed and ready to adapt, fostering a posture of informed anticipation: not reactive panic, but intentional readiness. As the GenAI field evolves, so must its players.


What GAIN Teaches Us About Digital Transformation

Beyond the specifics of GenAI, GAIN offers broader lessons that are directly applicable to digital transformation initiatives:

  • Start with shared understanding. Whether you’re launching a transformation program or exploring AI pilots, alignment starts with clarity.
  • Balance risk with opportunity. The GAIN framework models a mature transformation mindset—one that embraces experimentation while putting safeguards in place.
  • Transformation is everyone’s job. GenAI success is not limited to IT or data teams. From HR to marketing to the executive suite, value creation is cross-functional.
  • Governance must be adaptive. Rather than rigid control structures, “anchored agility” provides a model for iterative scaling—one that balances speed with oversight.
  • Keep learning. Like any transformation journey, GenAI is not linear. Feedback loops, upskilling, and cultural evolution are essential to sustaining momentum.

In short, GAIN helps us navigate the now, while preparing for what’s next. For leaders navigating digital and AI transformation, it’s a practical compass in a noisy, fast-moving world.

Amplifying the Human Advantage over AI – Lessons from Pascal Bornet’s Irreplaceable

For this holiday season, I had Pascal Bornet's book Irreplaceable: The Art of Standing Out in the Age of Artificial Intelligence at the top of my reading list. His work delivers a clear and timely message: the more digital the world becomes, the more essential our humanity is.

For executives and transformation leaders navigating the impact of AI, Bornet provides a pragmatic and optimistic blueprint. This article summarizes the core insights of Irreplaceable, explores its implications for digital transformation, and offers a practical lens for application (see the Insight notes below).


AI as Enabler, Not Replacer

Bornet challenges the zero-sum narrative of “AI vs. Humans.” Instead, he positions AI as an enabler: capable of handling repetitive, structured tasks, it liberates humans to focus on what machines can’t do—leading, empathizing, creating, and judging. AI, in this view, is not the destination but the vehicle to a more human future.

Insight: Use AI to augment human roles—especially in decision-making, customer experience, and creative problem-solving—rather than replacing them.


The “Humics”: Redefining the Human Advantage

At the heart of Irreplaceable lies the concept of Humics: the uniquely human capabilities that define our irreplaceability in an AI-powered world. Bornet identifies several:

  • Genuine Creativity – The capacity to generate novel ideas and innovations by drawing on intuition, imagination, and deeply personal lived experiences that machines cannot emulate.
  • Critical Thinking – The ability to evaluate information critically, reason ethically, and make contextualized decisions that reflect both logic and values.
  • Emotional Intelligence – A complex combination of self-awareness, empathy, and the ability to manage interpersonal relationships and influence with authenticity.
  • Adaptability & Resilience – The readiness to embrace change, learn continuously, and maintain performance under stress and uncertainty.
  • Social Authenticity – The human ability to create trust and meaning in relationships through transparency, shared values, and emotional connection.

Insight: Elevate Humics from soft skills to strategic assets. Build them into hiring, training, and leadership development.


The IRREPLACEABLE Framework: Three Competencies for the Future

Bornet proposes a universal framework built on three future-facing competencies:

  • AI-Ready: Develop the ability to understand and leverage AI technologies by becoming fluent in their capabilities, applications, and ethical boundaries. This involves not just using AI tools, but knowing when and how to apply them effectively.
  • Human-Ready: Focus on strengthening Humics—the inherently human skills like empathy, critical thinking, and creativity—that make people indispensable in roles where AI falls short.
  • Change-Ready: Build resilience and adaptability by fostering a growth mindset, embracing continuous learning, and staying flexible in the face of constant technological and organizational change.

Insight: These competencies should be embedded into your workforce strategies, talent models, and cultural transformation agenda.


Human-AI Synergy: The New Collaboration Model

Bornet advocates for symbiotic teams where AI and humans complement each other. Rather than compete, the two work in tandem to drive better outcomes.

  • AI delivers scale, speed, and precision.
  • Humans provide context, ethics, judgment, and empathy.

Insight: Use this pairing in high-impact roles like diagnostics, content creation, customer service, and product design.


Avoiding “AI Obesity”: The Risk of Over-Automation

Bornet warns against AI Obesity: a condition where organizations over-rely on AI, leading people to lose touch with essential human skills like critical thinking, empathy, and creativity. The solution? Regularly exercise our Humics and ensure humans remain in the loop, especially where oversight, ethics, or trust are required.

Insight: Define clear roles for human oversight, especially in ethical decisions, people management and policy enforcement.


Real-World Application: Individuals, Parents, and Businesses

Bornet offers tailored strategies for:

  • Individuals: Blend digital fluency with human depth to future-proof your career. Learn how to partner with AI tools to enhance your strengths, stay adaptable, and lead with human judgment in an increasingly automated environment.
  • Parents & Educators: Teach kids curiosity, resilience, and emotional intelligence alongside digital skills. Equip the next generation not only to use technology responsibly but also to cultivate the uniquely human traits that will help them thrive in any future scenario.
  • Businesses: Redesign roles and culture to embed AI-human collaboration, with trust and values at the core. Shift from a purely efficiency-driven mindset to one that sees AI as a co-pilot, empowering employees to do more meaningful, value-adding work.

Note: This is not just about new tools; it’s about new mindsets and behaviors across the organization.


Implications for Digital Transformation Leaders

Irreplaceable aligns seamlessly with modern transformation priorities:

  • Technology as Amplifier: Deploy AI to expand human capabilities, not to replace them.
  • Human-Centric KPIs: Add creativity, employee experience, and trust metrics to your dashboards.
  • Purpose-Driven Change: Frame digital transformation as an opportunity to become more human, not less.

How to Apply This in Practice

Start with a diagnostic: Where is human judgment undervalued in your current operating model? Then:

  1. Redesign roles with AI + Human pairings
  2. Invest in Humics through people development and learning journeys
  3. Update metrics to track human and AI impact
  4. Communicate the purpose: Align AI initiatives with a human-centered narrative

Conclusion

Pascal Bornet’s Irreplaceable offers more than optimism. It provides a strategic lens to ensure your organization thrives in the AI age—by amplifying what makes us human. For digital and transformation leaders, the message is clear: being more human is your greatest competitive advantage.

For more information you can check out: Become IRREPLACEABLE and unlock your true potential in the age of AI

When Good Intentions Fail – Why Effective Governance Is the Fix

While many organizations focus on technology, data, and capabilities, it’s the governance structures that align strategy with execution, enable informed decision-making, and ensure accountability. Without effective governance, even the most promising digital or AI initiatives risk becoming fragmented, misaligned, or unsustainable.

This article explores how governance typically evolves during transformation, drawing on a framework presented in GAIN by Michael Wade and Amit Joshi (2025). It then outlines best practices and tools for establishing effective governance at every level of transformation—portfolio, program, and project.

The Governance Journey: From Silo to Anchored Agility
Wade and Joshi identify four phases in the evolution of transformation governance:

  • Silo: In this early phase, digital and AI initiatives are isolated within departments. There is little coordination across the organization, leading to duplicated efforts and fragmented progress.
  • Chaos: Reacting to the issues of the siloed approach, companies often start putting governance in place—but not very effectively, leading to a proliferation of processes, tools, and platforms.
  • Bureaucracy: In response to chaos, organizations implement formal governance structures. While this reduces risk and increases control, it can also stifle innovation through over-regulation and sluggish decision-making.
  • Anchored Agility: The desired end-state. Governance becomes a strategic enabler—embedded yet flexible. It ensures alignment and control without constraining innovation. Decision-making is delegated appropriately, while strategic oversight is maintained.

Most organizations go through this journey; understanding where your organization stands helps determine what actions are needed and what to improve.

Effective Governance: Moving from Bureaucracy to Anchored Agility
Most successful digital and AI transformations mature into the Bureaucracy and Anchored Agility phases. These are the phases where effective governance must strike a balance between structure and adaptability.

Two proven approaches—PMI and Agile—offer best practices to draw from:

PMI Governance Best Practices

  • Well-defined roles and responsibilities across governance layers
  • Program and project charters to formalize scope, authority, and accountability
  • Clear stage gates, with decision points tied to strategic goals
  • Risk, issue, and change control mechanisms
  • Standard reporting templates to ensure transparency and comparability

PMI’s approach works best in large, complex transformations that require strong coordination, predictable delivery, and control of interdependencies.

Agile Governance Principles

  • Empowered teams with clear decision rights
  • Frequent review cadences (e.g., sprint reviews, retrospectives, and PI planning)
  • Lightweight governance bodies focused on alignment, not control
  • Transparent backlogs and prioritization frameworks
  • Adaptability built into the governance process itself

Agile governance is ideal for fast-evolving digital or AI initiatives where experimentation, speed, and responsiveness are critical.

Moving from Bureaucracy to Anchored Agility does not mean abandoning PMI in favor of Agile governance principles alone. Your portfolio will likely contain a mix of initiatives that leverage one or both approaches.

Governance Across Levels: Portfolio, Program, Project
A layered governance model helps ensure alignment from strategy to execution:

Portfolio Level

  • Purpose: Strategic alignment, investment decisions, and value realization
  • Key Bodies: Executive Steering Committees, Digital/AI Portfolio Boards
  • Focus Areas: Prioritization, funding, overall risk and performance tracking

Program Level

  • Purpose: Coordinating multiple related projects and initiatives
  • Key Bodies: Program Boards or Program Management Offices
  • Focus Areas: Interdependencies, resource allocation, milestone tracking, issue resolution

Project Level

  • Purpose: Delivering tangible outcomes on time and on budget
  • Key Bodies: Project SteerCos, Agile team ceremonies
  • Focus Areas: Daily execution, scope management, risk and issue tracking, delivery cadence

Connecting the Layers: How Governance Interacts and Cascades
Effective governance requires more than clearly defined levels—it demands a dynamic flow of information and accountability across these layers. Strategic priorities must be translated into executable actions, while insights from execution must feed back into strategic oversight.

  • Top-down alignment: Portfolio governance sets strategic objectives, funding allocations, and key performance indicators. These are cascaded to programs and projects through charters, planning sessions, and KPIs.
  • Bottom-up reporting: Project teams surface risks, status updates, and learnings which are aggregated at the program level and escalated to the portfolio when needed.
  • Horizontal coordination: Programs often interact and depend on each other. Governance forums at program level and joint planning sessions across programs help manage these interdependencies.
  • Decision and escalation pathways: Clear routes for issue resolution and decision-making prevent bottlenecks and ensure agility across layers.

Organizations that master this governance flow operate with greater transparency, speed, and alignment.

Tools and Enablers for Good Governance
Governance is not just about structure—it’s also about enabling practices and tools that make oversight effective and efficient:

  • Terms of Reference (ToR): Define the mandate, decision rights, and meeting cadence for each governance body.
  • Collaboration & Transparency Tools: Use platforms like Asana, Confluence, Jira, and MS Teams for sharing updates, tracking decisions, and managing workflows.
  • Standardized Reporting: Leverage consistent templates for status, risks, and KPIs to create transparency and drive focus.
  • RACI Matrices: Clarify roles and decision-making authority across stakeholders, especially in cross-functional setups.
  • Governance Calendars: Synchronize key reviews, steerco meetings, and strategic checkpoints across layers.

Lessons from the Field
From my experience, common governance pitfalls include over-engineering (which stifles agility), under-resourcing (especially at the program level), and slow or unclear decision-making. Successful governance relies on:

  • Aligned executive sponsorship
  • Clear ownership at all levels
  • Integration of risk, value, and resource management
  • Enabling people to act

Conclusion
In digital and AI transformation, effective governance is not about control—it’s about enablement. It provides the structure and transparency needed to drive transformation, align stakeholders, and scale success. As your organization moves toward Anchored Agility, governance becomes less of a bottleneck and more of a backbone.

Where is your organization on the governance journey—and what would it take to reach the next phase?

Why Centres of Excellence Are the Backbone of Sustainable Transformation

Real-world lessons from building CoEs across domains

In every transformation I’ve led—whether in supply chain, commercial, innovation, or enabling functions—one thing has remained constant: transformation only sticks when it becomes part of the organizational DNA. That’s where Centres of Excellence (CoEs) come in.

Over the years, I’ve built and led CoEs across foundational disciplines, transformation approaches, and specific capabilities. When set up well, they become more than just support groups—they build skill, drive continuous improvement, and scale success.

This newsletter shares how I’ve approached CoEs in three distinct forms, and what I’ve learned about setting them up for lasting impact.


What is a CoE? (From Theory to Practice)

In theory, a CoE is a group of people with expertise in a specific area, brought together to drive consistency, capability, and performance. In practice, I’ve seen them evolve into vibrant communities of practitioners where people connect to:

  • Share business challenges and solutions
  • Scale learnings and continuously evolve best practices
  • Facilitate exchange between experts and users
  • Build a knowledge base and provide education

The most successful CoEs I’ve led were about enabling people to learn from each other, work smarter, and operate more consistently.


Three Types of CoEs I've Built and Led

1. Foundational CoEs – Building Core Capabilities

These are the bedrock. Without them, transformation initiatives often lack structure and miss out on leveraging proven approaches. Examples from my experience include:

  • Program & Project Management CoE
    Built on PMI (PMBoK) and Prince2 standards, this CoE offered training, templates, mentoring, and coaching. It became the go-to place for planning and executing complex programs and projects.
  • Process Management CoE
    Using industry frameworks (e.g., APQC), platforms (ARIS, Signavio), and process mining tools (Celonis, UiPath, Signavio), this CoE helped standardize processes and enabled teams to speak a shared process language and identify improvement opportunities through data.
  • Change Management CoE
    Drawing from Kotter’s principles and other industry best practices, we developed a change playbook and toolkit. This CoE played a critical role in stakeholder alignment and adoption across transformation efforts.
  • Performance Management CoE
    Perhaps less commonly named, but highly impactful. We developed strategy-linked KPI frameworks and supported teams in embedding performance reviews into regular business rhythms.
  • Emerging: AI Enablement CoE
    Looking ahead, I believe the next foundational capability for many organizations will be the smart and responsible use of AI. I’ve begun shaping my thinking around how a CoE can support this journey—governance, tooling, education, and internal use case sharing.

2. Transformation-Focused CoEs – Orchestrating Change Across the Enterprise

Unlike foundational CoEs, these focus on embedding transformation methodologies and driving continuous improvement across functions. In my experience, they’re essential for changing both mindsets and behaviors.

  • Continuous Improvement | Lean CoE
    Anchored in Toyota’s principles, our Lean CoE supported everything from strategic Hoshin Kanri deployment to local Kaizens. It equipped teams with the tools and mindset to solve problems systemically, and offered structured learning paths for Lean certification.
  • Agile CoE
    Created during our shift from traditional project models to Agile, this CoE helped scale Agile practices—first within IT, then into business areas like marketing and product development.
  • End-to-End Transformation CoE
    One of the most impactful setups I was part of. At Philips, in collaboration with McKinsey, we created a CoE to lead 6–9 month E2E value stream transformations. It brought together Lean, Agile, and advanced analytics in a structured, cross-functional method.

3. Capability & Process CoEs – Scaling New Ways of Working

These CoEs are typically created during the scaling phase of transformation to sustain newly introduced systems and processes.

  • Supply Chain CoEs
    I’ve helped build several, covering Integrated Planning, Procurement (e.g., SRM using Coupa/Ariba), and Manufacturing Execution Systems (e.g., SAP ME). These CoEs ensured continuity and ownership post-rollout.
  • Innovation CoE
    Focused on design thinking, ideation frameworks, and Product Lifecycle Management (e.g., Windchill). It enabled structured creativity, process adoption, and skill development.
  • Commercial CoEs
    Anchored new ways of working in e-Commerce, CRM (e.g., Salesforce), and commercial AI tools—helping frontline teams continuously evolve their practices.
  • Finance CoEs
    Supported ERP deployment and harmonized finance processes across regions and business units. These CoEs were key in driving standardization, transparency, and scalability.

Lessons Learned – How to Build an Effective CoE

Having built CoEs in global organizations, here’s what I’ve found to be essential:

  • Start with a Clear Purpose
    Don’t set up a CoE just because it sounds good. Be explicit about what the CoE is solving or enabling. Clarify scope—and just as importantly, what it doesn’t cover (e.g., handling IT tickets).
  • Design the Right Engagement Model
    Successful CoEs balance push (structured knowledge and solutions) with pull (responsiveness to business needs). Two-way communication is critical.
  • Build the Community
    Experts are crucial, but practitioners keep the CoE alive. Foster interaction, feedback, and peer-to-peer learning—not just top-down communication.
  • Leverage the Right Tools
    Teams, SharePoint, Slack, Yammer, newsletters, and webcasts all support collaboration. Establish clear principles for how these tools are used.
  • Measure What Matters
    Track adoption, usage, and impact—not just activity. Set CoE-specific KPIs and regularly celebrate visible value creation.

Closing Thought

CoEs aren’t a magic fix—but they are one of the most effective ways I’ve found to institutionalize change. They help scale capabilities, sustain momentum, and embed transformation into the organization’s ways of working.

If you’re designing or refreshing your CoE strategy, I hope these reflections spark new ideas. I’m always open to exchanging thoughts.

Why Systems Thinking Is Crucial in Designing Digital Transformations

Digital transformations are hard. Despite bold ambitions, most still fall short—projects stall, teams get overwhelmed, and new technologies fail to deliver lasting value.

One major reason is that many organizations approach transformation in a fragmented, linear way—missing the underlying complexity of the systems they’re trying to change.

Systems thinking offers a powerful alternative. It equips leaders to design transformations that are more coherent, more efficient, and more likely to succeed. The approach helps teams see and communicate the bigger picture, anticipate ripple effects, connect the dots across silos, and design smarter interventions that actually stick. This increases the odds that transformation efforts won't just launch; they'll land, scale, and sustain real impact.


What Is Systems Thinking?

Think of a transformation not as a set of initiatives, but as a connected system. Systems thinking helps you:

  • Spot how parts are connected—across tech, people, data, and processes.
  • Understand ripple effects—how changes in one area can help or hurt others.
  • Design smarter interventions—by targeting the right pressure points, not just the most obvious problem.

Example: Adding AI to customer support won’t drive impact unless you also rethink workflows, retrain staff, align incentives, and adjust how performance is measured. Systems thinking shows you the whole picture.


When to apply Systems Thinking?

Most transformations are cut into initiatives and get locked into “project mode” too early—jumping to solutions before fully understanding the system they aim to change.

That’s why systems thinking is most valuable during the Design and Scoping phase—from shaping strategy to turning it into an actionable plan. It helps:

  • Identify where the real bottlenecks are—not just symptoms.
  • Avoid siloed planning.
  • Create a roadmap that aligns required resources, impact, and ownership.

In the transformations where I was involved in the Design and Scoping phase, I always used four angles (People, Process, Data, and IT) to look at the system. Combined with a mindset of thinking each topic through end to end, we looked across the silos and approached the transformation holistically.

In several of the transformations we leveraged experts in this field (like McKinsey, BCG, Accenture, and Deloitte), and publications from leading institutions served as inspiration. Below is a selection of what they write about the topic.


What Thought Leaders Say About Applying Systems Thinking in Digital Transformation

MIT Sloan highlights that true digital transformation requires more than upgrading technology. Success depends on aligning tech, data, talent, and leadership—treating them as parts of one evolving system. Their research urges leaders to think integratively, not incrementally.

Harvard Business Review stresses the need for adaptive leadership in complex environments. Traditional linear planning falls short in today’s dynamic systems. Leaders must learn to coordinate across organizational boundaries and steer transformation with responsiveness and curiosity.

McKinsey & Company argues that managing transformation complexity demands a systems mindset. They emphasize the importance of understanding how processes, technologies, and people influence one another—revealing hidden dependencies that can derail progress if left unaddressed.

Deloitte offers practical tools to tackle so-called “messy problems” using systems thinking. They advocate mapping interactions and identifying root causes rather than reacting to surface-level symptoms. This approach is especially useful in large-scale enterprise or public-sector transformations.

Boston Consulting Group (BCG) connects systems thinking with platform innovation and agile ways of working. Their work emphasizes the importance of thinking in flows rather than functions—designing transformations around end-to-end customer, data, and value journeys.

Stanford University’s d.school and HAI combine systems thinking with design and AI ethics. Their research underscores the importance of aligning technology, people, and social systems—especially when integrating AI into existing structures. They promote a holistic view to ensure responsible and sustainable change.


How to Apply Systems Thinking in 5 Practical Steps

1. Define the Big Picture
What system are you trying to change?
Start by mapping the environment you want to influence. Identify key players, teams, technologies, and processes involved. Look at how value is created today, where it flows, and where it gets stuck. This helps frame the real scope of the challenge and ensures you don’t miss critical pieces.

2. Spot Key Connections
What influences what?
Once the landscape is clear, explore how the elements interact. Look for patterns, cause-and-effect relationships, and feedback loops. For instance, increasing automation may speed up service but also drive new types of demand. These dynamics are crucial for anticipating second-order effects.

3. Find the Pressure Points
Where can a small change make a big difference?
Focus on areas where a strategic adjustment could generate disproportionate impact. This might be a policy that shapes behavior, a workflow bottleneck, or a metric that drives priorities. The goal is to shift the system in ways that amplify positive change and reduce resistance.

4. Design the Roadmap Around the System
Move the whole system, not just parts.
Align your initiatives across domains—technology, data, process, people, and culture. Sequence interventions so that early wins unlock momentum for deeper shifts. Consider how one change enables another, and make sure efforts reinforce rather than compete with each other.

5. Build in Feedback and Learning
How will you measure and adapt?
Transformations unfold over time. Equip your teams with ways to detect what’s working, what’s not, and where unintended consequences arise. This includes system-level KPIs, qualitative insights, and space for reflection. The ability to course-correct is what makes a systems approach resilient.

Conclusion: The Payoff of Thinking in Systems

When systems thinking becomes a consistent practice, the result is not just better-designed transformation programs—it’s a smarter, more adaptive organization. Leaders begin to anticipate change instead of reacting to it. Teams work across boundaries instead of within silos. And investments create compounding value rather than isolated wins. Ultimately, systems thinking enables transformation efforts to scale with clarity, resilience, and lasting impact.

What We Can Learn from Lego: Three Transformation Lessons

During my recent visit to Lego House in Billund, I was reminded that this iconic brand represents far more than a maker of plastic bricks. Lego is a great example of smart design, purposeful transformation, and digital innovation. Organizations aiming to stay relevant in a changing market can learn a lot from Lego’s ability to reinvent itself.

In this article, I explore three interconnected dimensions of Lego’s success: how its design principles mirror modern architectural thinking, how the company has transformed while staying true to its purpose, and how it is leveraging digital and AI to lead in both product and operational innovation.


1. Architecture & Design Thinking: From Bricks to Platforms

As a child, I experienced Lego’s smart design first-hand: one collection of components allowed me to create countless structures. Each brick is designed with a standard interface that guarantees compatibility regardless of shape, size, or decade of manufacture. This is the physical-world equivalent of APIs in digital architecture: enabling endless creativity through constraint-based design.
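
To make the analogy concrete, here is a small, purely illustrative Python sketch of constraint-based design; the class and attribute names are hypothetical and do not reflect any real Lego or software API. Components expose one standard interface, so any implementation can be combined with any other.

```python
# Purely illustrative sketch of constraint-based design: one standard interface,
# many interchangeable components. Names are hypothetical, not any real Lego API.
from typing import Protocol


class Brick(Protocol):
    """The standard interface every component must expose."""
    studs: int

    def connect(self, other: "Brick") -> str: ...


class ClassicBrick:
    def __init__(self, studs: int) -> None:
        self.studs = studs

    def connect(self, other: Brick) -> str:
        return f"{self.studs}-stud brick connected to a {other.studs}-stud part"


class TechnicBeam:
    def __init__(self, studs: int) -> None:
        self.studs = studs

    def connect(self, other: Brick) -> str:
        return f"beam with {self.studs} connection points joined to a {other.studs}-stud part"


# Because both types satisfy the same interface, designs decades apart still combine.
print(ClassicBrick(4).connect(TechnicBeam(8)))
```

The point is the constraint: fix the interface, vary the implementation, and both physical bricks and digital services gain open-ended combinability.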

Beyond modularity, Lego also embodies platform thinking. With Lego Ideas, they invite users into the design process, allowing them to co-create and even commercialize their models. This open innovation model has helped extend Lego’s reach beyond its internal capabilities.

Lego also uses digital twins to simulate the behavior of physical Lego components and production systems. This enables the company to test product performance, optimize assembly processes, and reduce waste—before anything is physically produced.

Lesson: Embrace modularity—not only in your product and system design but in your organizational setup. Invest in simulation and digital twin technology to test, iterate, and scale with greater speed and lower risk. And treat your users not just as consumers but as contributors to your platform.


2. Organizational Transformation: Reinvention with Purpose

Lego’s transformation journey is a great example of how established companies under pressure can reinvent themselves without losing their DNA. In the early 2000s, Lego faced a financial crisis caused by over-diversification and lack of focus. The turnaround required painful choices: divesting non-core businesses, simplifying product lines, and reconnecting with the company’s core mission of “inspiring and developing the builders of tomorrow.”

But Lego didn’t stop at operational restructuring. It also launched a broader innovation strategy to stay commercially relevant to changing customers. This included new experiences such as The Lego Movie, which reinvented the brand for a new generation, and partnerships with global entertainment brands such as Disney, Star Wars, and Formula 1 to create product ranges that merged Lego’s design language with beloved franchises. These moves helped strengthen the brand and attract new audiences without alienating loyal fans.

Sustainability has become another important dimension—especially for a company built on plastic. Lego has committed to making all core products from sustainable materials by 2032 and is investing heavily in bio-based and recyclable plastics.

Lesson: Transformation isn’t about discarding the old; it’s about strengthening your core value and building on that foundation. Focused innovation, clear communication, and a culture that supports learning, sustainability, and adaptation are crucial.


3. Digital & AI Integration: Enhancing Experience and Performance

As a customer, I’ve experienced how Lego.com tracks and rewards my purchases. For younger users, Lego has developed the Lego Life platform, where AI is used to moderate content and create engaging digital experiences for children. Personalization engines recommend content and products based on individual preferences and behaviors.

Lego has embraced digital not just to modernize, but to structurally improve its value chain. Robotics and automation are widely implemented in both production and warehousing. Their supply chain uses real-time data, predictive analytics, and machine learning to forecast demand, optimize production, and manage global inventory.

Perhaps the most innovative example is LegoGPT, an AI model developed by researchers at Carnegie Mellon University. It allows users to describe ideas in natural language and receive buildable Lego models in return. By converting abstract intent into tangible design, LegoGPT showcases the power of generative AI to bridge imagination and engineering.

Lesson: Use digital and AI to create meaningful impact—whether by enhancing customer experiences, increasing operational agility, or unlocking new creative possibilities.


Conclusion: Building with Intent

Lego teaches us that true transformation lies at the intersection of smart innovation, strong organizational purpose, and enabling technologies. Its enduring success comes from continually reinterpreting its core principles to meet the needs of a changing world.

For transformation leaders, Lego is more than a nostalgic brand—it’s a masterclass in building the future, one brick at a time.

How AI Changes the Digital Transformation Playbook

I recently revisited David L. Rogers’ 2016 book, The Digital Transformation Playbook. This work was foundational to how I approached digital strategy in the years that followed. It helped executives move beyond viewing digital as a technology problem and instead rethink strategy for a digitally enabled business. As I now reflect on the accelerating impact of artificial intelligence, especially generative and adaptive AI, I find myself asking: how would this playbook evolve if it were written today? What shifts, additions, or reinterpretations does AI demand of us?

Rogers identified five strategic domains where digital forces reshaped the rules of business: customers, competition, data, innovation, and value. These domains remain as relevant as ever—but in the age of AI, each requires a fresh lens.

In this article, I revisit each domain, beginning with Rogers’ foundational insight and then exploring how AI transforms the picture. I also propose three new strategic domains that have become essential in the AI era: workforce, governance, and culture.


1. Customers → From Networks of Relationships to Intelligent Experiences

Rogers’ Insight (2016):
In the traditional business model, customers were treated as passive recipients of value. Rogers urged companies to reconceive customers as active participants in networks, communicating, sharing, and shaping brand perceptions in real time. The shift was toward engaging these dynamic networks, understanding behavior through data, and co-creating value through dialogue, platforms, and personalization.

AI Shift (Now):
AI enables companies to move beyond personalized communication to truly intelligent experiences. By analyzing vast datasets in real time, AI systems can predict needs, automate responses, and tailor interactions across channels. From recommendation engines to digital agents, AI transforms customer experience into something anticipatory and adaptive, redefining engagement, loyalty, and satisfaction.
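
As a simple illustration of what "anticipatory" can mean in practice, the sketch below (Python, with invented interaction data and product names) scores products a customer has not yet touched, based on item-to-item similarity in past behavior. Real systems blend far more signals, but the anticipation loop is the same: observe, infer, and suggest before the customer asks.

```python
# Minimal illustration of anticipatory recommendations from past behaviour.
# The interaction matrix and product names are invented for this example.
import numpy as np

products = ["starter_set", "space_theme", "city_theme", "technic_kit"]

# Rows = customers, columns = products, values = past interaction strength.
interactions = np.array([
    [5, 0, 3, 0],   # customer A
    [4, 0, 4, 1],   # customer B
    [0, 5, 0, 4],   # customer C
])

def recommend(customer_row: int, top_n: int = 2) -> list[str]:
    """Rank products the customer has not touched yet, using item-item cosine similarity."""
    X = interactions.astype(float)
    norms = np.linalg.norm(X, axis=0, keepdims=True) + 1e-9
    item_sim = (X / norms).T @ (X / norms)          # product-to-product similarity
    scores = item_sim @ X[customer_row]             # propagate the customer's history
    scores[X[customer_row] > 0] = -np.inf           # hide what they already own
    return [products[i] for i in np.argsort(-scores)[:top_n]]

print(recommend(customer_row=0))
```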


2. Competition → From Industry Ecosystems to Model-Driven Advantage

Rogers’ Insight (2016):
Rogers challenged the notion of fixed industry boundaries, arguing that digital platforms enable competition across sectors. Businesses could no longer assume their competitors would come from within their own industry. Instead, value was increasingly co-created in fluid ecosystems involving customers, partners, and even competitors.

AI Shift (Now):
Today, the competitive battlefield is increasingly defined by AI capabilities. Winning organizations are those that can develop, fine-tune, and scale AI models faster than others. Competitive advantage comes from proprietary data, high-performing models, and AI-native organizational structures. In some cases, the model itself becomes the product—shifting power to those who own or control AI infrastructure.


3. Data → From Strategic Asset to Lifeblood of Intelligent Systems

Rogers’ Insight (2016):
Data, once a by-product of operations, was reimagined as a core strategic asset. Rogers emphasized using data to understand customers, inform decisions, and drive innovation. The shift was toward capturing more data and applying analytics to create actionable insights and competitive advantage.

AI Shift (Now):
AI transforms the role of data from decision-support to system training. Data doesn’t just inform—it powers intelligent behavior. The focus is now on quality, governance, and real-time flows of data that continuously refine AI systems. New challenges around data bias, provenance, and synthetic generation raise the stakes for ethical and secure data management.
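
To ground the idea of data as a continuous flow rather than a static asset, the following toy Python sketch shows an online-learning loop in which each new, validated observation immediately nudges a simple model; the quality gate is a stand-in for real governance checks on bias and provenance. It is a conceptual illustration only, not a production pattern.

```python
# Toy illustration of data as a continuous flow: each validated event refines the model.
# The quality gate is a stand-in for real data governance checks (bias, provenance, etc.).
import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros(3)            # a minimal linear model
learning_rate = 0.05

def passes_quality_gate(features: np.ndarray, label: float) -> bool:
    """Placeholder governance check: reject incomplete or out-of-range records."""
    return bool(np.all(np.isfinite(features)) and 0.0 <= label <= 1.0)

# Simulated event stream; in production this would come from operational systems.
for _ in range(1000):
    features = rng.normal(size=3)
    label = float(1 / (1 + np.exp(-(features @ np.array([0.8, -0.4, 0.2])))))
    if not passes_quality_gate(features, label):
        continue                                   # governance: bad data never trains the model
    prediction = features @ weights
    weights += learning_rate * (label - prediction) * features   # online (SGD-style) update

print("Learned weights:", np.round(weights, 2))
```

The shape of the loop is what matters: quality and governance checks sit inline, and the model is never more than one validated event away from its latest refinement.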


4. Innovation → From Agile Prototyping to AI-Augmented Co-Creation

Rogers’ Insight (2016):
Rogers advocated for agile, iterative approaches to innovation. Instead of long development cycles, companies needed to embrace experimentation, MVPs, and customer feedback loops. Innovation was not just about new products—it was about learning fast and adapting to change.

AI Shift (Now):
AI amplifies every step of the innovation process. Generative tools accelerate ideation, design, and prototyping. Developers and designers can co-create with AI, testing multiple solutions instantly. The loop from idea to execution becomes compressed, with AI as a creative collaborator, not just a tool.


5. Value → From Digital Delivery to Adaptive Intelligence

Rogers’ Insight (2016):
Value creation, in Rogers’ view, moved from static supply chains to fluid, digital experiences. Companies needed to rethink how they delivered outcomes—shifting from products to services, from ownership to access, and from linear value chains to responsive platforms.

AI Shift (Now):
With AI, value is increasingly delivered through systems that learn and adapt. Intelligent services personalize in real time, optimize continuously, and evolve with user behavior. The value proposition becomes dynamic—embedded in a loop of sensing, reasoning, and responding.


Why We Must Expand the Playbook: The Rise of New Strategic Domains

The original five domains remain vital. Yet AI doesn’t just shift existing strategies—it introduces entirely new imperatives. As intelligent systems become embedded in workflows and decisions, organizations must rethink how they manage talent, ensure ethical oversight, and shape organizational culture. These aren’t adjacent topics—they are central to sustainable AI transformation.


6. Workforce → From Talent Strategy to Human–AI Teaming

AI is not replacing the workforce—it is changing it. Leaders must redesign roles, workflows, and capabilities to optimize human–AI collaboration. This means upskilling for adaptability, integrating AI into daily work, and ensuring people retain agency in AI-supported decisions. Human capital strategy must now include how teams and algorithms learn and perform together.


7. Governance → From Digital Risk to Responsible AI

AI introduces new dimensions of risk: bias, security, and regulatory complexity. Governance must now ensure not only compliance but also ethical development, explainability, and trust. Boards, executive teams, and product leaders need frameworks to evaluate and oversee AI initiatives—not just for effectiveness but for responsibility.


8. Culture → From Digital Fluency to AI Curiosity and Trust

The mindset shift required to scale AI is cultural as much as technological. Organizations must foster curiosity about what AI can do, confidence in its potential, and clarity about its limits. Trust becomes a cultural asset—built through transparency, education, and inclusive experimentation. Without it, AI adoption stalls.


Conclusion: A Playbook for the AI Era

Rogers’ original playbook gave us a framework to reimagine business strategy in a digital world. That foundation still holds. But as AI redefines how we compete, create, and lead, we need a new version—one that not only shifts the lens on customers, competition, data, innovation, and value, but also adds the critical dimensions of workforce, governance, and culture. These eight domains form the new playbook for transformation in the age of intelligence.