Closing the Digital Competency Gap in the Boardroom

This article is based on a thesis I have written for the Supervisory Board program (NCC 73) at Nyenrode University, which I will complete this month. I set out to answer a practical question: how can supervisory boards close the digital competency gap so their oversight of digitalization and AI is effective and value-creating?

The research combined literature, practitioner insights, and my own experience leading large-scale digital transformations. The signal is clear: technology, data, and AI are no longer specialist topics—they shape strategy, execution, and resilience. Boards that upgrade their competence change the quality of oversight, the shape of investment, and ultimately the future of the company.


1) Business model transformation

Digital doesn’t just add channels; it rewrites how value is created and captured. The board’s role is to probe how data, platforms, and AI may alter customer problem–solution fit, value generation logic, and ecosystem position over the next 3–5–10 years. Ask management to make the trade-offs explicit: which parts of the current model should we defend, which should we cannibalize, and which new options (platform plays, data partnerships, embedded services) warrant small “option bets” now?

What to look out for: strategies that talk about “going digital” without quantifying how revenue mix, margins, or cash generation will change. Beware dependency risks (platforms, app stores, hyperscalers) that shift bargaining power over time. Leverage scenario planning and clear leading indicators—so the board can see whether the plan is working early enough to pivot or double down.

2) Operational digital transformation

The strongest programs are anchored in outcomes, not output. Boards should ask to see business results expressed in P&L and balance-sheet terms (growth, cost, capital turns), not just “go-live” milestones. Require a credible pathway from pilot to scale: gated tranches that release funding when adoption, value, and risk thresholds are met; and clear “stop/reshape” criteria to avoid sunk-cost escalation.
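
For boards that want the gate mechanics made concrete, here is a minimal sketch (in Python) of how a tranche-release check could be expressed; the metric names and threshold values are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class TrancheGate:
    """Illustrative stage gate: release the next funding tranche only when
    adoption, value, and risk thresholds are all met (values are examples)."""
    min_adoption_rate: float = 0.60   # share of target users actively using the solution
    min_value_realised: float = 0.50  # share of the business case already visible in the P&L
    max_open_high_risks: int = 0      # unresolved high-severity risks allowed

    def release_next_tranche(self, adoption: float, value: float, high_risks: int) -> bool:
        return (adoption >= self.min_adoption_rate
                and value >= self.min_value_realised
                and high_risks <= self.max_open_high_risks)

# Example: strong adoption but only 40% of value realised -> hold the tranche, reshape the plan
gate = TrancheGate()
print(gate.release_next_tranche(adoption=0.70, value=0.40, high_risks=0))  # False
```

The point is not the code but the discipline: every tranche has explicit, pre-agreed thresholds, and “hold” is an expected outcome rather than a failure.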

What to look out for: “watermelon” reporting—status reports that stay green on the outside while progress and adoption lag behind; vendor-led roadmaps that don’t fit the architecture; and under-resourced change management. As a rule of thumb, ensure 10–15% of major transformation budgets are reserved for change, communications, and training. Ask who owns adoption metrics and how you’ll know—early—that teams are using what’s been built.

3) Organization & culture

Technology succeeds at the speed of behaviour change. The board should examine whether leadership is telling a coherent story (why/what/how/who) and whether middle management has the capacity to translate it into local action. Probe how AI will reshape roles and capabilities, and whether the company has a reskilling plan that is targeted, measurable, and linked to workforce planning.

What to look out for: assuming tools will “sell themselves,” starving change budgets, and running transformations in a shadow lane disconnected from the real business. Look for feedback loops—engagement diagnostics, learning dashboards, peer-to-peer communities—that surface resistance early and help leadership course-correct before adoption stalls.

4) Technology investments

Oversight improves dramatically when the board insists on a North Star architecture that makes trade-offs visible: which data foundations come first, how integration will work, and how security/privacy are designed in. Investments should be staged, with each tranche linked to outcome evidence and risk mitigation, and with conscious decisions about vendor lock-in and exit options.

What to look out for: shiny-tool syndrome, financial engineering that ignores lifetime Total Cost of Ownership (TCO), and weak vendor due diligence. Ask for risk analysis (e.g., cloud and vendor exposure) and continuity plans that are actually tested. Expect architecture reviews by independent experts on mission-critical choices, so the board gets a clear view beyond vendor narratives.

5) Security & compliance

Cyber, privacy, and emerging AI regulation must be treated as enterprise-level risks with clear ownership, KPIs, and tested recovery playbooks. Boards should expect regular exercises and evidence that GDPR, NIS2, and AI governance are embedded in product and process design—not bolted on at the end.

What to look out for: “tick-the-box” compliance that produces documents rather than resilience, infrequent or purely theoretical drills, and untested backups. Probe third-party and supply-chain exposure as seriously as internal controls. The standard is not perfection; it’s informed preparedness, repeated practice, and learning from near-misses.


Seven structural moves that work

  1. Make digital explicit in board profiles. Use a competency matrix that distinguishes business-model, data/AI, technology, and cyber/compliance fluency. Recruit to close gaps or appoint external advisors—don’t hide digital under a generic “technology” label.
  2. Run periodic board maturity assessments. Combine self-assessment with executive feedback to identify capability gaps. Tie development plans to the board calendar (e.g., pre-strategy masterclasses, deep-dives before major investments).
  3. Hard-wire digital/AI into the agenda. Move from ad-hoc updates to a cadence: strategy and scenario sessions, risk and resilience reviews, and portfolio health checks. Make room for bad news early so issues surface before they become expensive.
  4. Adopt a board-level Digital & IT Cockpit. Track six things concisely: run-the-business efficiency, risk posture, innovation enablement, strategy alignment, value creation, and future-proofing (change control, talent, and architecture). Keep trends visible across quarters.
  5. Establish a Digital | AI Committee (where applicable). This complements—not replaces—the Audit Committee. Mandate: opportunities and threats, ethics and risk, investment discipline, and capability building. The committee prepares the ground; the full board takes the decisions.
  6. Use independent expertise by default on critical choices. Commission targeted reviews (architecture, vendor due diligence, cyber resilience) to challenge internal narratives. Independence is not a luxury; it’s how you avoid groupthink and discover blind spots in time.
  7. Onboard and upskill continuously. Provide a digital/AI onboarding for new members; schedule briefings with external experts; and use site visits to see real adoption. Treat learning like risk management: systematic, scheduled, and recorded.

Do you need a separate “Digital Board”?

My reflection: competence helps, but time and attention are the true scarcities. In digitally intensive businesses—where data platforms, AI-enabled operations, and cyber exposure shape enterprise value and are moving fast—a separate advisory or oversight body can deepen challenge and accelerate learning. It creates space for structured debate on architecture, ecosystems, and regulation without crowding out other board duties.

This isn’t a universal prescription. In companies where digital is material but not defining, strengthening the main board with a committee and better rhythms is usually sufficient. But when the operating model’s future rests on technology bets, a dedicated Digital Board (or equivalent advisory council) can bring the needed altitude, continuity, and specialized challenge to help the supervisory board make better, faster calls.


What this means for your next board cycle

The practical message from the thesis is straightforward: digital oversight is a core board responsibility that can be institutionalised. Start by clarifying the capability you need (the competency matrix), then hard-wire the conversation into the board’s rhythms (the agenda and cockpit), and raise the quality of decisions (staged investments, independent challenge, real adoption metrics). Expect a culture shift: from project status to value realization, from tool choice to architecture, from compliance as paperwork to resilience as practice.

Most importantly, treat this as a journey. Boards that improve a little each quarter—on fluency, on the sharpness of their questions, on the discipline of their investment decisions—create compounding advantages. The gap closes not with a single appointment or workshop, but with deliberate governance that learns, adapts, and holds itself to the same standard it asks of management.

Why 95% of AI Pilots Fail (MIT Study) – And How to Beat the Odds

Last week, an MIT study sent shockwaves through the AI and business community: 95% of AI pilots fail to deliver measurable business returns. Headlines spread fast, with investors and executives questioning whether enterprise AI is a bubble.

But behind the headlines lies a more nuanced story. The study doesn’t show that AI lacks potential—it shows that most organizations are not yet equipped to turn AI experiments into real business impact.


Myth vs. Reality: What Other Research Tells Us

While the MIT report highlights execution gaps, other studies paint a more balanced picture:

  • McKinsey (2025): AI adoption is rising fast, with value emerging where firms rewire processes and governance.
  • Stanford AI Index (2025): Investment and adoption continue to accelerate, signaling confidence in the long-term upside.
  • Field studies: Copilots in customer service and software engineering deliver double-digit productivity gains—but only when properly integrated.
  • MIT SMR–BCG: Companies that give individuals tangible benefits from AI—and track the right KPIs—are 6x more likely to see financial impact.

The picture is clear: AI works, but only under the right conditions.


Why AI Projects Fail (The 10 Traps)

1. No learning loop
Many AI pilots are clever demos that never improve once deployed. Without feedback mechanisms and continuous learning, the system remains static—and users quickly revert to old ways of working.

2. Integration gaps
AI may deliver great results in a sandbox, but in production it often fails to connect with core systems like CRM or ERP. Issues with identity management, permissions, and latency kill adoption.

3. Vanity pilots
Executives often prioritize flashy use cases—like marketing campaigns or customer-facing chatbots—while ignoring back-office automations. The result: excitement without measurable cash impact.

4. Build-first reflex
Organizations rush to build their own AI tools, underestimating the complexity of user experience (UX), guardrails, data pipelines, and monitoring. Specialist partners often outperform in speed and quality.

5. Six-month ROI traps
Leadership expects visible returns within half a year. But AI adoption follows a J-curve: disruption comes first, with benefits only materializing once processes and people adapt.

6. Weak KPIs
Too many pilots measure activity—such as number of prompts or usage time—rather than outcomes like error reduction, cycle time improvements, or cost savings. Without the right metrics, it’s impossible to prove value.

7. No product owner
AI projects often sit “between” IT, data, and the business, leaving no single accountable leader. Without an empowered product owner with a P&L target, projects stall in pilot mode.

8. Change ignored
Technology is deployed, but users aren’t engaged. Poor UX, lack of training, and trust concerns mean adoption lags. In response, employees turn to consumer AI tools instead of sanctioned ones.

9. Data & policy drag
Even when the AI works, poor data quality, fragmented sources, and unclear governance delay rollouts. Legal and compliance teams often block scaling because policies are not defined early enough.

10. Wrong first bets
Too many companies start with complex tasks. Early success is more likely in “thin-slice” repetitive processes—like call summarization or contract intake—that can prove value quickly.


How to Beat the Odds (10 Fixes That Work)

1. Design for learning
Build AI systems with memory, feedback capture, and regular improvement cycles. If a tool cannot learn and adapt in production, it should never progress beyond pilot stage.

2. Fix integration before inference
Prioritize robust connections into your CRM, ERP, and ticketing systems. AI without seamless workflow integration is just an isolated chatbot with no business impact.

3. Pick quick-win use cases
Target repetitive, document- and conversation-heavy flows—like claims processing, contract extraction, or helpdesk queries. These areas deliver ROI within 90–120 days and build momentum.

4. Appoint an AI Product Owner
Every use case should have a leader with budget, KPIs, and authority. This person is responsible for hitting targets and driving the project through pilot, limited production, and full scale-up.

5. Measure outcomes, not activity
Define 3–5 hard business KPIs (e.g., −25% contract cycle time, −20% cost per contact) and track adoption leading indicators. Publish a regular value scorecard to make progress visible.
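
To make the distinction tangible, here is a minimal sketch of such a value scorecard; the KPI names, targets, and figures are illustrative assumptions, not results from any study.

```python
# Minimal value-scorecard sketch: outcome KPIs vs. targets (all figures illustrative).
scorecard = {
    "contract_cycle_time_change": {"target": -0.25, "actual": -0.18},  # aiming for -25%
    "cost_per_contact_change":    {"target": -0.20, "actual": -0.22},
    "weekly_active_users":        {"target": 400,   "actual": 310},    # adoption leading indicator
}

for kpi, v in scorecard.items():
    # For reduction targets (negative numbers), "on track" means actual <= target.
    on_track = v["actual"] <= v["target"] if v["target"] < 0 else v["actual"] >= v["target"]
    status = "on track" if on_track else "behind"
    print(f"{kpi:30s} target={v['target']:>7} actual={v['actual']:>7} {status}")
```

Publishing something this simple every month already shifts the conversation from activity to outcomes.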

6. Buy speed, build advantage
Use specialist vendors for modular, non-differentiating tasks. Save your in-house resources for proprietary applications where AI can become a true competitive edge.

7. Rebalance your portfolio
Shift investments away from glossy front-office showcases. Focus on back-office operations and service processes where AI can cut costs and generate visible ROI quickly.

8. Make change a deliverable
Adoption doesn’t happen automatically. Co-design solutions with frontline users, train them actively, and make fallback paths obvious. Manage trust as carefully as the technology itself.

9. Educate the board on the J-curve
Set realistic expectations that ROI takes more than six months. Pilot fast, but give production deployments time to stabilize, improve, and demonstrate sustained results.

10. Prove, then scale
Choose two or three use cases, set clear ROI targets up front, and scale only after success is proven. This disciplined sequencing builds credibility and prevents overreach.


The Broader Reflection

The 95% failure rate is not a verdict on AI’s future—it’s a warning about execution risk. Today’s picture is simple: adoption and investment are accelerating, productivity impacts are real, but enterprise-scale returns require a more professional approach.

We’ve seen this pattern before. Just as with earlier waves of digital transformation, leaders tend to overestimate short-term results and underestimate mid- to long-term impact.

Learning with AI – Unlocking Capability at Every Level

AI is Changing How We Learn! We’re entering a new era where learning and AI are deeply intertwined. Whether it’s a university classroom, a manufacturing site, or your own weekend learning project, AI is now part of how we access knowledge, gain new skills, and apply them faster.

The impact is real. In formal education, AI-supported tutors are already showing measurable learning gains. In the workplace, embedded copilots help teams learn in the flow of work. And at the organizational level, smart knowledge systems can reduce onboarding time and improve consistency.

But like any tool, AI’s value depends on how we use it. In this article, I’ll explore four areas where AI is transforming learning — and share some insights from my own recent experiences along the way.


1. Formal Education — From Study Assistant to Writing Coach

AI is showing clear value in helping students and professionals deepen understanding, organize ideas, and communicate more effectively.

In my recent Supervisory Board program, I used NotebookLM to upload course materials and interact with them — asking clarifying questions and summarizing key insights. For my final paper, I turned to ChatGPT and Claude for review and editing — helping me sharpen my arguments and improve readability without losing my voice.

The benefit? More focused learning time, better written output, and higher engagement with the material.

How to get the most from AI in education:

  • Use AI to test understanding, not just provide answers
  • Let it structure thoughts and give feedback — like a sounding board
  • Ensure use remains aligned with academic integrity standards

Recent research supports this approach: Harvard studies show students using structured AI tutors learn more in less time when guardrails guide the interaction toward reasoning — not shortcuts.


2. Learning on the Job — From Static Training to Smart Assistance

In many workplaces, AI is no longer something you log into — it’s embedded directly into your tools, helping you solve problems, write faster, or learn new procedures while working.

Take Siemens, for example. Their industrial engineers now use an AI copilot integrated into their software tools to generate, troubleshoot, and optimize code for production machinery. Instead of searching manuals or waiting for expert support, engineers are guided step-by-step by an assistant that understands both the code and the task.

The benefit? People learn while doing — and become more capable with every task.

How to get the most from AI on the job:

  • Start with tasks that benefit from examples (e.g. writing, code, cases)
  • Let the AI model good practice, then ask the user to adapt or explain
  • Use real-time feedback to reinforce learning and reduce rework

Well implemented, AI tools don’t replace training — they become the cornerstone of it.


3. Organizational Learning — Turning Knowledge into an Exchange

As organizations accumulate more policies, procedures, and playbooks, the challenge isn’t just creating knowledge — it’s making it accessible. This is where AI can fundamentally change the game.

PwC is a leading example. They’ve deployed ChatGPT Enterprise to 100,000 employees, combined with internal GPTs trained on company-specific content. This transforms how people access information: instead of digging through files, they ask a question and get a consistent, governed answer — instantly.

The benefit? Faster onboarding, fewer escalations, and more confident decision-making across the board.

How to build this in your organization:

  • Start with high-value content (e.g., SOPs, onboarding, policies)
  • Assign content owners to keep AI knowledge up to date
  • Monitor questions and feedback to identify knowledge gaps

Done right, this turns your organization into a living learning system.
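
To show the underlying pattern in miniature (retrieve the approved answer, then cite its source), here is a deliberately naive sketch; the policy snippets and matching logic are invented for illustration, and a real deployment would use embeddings, access controls, and an enterprise-grade model.

```python
# Naive illustration of "ask a question, get a governed answer":
# find the most relevant approved snippet and return it with its source.
SOP_LIBRARY = {
    "expense approval": "Expenses above EUR 5,000 require CFO approval (Policy FIN-12).",
    "onboarding laptop": "New joiners order laptops via the IT portal; delivery takes 5 days (SOP IT-03).",
    "data retention": "Customer data is retained for 7 years unless a legal hold applies (Policy LEG-07).",
}

def answer(question: str) -> str:
    q = question.lower()
    # Score each entry by how many of its key words appear in the question.
    best_key = max(SOP_LIBRARY, key=lambda k: sum(word in q for word in k.split()))
    return f"{SOP_LIBRARY[best_key]} (source: '{best_key}', maintained by its content owner)"

print(answer("Who needs to approve an expense of 8,000 euros?"))
```

The governance steps above (content owners, monitoring questions and feedback) are what keep such answers trustworthy as the library grows.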


4. Personal Learning — Exploring New Skills with AI as a Guide

Outside of work and formal learning, many people are using AI to explore entirely new topics. Whether it’s a new technology, management concept, or even a language, tools like ChatGPT, Gemini and Claude make it easy to start — and to go deep.

Let’s say you want to learn about cloud architecture. You can ask AI to:

  • Create a 4-week plan tailored to your experience level
  • Suggest reading material and create quick explainers
  • Generate test questions or even simulate an interview

The benefit? Structured, personalized, and frictionless learning — anytime, anywhere.

To make it effective:

  • Be specific: define your goals and time frame
  • Ask for exercises or cases to apply what you learn
  • Use reflection prompts and feedback to deepen understanding

The key is to treat AI as a learning coach, not just a search engine.


Looking Ahead — Opportunities, Risks, and What Leaders Can Do

AI can make learning faster, broader, and more accessible. But like any capability shift, it introduces both upside and new risks:

Opportunities

  • Faster time to skill through real-time, contextual learning
  • Scaling of expert knowledge across global teams
  • Better engagement and confidence among learners at all levels

Risks

  • Over-reliance on AI can lead to shallow understanding
  • Inaccurate or outdated responses risk reinforcing errors
  • Uneven adoption can widen capability gaps inside teams

How to mitigate the risks

  • Introduce guardrails that promote reasoning and reduce blind copying
  • Keep AI tools connected to curated, up-to-date knowledge
  • Build adoption playbooks tailored to roles, not just tools

Final Thought — Treat AI as Part of Your Learning System

The most successful organizations aren’t just giving people access to AI — they’re designing learning systems around it.

That means using AI to model best practice, challenge thinking, and reduce time-to-competence. AI is not just a productivity tool — it’s a capability accelerator.

Those who treat it that way will upskill faster, build smarter teams, and stay more adaptable in the face of constant change.

Agents vs. Automation – How to Choose the Right Tool for the Job

As AI agents storm the market and automation technologies mature, transformation leaders face a critical question: Not just what to automate — but how.

From RPA and low-code platforms to intelligent agents and native automation tools, the choices are expanding fast.

This article offers a practical framework to help you make the right decisions — and build automation that scales with your organization.


A Layered View of the Automation Landscape

Modern automation isn’t a single tool — it’s about leveraging a full stack. Here are the key layers:

🔹 1. Digital Core Platforms

Systems like SAP, Salesforce, ServiceNow and Workday host your enterprise data and business processes. They often come with native automation tools (e.g., Salesforce Flow, SAP BTP), ideal for automating workflows within the platform.

🔹 2. Integration Platforms (iPaaS)

Tools like MuleSoft, Boomi, and Microsoft Power Platform play a foundational role in enterprise automation. These Integration Platforms as a Service (iPaaS) connect applications, data sources, and services across your IT landscape — allowing automation to function seamlessly across systems rather than in silos.

🔹 3. Automation Tools

  • RPA (e.g., UiPath) automates rule-based, repetitive tasks
  • Workflow Automation manages structured, multi-step business processes
  • Low-/No-Code Platforms (e.g., Power Apps, Mendix) empower teams to build lightweight apps and automations with minimal IT support

🔹 4. AI Agents

Tools and platforms like OpenAI Agents, Microsoft Copilot Studio, Google Vertex AI Agent Builder, and LangChain enable reasoning, adaptability, and orchestration — making them well-suited for knowledge work, decision support, and dynamic task execution.
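
To illustrate what sets agents apart from scripted automation, here is a minimal, framework-agnostic sketch of the reasoning loop most agent platforms implement; the tools and routing logic are invented for illustration and do not reflect the API of any specific product.

```python
# Minimal agent loop: the model decides which tool to call, observes the result,
# and repeats until it can respond. Real platforms add memory, guardrails,
# and orchestration on top of this loop.

def lookup_order(order_id: str) -> str:          # illustrative tool
    return f"Order {order_id}: shipped, ETA 2 days"

def draft_reply(context: str) -> str:            # illustrative tool
    return f"Dear customer, {context}. Kind regards."

TOOLS = {"lookup_order": lookup_order, "draft_reply": draft_reply}

def plan_next_step(goal: str, observations: list[str]) -> tuple[str, str]:
    """Stand-in for the LLM's reasoning: choose the next tool and its input."""
    if not observations:
        return "lookup_order", goal.split()[-1]   # first gather the facts
    return "draft_reply", observations[-1]        # then act on them

def run_agent(goal: str, max_steps: int = 3) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        tool, arg = plan_next_step(goal, observations)
        observations.append(TOOLS[tool](arg))
        if tool == "draft_reply":                 # simple stop condition
            return observations[-1]
    return observations[-1]

print(run_agent("Answer the customer query about order 4711"))
```

The point is the loop, not the tools: the agent chooses its next action based on what it has observed so far, which is exactly what makes this approach suited to knowledge-driven and dynamic tasks.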


Choosing the Right Tool for the Job

No single tool is right for every use case. Here’s how to decide:

Scenario → Best fit

  • Rule-based, repetitive work → RPA
  • Structured, approval-based flows → Workflow Automation
  • Inside one platform (e.g., CRM/ERP) → Native Platform Automation
  • Cross-system data & process flows → Integration Platforms (iPaaS)
  • Lightweight cross-platform apps → Low-/No-Code Platforms
  • Knowledge-driven or dynamic tasks → AI Agents

The most effective automation strategies are hybrid — combining multiple tools for end-to-end value.


Implementation Roadmaps: One Journey, Many Paths

While all automation projects follow a shared journey — identify, pilot, scale — each tool requires a slightly different approach.


1. Identify the Right Opportunities

  • Native Platform Tools: Start with what’s already built into Salesforce, SAP, etc.
  • iPaaS: Identify silos where data must flow between systems
  • RPA: Use process/task mining to find repeatable, rule-based activities
  • Workflow: Focus on bottlenecks, exceptions, and handoffs
  • Low-/No-Code: Empower teams to surface automation needs and prototype fast
  • AI Agents: Look for unstructured, knowledge-heavy processes

2. Design for Fit and Governance

Each automation type requires a different design mindset — based on scope, user ownership, and risk profile.

  • Native Platform Automation: Stay aligned with vendor architecture and update cycles
  • iPaaS: Build secure, reusable data flows
  • RPA: Design for stability, handle exceptions
  • Workflow: Focus on roles, rules, and user experience
  • Low-/No-Code Platforms: Enable speed, but embed clear guardrails
  • AI Agents: Use iterative prompt design, test for reliability

Key distinction:

  • Native platform automation is ideal for secure, internal process flows.
  • Low-/no-code platforms are better for lightweight, cross-functional solutions — but they need structure to avoid sprawl.

3. Pilot, Learn, and Iterate

  • Platform-native pilots are quick to deploy and low-risk
  • RPA pilots deliver fast ROI but require careful exception handling
  • Workflow Automation pilots start with one process and involve users early to validate flow and adoption
  • Low-/no-code pilots accelerate innovation, especially at the edge
  • iPaaS pilots often work quietly in the background — but are critical for scale
  • AI agent pilots demand close supervision and feedback loops

4. Scale with Structure

To scale automation, focus not just on tools, but on governance:

  • Workflow and Low-Code: Set up federated ownership or Centres of Excellence
  • RPA and iPaaS: Track usage, manage lifecycles, prevent duplication
  • AI Agents: Monitor for performance, hallucination, and compliance
  • Native Platform Tools: Coordinate with internal admins and platform owners

The most successful organizations won’t just automate tasks — they’ll design intelligent ecosystems that scale innovation, decision-making, and value creation.


Conclusion: Architect the Ecosystem

Automation isn’t just about efficiency — it’s about scaling intelligence across the enterprise.

  • Use native platform tools when speed, security, and process alignment matter most
  • Use low-/no-code platforms to empower teams and accelerate delivery
  • Use RPA and workflows for high-volume or structured tasks
  • Use AI agents to enhance decision-making and orchestrate knowledge work
  • Use integration platforms to stitch it all together

The winners will be the ones who build coherent, adaptive automation ecosystems — with the right tools, applied the right way, at the right time.

GAINing Clarity – Demystifying and Implementing GenAI

Herewith my final summer reading book review as part of my newsletter series.
GAIN – Demystifying GenAI for Office and Home by Michael Wade and Amit Joshi offers clarity in a world filled with AI hype. Written by two respected IMD professors, this book is an accessible, structured, and balanced guide to Generative AI (GenAI), designed for a broad audience—executives, professionals, and curious individuals alike.

What makes GAIN especially valuable for leaders is its practical approach. It focuses on GenAI’s real-world relevance: what it is, what it can do, where it can go wrong, and how individuals and organizations can integrate it effectively into daily workflows and long-term strategies.

What’s especially nice is that Michael and Amit have invited several other thought and business leaders to contribute their perspectives and examples to the framework provided. (I especially liked the contribution of Didier Bonnet.)

The GAIN Framework

The book is structured into eight chapters, each forming a step in a logical journey—from understanding GenAI to preparing for its future impact. Below is a summary of each chapter’s key concepts.


Chapter 1 – EXPLAIN: What Makes GenAI Different

This chapter distinguishes GenAI from earlier AI and digital innovations. It highlights GenAI’s ability to generate original content, respond to natural-language prompts, and adapt across tasks with minimal input. Key concepts include zero-shot learning, democratized content creation, and rapid adoption. The authors stress that misunderstanding GenAI’s unique characteristics can undermine effective leadership and strategy.


Chapter 2 – OBTAIN: Unlocking GenAI Value

Wade and Joshi explore how GenAI delivers value at individual, organizational, and societal levels. It’s accessible and doesn’t require deep technical expertise to drive impact. The chapter emphasizes GenAI’s role in boosting productivity, enhancing creativity, and aiding decision-making—especially in domains like marketing, HR, and education—framing it as a powerful augmentation tool.


Chapter 3 – DERAIL: Navigating GenAI’s Risks

This chapter outlines key GenAI risks: hallucinations, privacy breaches, IP misuse, and embedded bias. The authors warn that GenAI systems are inherently probabilistic, and that outputs must be questioned and validated. They introduce the concept of “failure by design,” reminding readers that creativity and unpredictability often go hand in hand.


Chapter 4 – PREVAIL: Creating a Responsible AI Environment

Here, the focus turns to managing risks through responsible use. The authors advocate for transparency, human oversight, and well-structured usage policies. By embedding ethics and review mechanisms into workflows, organizations can scale GenAI while minimizing harm. Ultimately, it’s how GenAI is used—not just the tech itself—that defines its impact.


Chapter 5 – ATTAIN: Scaling with Anchored Agility

This chapter presents “anchored agility” as a strategy to scale GenAI responsibly. It encourages experimentation, but within a framework of clear KPIs and light-touch governance. The authors promote an adaptive, cross-functional approach where teams are empowered, and successful pilots evolve into embedded capabilities.

One of the most actionable frameworks in GAIN is the Digital and AI Transformation Journey, which outlines how organizations typically mature in their use of GenAI:

  • Silo – Individual experimentation, no shared visibility or coordination.
  • Chaos – Widespread, unregulated use. High potential but rising risk.
  • Bureaucracy – Management clamps down. Risk is reduced, but innovation stalls.
  • Anchored Agility – The desired state: innovation at scale, supported by light governance, shared learning, and role clarity.

This model is especially relevant for transformation leaders. It mirrors the organizational reality many face—not only with AI, but with broader digital initiatives. It gives leaders a language to assess their current state and a vision for where to evolve.


Chapter 6 – CONTAIN: Designing for Trust and Capability

Focusing on organizational readiness, this chapter explores structures like AI boards and CoEs. It also addresses workforce trust, re-skilling, and role evolution. Rather than replacing jobs, GenAI changes how work gets done—requiring new hybrid roles and cultural adaptation. Containment is about enabling growth, not restricting it.


Chapter 7 – MAINTAIN: Ensuring Adaptability Over Time

GenAI adoption is not static. This chapter emphasizes the need for feedback loops, continuous learning, and responsive processes. Maintenance involves both technical tasks—like tuning models—and organizational updates to governance and team roles. The authors frame GenAI maturity as an ongoing journey.


Chapter 8 – AWAIT: Preparing for the Future

The book closes with a pragmatic look ahead. It touches on near-term shifts like emerging GenAI roles, evolving regulations, and tool commoditization. Rather than speculate, the authors urge leaders to stay informed and ready to adapt, fostering a posture of informed anticipation: not reactive panic, but intentional readiness. As the GenAI field evolves, so must its players.


What GAIN Teaches Us About Digital Transformation

Beyond the specifics of GenAI, GAIN offers broader lessons that are directly applicable to digital transformation initiatives:

  • Start with shared understanding. Whether you’re launching a transformation program or exploring AI pilots, alignment starts with clarity.
  • Balance risk with opportunity. The GAIN framework models a mature transformation mindset—one that embraces experimentation while putting safeguards in place.
  • Transformation is everyone’s job. GenAI success is not limited to IT or data teams. From HR to marketing to the executive suite, value creation is cross-functional.
  • Governance must be adaptive. Rather than rigid control structures, “anchored agility” provides a model for iterative scaling—one that balances speed with oversight.
  • Keep learning. Like any transformation journey, GenAI is not linear. Feedback loops, upskilling, and cultural evolution are essential to sustaining momentum.

In short, GAIN helps us navigate the now, while preparing for what’s next. For leaders navigating digital and AI transformation, it’s a practical compass in a noisy, fast-moving world.

Fusion Strategy – How Real-Time Data and AI Will Power the Industrial Future

This book by Vijay Govindarajan and Venkat Venkatraman offers excellent insights into how industrial companies can become leaders in this data- and AI-driven age.

Rather than discarding legacy strengths, the book shows how to fuse physical assets with digital intelligence to create new value, drive outcomes, and redefine business models. It offers a compelling and well-structured roadmap for industrial companies to get ready and lead through this digital transformation.


From Pipeline to Fusion: A New Strategic Paradigm

Traditional industrial firms have long operated with a pipeline mindset – designing, building, and selling physical products through linear value chains. But in a world where customer needs change in real-time, and where data flows continuously from connected devices, this model is no longer sufficient.

Fusion Strategy introduces a new playbook: combine your physical strengths with digital capabilities to compete on adaptability, outcomes, and ecosystem value. It’s about integrating the trust and scale of industrial operations with the intelligence and speed of digital platforms.


Competing in the Four Fusion Battlegrounds

At the core of the book is a powerful matrix: four battlegrounds where industrial firms must compete – and four strategic levers to win in each: Architect, Organize, Accelerate, and Monetize.

Fusion Products – Embedding intelligence into physical products

This battleground focuses on evolving the traditional product into a smart, connected version that delivers value through both physical functionality and digital enhancements. It shifts the value proposition from one-time transactions to continuous value creation.

  • Architect: Build connected products with embedded sensors and software.
  • Organize: Create cross-functional product-data-software teams.
  • Accelerate: Use real-world usage data to improve iterations and performance.
  • Monetize: Shift to usage-based pricing, subscription models, or data-informed upgrades.

Example: John Deere integrates GPS, sensors, and machine learning into its agricultural equipment, enabling precision farming and monetizing through subscription-based services.

Fusion Services – Creating new layers of customer value

This battleground addresses the transformation from product-centric to outcome-centric offerings. Services become digitally enabled and proactively delivered, increasing customer stickiness and long-term revenue potential.

  • Architect: Design service layers that improve uptime, efficiency, or experience.
  • Organize: Stand up service delivery and customer success capabilities.
  • Accelerate: Leverage AI to scale and automate service interactions.
  • Monetize: Offer predictive maintenance, remote diagnostics, or outcomes-as-a-service.

Example: Caterpillar offers remote monitoring and predictive maintenance for its heavy equipment fleet, increasing operational uptime and generating recurring service revenues.

Fusion Systems – Transforming internal operations

This battleground focuses on using data and AI to reengineer internal processes, improve agility, and reduce cost-to-serve. Real-time operational intelligence becomes a source of competitive advantage.

  • Architect: Digitize plants, supply chains, and operations with real-time visibility.
  • Organize: Break down functional silos; design around data flows.
  • Accelerate: Use AI to optimize scheduling, energy use, or resource allocation.
  • Monetize: Drive efficiency gains and free up capital for reinvestment.

Example: Schneider Electric uses digital twins and data-driven energy management to optimize operations and reduce downtime in its global manufacturing network.

Fusion Solutions – Building platforms and ecosystems

This battleground is about building broader solutions that integrate products, services, and partners. It opens new avenues for value creation through platforms, data sharing, and co-innovation.

  • Architect: Offer modular solutions with open APIs and partner integration.
  • Organize: Orchestrate partner ecosystems that create mutual value.
  • Accelerate: Foster external innovation through developer communities.
  • Monetize: Sell analytics, data products, or platform access.

Example: Tesla is reimagining mobility not just as a product (cars) but as an integrated solution combining electric vehicles, software, energy management, autonomous driving, insurance and charging/energy infrastructure.


The Role of Data Graphs in Fusion Strategy

One of the foundational concepts emphasized throughout Fusion Strategy is the importance of data graphs. These are strategic tools that connect data across silos and enable intelligent, real-time insights.

A data graph is a semantic structure that maps relationships between entities—machines, sensors, people, processes, and locations—into a flexible and navigable format. In fusion strategy, data graphs link physical and digital domains, enabling smarter operations and decisions.

How to build a data graph:

  1. Collect data from operational systems – sensors, ERP and CRM systems, etc.
  2. Define key entities and relationships – focus on what matters most.
  3. Create semantic linkages – use metadata and business context.
  4. Ensure real-time updates – to maintain situational awareness.
  5. Enable access – for both humans and AI systems.
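
As a minimal illustration of steps 2 and 3 (the entities, relationships, and library choice are assumptions for this example; the authors do not prescribe a specific technology), a small data graph could be sketched like this:

```python
import networkx as nx  # common graph library; production data graphs typically live in a graph database

# Illustrative entities and relationships (invented for this example)
g = nx.DiGraph()
g.add_node("pump_07", kind="machine", site="Plant Antwerp")
g.add_node("sensor_42", kind="vibration_sensor")
g.add_node("work_order_981", kind="maintenance_order", status="open")

g.add_edge("sensor_42", "pump_07", relation="monitors")
g.add_edge("work_order_981", "pump_07", relation="scheduled_for")

# Semantic navigation: everything connected to pump_07, and how
for src, dst, data in g.in_edges("pump_07", data=True):
    print(f"{src} --{data['relation']}--> {dst}")
```

In production the same relationships would be refreshed in real time (step 4) and exposed to both people and AI systems (step 5).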

Why data graphs matter:

  • Provide context for AI and analytics.
  • Enable real-time visibility across assets and systems.
  • Power predictive services, digital twins, and platform innovation.

According to the authors, data graphs are essential for scaling fusion strategies. Without them, it’s difficult to unify insights, drive automation, or deliver integrated digital experiences.


Why This Book Stands Out

This book does not start from the point of view of the successful digital natives, but from that of the industrial-age leaders, describing how they can become leaders in the digital age.

The structure is what makes it so useful:

  • It gives executives a language to discuss digital opportunities in operational and financial terms.
  • It balances the long-term vision with near-term execution levers.
  • It connects customer value, technology, organization, and monetization in one integrated model.

It’s a strategy-led, boardroom-level guide to competing in the AI era.


My Reflections

  • Applying Fusion Strategy means re-architecting your products and your business. It requires rewiring how you create, deliver, and capture value.
  • You don’t need to become a tech company. You need to become a fusion company – one that blends operational excellence with digital innovation.
  • Winning in Fusion means rethinking strategy, governance, talent, and incentives – all at once. In other words, it requires a full transformation.

Fusion Strategy is essential reading for any industrial executive seeking to lead their company through this era of accelerated transformation. It’s not about jumping on the latest AI trend – it’s about designing a future-ready business, grounded in strategy.

The battlegrounds are clear. The tools are available. The time is now.

Amplifying the Human Advantage over AI – Lessons from Pascal Bornet’s Irreplaceable

For this holiday season, I had Pascal Bornet’s book Irreplaceable: The Art of Standing Out in the Age of Artificial Intelligence at the top of my reading list. His work delivers a clear and timely message: the more digital the world becomes, the more essential our humanity is.

For executives and transformation leaders navigating the impact of AI, Bornet provides a pragmatic and optimistic blueprint. This article summarizes the core insights of Irreplaceable, explores its implications for digital transformation, and offers a practical lens for application (see the “Insight” notes throughout).


AI as Enabler, Not Replacer

Bornet challenges the zero-sum narrative of “AI vs. Humans.” Instead, he positions AI as an enabler: capable of handling repetitive, structured tasks, it liberates humans to focus on what machines can’t do—leading, empathizing, creating, and judging. AI, in this view, is not the destination but the vehicle to a more human future.

Insight: Use AI to augment human roles—especially in decision-making, customer experience, and creative problem-solving—rather than replacing them.


The “Humics”: Redefining the Human Advantage

At the heart of Irreplaceable lies the concept of Humics: the uniquely human capabilities that keep us irreplaceable in an AI-powered world. Bornet identifies several:

  • Genuine Creativity – The capacity to generate novel ideas and innovations by drawing on intuition, imagination, and deeply personal lived experiences that machines cannot emulate.
  • Critical Thinking – The ability to evaluate information critically, reason ethically, and make contextualized decisions that reflect both logic and values.
  • Emotional Intelligence – A complex combination of self-awareness, empathy, and the ability to manage interpersonal relationships and influence with authenticity.
  • Adaptability & Resilience – The readiness to embrace change, learn continuously, and maintain performance under stress and uncertainty.
  • Social Authenticity – The human ability to create trust and meaning in relationships through transparency, shared values, and emotional connection.

Insight: Elevate Humics from soft skills to strategic assets. Build them into hiring, training, and leadership development.


The IRREPLACEABLE Framework: Three Competencies for the Future

Bornet proposes a universal framework built on three future-facing competencies:

  • AI-Ready: Develop the ability to understand and leverage AI technologies by becoming fluent in their capabilities, applications, and ethical boundaries. This involves not just using AI tools, but knowing when and how to apply them effectively.
  • Human-Ready: Focus on strengthening Humics—the inherently human skills like empathy, critical thinking, and creativity—that make people indispensable in roles where AI falls short.
  • Change-Ready: Build resilience and adaptability by fostering a growth mindset, embracing continuous learning, and staying flexible in the face of constant technological and organizational change.

Insight: These competencies should be embedded into your workforce strategies, talent models, and cultural transformation agenda.


Human-AI Synergy: The New Collaboration Model

Bornet advocates for symbiotic teams where AI and humans complement each other. Rather than compete, the two work in tandem to drive better outcomes.

  • AI delivers scale, speed, and precision.
  • Humans provide context, ethics, judgment, and empathy.

Insight: Use this pairing in high-impact roles like diagnostics, content creation, customer service, and product design.


Avoiding “AI Obesity”: The Risk of Over-Automation

Bornet warns against AI Obesity: a condition where organizations over-rely on AI, leading people to lose touch with essential human skills like critical thinking, empathy, and creativity. The solution? Regularly exercise our Humics and ensure humans remain in the loop, especially where oversight, ethics, or trust are required.

Insight: Define clear roles for human oversight, especially in ethical decisions, people management and policy enforcement.


Real-World Application: Individuals, Parents, and Businesses

Bornet offers tailored strategies for:

  • Individuals: Blend digital fluency with human depth to future-proof your career. Learn how to partner with AI tools to enhance your strengths, stay adaptable, and lead with human judgment in an increasingly automated environment.
  • Parents & Educators: Teach kids curiosity, resilience, and emotional intelligence alongside digital skills. Equip the next generation not only to use technology responsibly but also to cultivate the uniquely human traits that will help them thrive in any future scenario.
  • Businesses: Redesign roles and culture to embed AI-human collaboration, with trust and values at the core. Shift from a purely efficiency-driven mindset to one that sees AI as a co-pilot, empowering employees to do more meaningful, value-adding work.

Note: This is not just about new tools; it’s about new mindsets and behaviors across the organization.


Implications for Digital Transformation Leaders

Irreplaceable aligns seamlessly with modern transformation priorities:

  • Technology as Amplifier: Deploy AI to expand human capabilities, not to replace them.
  • Human-Centric KPIs: Add creativity, employee experience, and trust metrics to your dashboards.
  • Purpose-Driven Change: Frame digital transformation as an opportunity to become more human, not less.

How to Apply This in Practice

Start with a diagnostic: Where is human judgment undervalued in your current operating model? Then:

  1. Redesign roles with AI + Human pairings
  2. Invest in Humics through people development and learning journeys
  3. Update metrics to track human and AI impact
  4. Communicate the purpose: Align AI initiatives with a human-centered narrative

Conclusion

Pascal Bornet’s Irreplaceable offers more than optimism. It provides a strategic lens to ensure your organization thrives in the AI age—by amplifying what makes us human. For digital and transformation leaders, the message is clear: being more human is your greatest competitive advantage.

For more information you can check out: Become IRREPLACEABLE and unlock your true potential in the age of AI

If AI Is So Smart, Why Are We Struggling to Use It?

The human-side barriers to AI adoption — and how to overcome them

In my previous newsletter, “Where AI is Already Making a Significant Impact on Business Process Execution – 15 Areas Explained,” we explored how AI is streamlining tasks from claims processing to customer segmentation. But despite these breakthroughs, one question keeps surfacing:

If AI is delivering so much value… why are so many organizations struggling to actually adopt it?

The answer isn’t technical — it’s human.

In this edition, I explore ten people-related reasons AI initiatives stall or underdeliver. Each barrier is followed by a practical example and suggestions for how to overcome it.


1. Fear of Job Loss and Role Redundancy

Employees fear AI will replace them, leading to resistance or disengagement. This is especially prevalent in operational roles and shared services.

Example: An EY survey found 75% of US workers worry about AI replacing their jobs. In several large organizations, process experts quietly slow-roll automation to protect their roles.

How to mitigate: Communicate early and often. Frame AI as augmentation, not replacement. Highlight opportunities for upskilling and create pathways for digitally enabled roles.


2. Loss of Meaning and Professional Identity

Even if employees accept AI won’t replace them, they may fear it will erode the craftsmanship and meaning of their work.

Example: In legal and editorial teams, professionals report reluctance to use generative AI tools because they feel it “cheapens” their contribution or downplays their expertise.

How to mitigate: Position AI as a creative partner, not a substitute. Focus on use cases that enhance quality and amplify human strengths.


3. Low AI Literacy and Confidence

Many knowledge workers don’t feel equipped to understand or apply AI tools. This leads to underutilization or misuse.

Example: I’ve seen this firsthand: employees hesitate to rely on AI tools and default to old ways of working out of discomfort or lack of clarity.

How to mitigate: Launch AI literacy programs tailored to roles. Give people space to experiment, and build a shared language for AI in the organization.


4. Skills Gap: Applying AI to Domain Work

Beyond literacy, many employees lack the applied skills needed to integrate AI into their actual workflows. They may know what AI can do — but not how to adapt it to their role.

Example: In a global supply chain function, team members were aware of AI’s capabilities but struggled to translate models into usable scenarios like demand sensing or inventory risk prediction.

How to mitigate: Invest in practical upskilling: scenario-based training, role-specific accelerators, and coaching. Empower cross-functional “AI translators” to bridge tech and business.


5. Trust and Explainability Concerns

Employees and managers hesitate to rely on AI if they don’t understand “how” it reached its output — especially in decision-making contexts.

Example: A global logistics firm paused the rollout of AI-based demand forecasting after regional leaders questioned unexplained fluctuations in output.

How to mitigate: Prioritize transparency for critical use cases. Use interpretable models where possible, and combine AI output with human judgment.


6. Middle Management Resistance

Mid-level managers may perceive AI as a threat to their control or relevance. They can become blockers, slowing momentum.

Example: In a consumer goods company, digital leaders struggled to scale AI pilots because local managers didn’t support or prioritize the initiatives.

How to mitigate: Involve middle managers in co-creation. Tie their success metrics to AI-enabled outcomes and make them champions of transformation.


7. Change Fatigue and Initiative Overload

Teams already dealing with hybrid work, restructurings, or system rollouts may see AI as just another corporate initiative on top of their daily work.

Example: A pharmaceutical company with multiple digital programs saw frontline disengagement with AI pilots due to burnout and lack of clear value.

How to mitigate: Embed AI within existing transformation goals. Focus on a few high-impact use cases, and consistently communicate their benefit to teams.


8. Lack of Inclusion in Design and Rollout

When AI tools are developed in technical silos, end users often feel the solutions don’t reflect their workflows or needs.

Example: A banking chatbot failed in deployment because call center staff hadn’t been involved in the design phase — leading to confusion and distrust.

How to mitigate: Involve users early and often. Use participatory design approaches and validate tools in real working environments.


9. Ethical Concerns and Mistrust

Some employees worry AI may reinforce bias, lack fairness, or be used inappropriately — especially in sensitive areas like HR, compliance, or performance assessment.

Example: An AI-based resume screener was withdrawn by a tech firm after internal concerns about gender and ethnicity bias, even before public rollout.

How to mitigate: Establish clear ethical guidelines for AI. Be transparent about data usage, and create safe channels for feedback and concerns.


10. Peer Friction: “They Let the AI Do Their Job”

Even when AI is used effectively, friction can arise when colleagues feel others are “outsourcing their thinking” or bypassing effort by relying on AI tools.

Example: In a shared services team, tension grew when some employees drafted client reports with AI in minutes — while others insisted on traditional methods, feeling their contributions were undervalued.

How to mitigate: Create shared norms around responsible AI use. Recognize outcomes, not effort alone, and encourage knowledge sharing across teams.


Final Thought: It’s Not the Tech — It’s the Trust

Successful AI adoption isn’t about algorithms or infrastructure — it’s about mindsets, motivation, and meaning.

If we want people to embrace AI, we must:

  • Empower them with knowledge, skills, and confidence
  • Engage them as co-creators in the journey
  • Ensure they see personal and professional value in change

Human-centered adoption isn’t the soft side of transformation — it’s the hard edge of success. Let’s create our transformation plans with that in mind.

Where AI is Already Making a Significant Impact on Business Process Execution – 15 Areas Explained

After exploring a wide range of expert sources—and drawing from my own experience—I collaborated with AI tools (ChatGPT, Gemini, Claude) to create a concise overview of where AI is currently having the biggest impact on business processes. The aim: to bring together the most referenced success areas across functions and reflect on why these domains are leading the way. Recognizing these patterns can help us anticipate where AI is likely to deliver its next wave of value (see the reflection at the end of this article).

Below are 15 high-impact application areas where AI is already delivering significant value—each explained with clear benefits and real-world examples.


Marketing & Sales

1. Smarter Customer Service Automation
AI-powered chatbots and virtual agents are now central to handling customer inquiries. They can resolve a majority of tickets without human intervention, enabling 24/7 service while reducing costs and improving customer experience. Beyond just scripted replies, these agents learn from interactions to provide increasingly accurate and personalized support, allowing human teams to focus on complex or emotionally sensitive requests.
Example: Industry-wide AI adoption in contact centers, with 88% of firms reporting improved resolution times and reduced overhead (Statista, McKinsey).

2. Personalised Marketing at Scale
AI recommendation engines tailor content and product offerings based on individual browsing behavior, purchase history, and contextual data. This creates more relevant experiences for users and lifts conversion rates. Example: Amazon’s recommendation engine contributes over a third of its e-commerce revenue, proving the model’s commercial impact.

3. Sales Acceleration with AI
AI is transforming sales operations by taking over repetitive tasks like data entry, scheduling, and opportunity scoring. It also enables more informed decisions through predictive analytics, guiding sales teams to focus on leads with the highest conversion potential. Example: Salesforce research reveals that 83% of AI-enabled sales teams saw revenue growth versus 66% without AI. Besides this Salesforce example, I can also share from my own working experience at Brenntag that AI solutions guiding “next best actions” for salespeople drive significant impact.


Operations, Manufacturing & Supply Chain

4. Predictive Maintenance Efficiency
Traditional maintenance schedules often lead to unnecessary downtime or surprise equipment failures. AI flips the model by continuously analyzing sensor data to detect anomalies before breakdowns occur. This helps manufacturers schedule maintenance only when needed, extending equipment life and minimizing disruption.
Example: Mitsubishi and others use predictive maintenance tools that have led to up to 50% reduction in unplanned downtime.

5. AI-Powered Quality Control
In industries where product consistency is crucial, AI-enhanced computer vision inspects goods in real time for even the tiniest defects. These systems outperform the human eye in speed and accuracy, ensuring higher product quality and reducing waste from production errors.
Example: Automotive and electronics manufacturers now use AI to identify surface defects, alignment issues, or functional flaws instantly on the line.

6. Smarter Inventory Optimization
AI brings new precision to inventory planning by factoring in historical sales, seasonal trends, macroeconomic indicators, and real-time customer demand. This ensures businesses maintain optimal stock levels—avoiding both overstock and stockouts—while reducing working capital.
Example: Companies using AI in supply chain forecasting report inventory reductions of up to 35% (McKinsey).

7. Logistics Route Optimization
AI’s real-time route planning considers traffic, weather, delivery windows, and driver availability to suggest the most efficient routes. This leads to faster deliveries, fuel savings, and higher customer satisfaction. It also helps logistics providers scale without proportionally increasing operational complexity.
Example: DHL’s AI-driven routing platform reduces mileage per package and improves on-time delivery.


Finance, Accounting & Risk Management

8. Touchless Document Processing
Invoice entry and document reconciliation are among the most repetitive and error-prone tasks in finance. AI automates these workflows by reading, validating, and recording data with high accuracy, drastically reducing processing time and human error.
Example: Large enterprises report cutting invoice processing time by 80% and lowering cost per invoice by over 60%.
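
A minimal sketch of the extraction-and-exception pattern is shown below: pull the fields needed for posting from (OCR’d) invoice text and route anything that fails validation to a human. The patterns and sample text are illustrative; production systems use trained document-understanding models rather than hand-written rules.

  # Illustrative field extraction from invoice text with exception routing.
  import re

  invoice_text = """
  Invoice No: INV-2024-0133
  Date: 12-03-2024
  Supplier: Acme Industrial BV
  Total Amount: EUR 4,860.00
  """

  patterns = {
      "invoice_number": r"Invoice No:\s*(\S+)",
      "date":           r"Date:\s*([\d-]+)",
      "total":          r"Total Amount:\s*EUR\s*([\d.,]+)",
  }

  record, exceptions = {}, []
  for field, pattern in patterns.items():
      match = re.search(pattern, invoice_text)
      if match:
          record[field] = match.group(1)
      else:
          exceptions.append(field)          # route to a human for review

  print("Extracted:", record)
  print("Needs review:", exceptions or "nothing")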

9. Smarter Fraud Detection Systems
Modern fraud schemes evolve too rapidly for traditional rules-based systems to catch. AI models can continuously learn from new data and detect suspicious behaviors in real time, flagging anomalies that might otherwise go unnoticed.
Example: A global bank using AI to process checks in real time saw a 50% drop in fraud and saved over $20M annually.
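
A deliberately simple sketch of the idea, scoring each new transaction against the account’s own history and flagging strong outliers, is shown below; a bank’s actual models combine hundreds of features and retrain continuously, so the thresholds and data here are purely illustrative.

  # Illustrative real-time outlier flagging on payment amounts.
  import numpy as np

  account_history = np.array([120.0, 95.0, 110.0, 130.0, 105.0, 98.0, 115.0])

  def score_transaction(amount: float, history: np.ndarray, threshold: float = 3.0) -> bool:
      """Return True when the amount deviates strongly from the account's norm."""
      mean, std = history.mean(), history.std(ddof=1)
      z = abs(amount - mean) / std
      return z > threshold

  for amount in [125.0, 4500.0]:
      flagged = score_transaction(amount, account_history)
      print(f"EUR {amount:>8.2f} -> {'FLAG for review' if flagged else 'pass'}")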

10. Automating Financial Controls
AI supports internal audit and compliance by automatically flagging unusual transactions, reconciling financial data, and generating traceable logs for auditors. This not only boosts confidence in regulatory reporting but reduces the burden on finance teams.
Example: Deloitte finds AI-led controls improve accuracy, reduce audit costs, and streamline compliance workflows.


HR & Administration

11. Accelerated AI Recruitment
Hiring at scale is time-consuming and prone to bias. AI now supports end-to-end recruitment by screening CVs, analyzing video interviews, and predicting candidate-job fit based on past data. This enables faster, fairer hiring decisions and a better candidate experience.
Example: Unilever’s AI-powered hiring cut time-to-hire by 90%, reduced recruiter workload, and increased hiring diversity by 16%.

12. AI-Powered Admin Assistance
Whether it’s helping employees navigate HR policies or resetting passwords, AI bots respond instantly to internal requests. They resolve issues efficiently and learn from interactions to improve over time, reducing dependency on HR and IT service desks.
Example: AT&T’s HR bot answers thousands of employee questions per month, freeing up support teams and reducing internal wait times.


Software Development & IT Operations

13. AI Code Generation & Testing
AI-assisted development tools help engineers write code, suggest improvements, and run automated tests. This shortens development cycles, reduces bugs, and improves overall code quality. It also democratizes coding by assisting less-experienced developers with best practices.
Example: Enterprises report 20–30% faster feature delivery using AI-assisted development environments.

14. Intelligent IT Service Management
From incident triage to root cause analysis, AI is embedded in IT Service Management platforms to help resolve tech issues automatically. Predictive insights help prevent outages and minimize disruption across business-critical systems.
Example: Leading digitally enabled firms see average resolution time drop by 50%, with improved system reliability and user satisfaction.
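
As an illustration of automated triage, the sketch below classifies free-text tickets into resolver groups from a handful of historical examples; real ITSM platforms use far richer models and data, and the categories here are invented.

  # Illustrative incident triage: route free-text tickets to resolver groups.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.naive_bayes import MultinomialNB
  from sklearn.pipeline import make_pipeline

  history = [
      ("VPN keeps disconnecting when working from home", "network"),
      ("Cannot log in to SAP after password change", "access"),
      ("Laptop screen flickers and then goes black", "hardware"),
      ("Need access to the shared finance drive", "access"),
      ("Office Wi-Fi extremely slow on the 3rd floor", "network"),
      ("Docking station no longer charges the laptop", "hardware"),
  ]
  texts, labels = zip(*history)

  triage = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(texts, labels)

  new_ticket = "I changed my password and now SAP rejects my login"
  print(triage.predict([new_ticket])[0])    # expected resolver group, e.g. 'access'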

15. AI-Driven DevOps Optimization
By analyzing telemetry data and past deployments, AI optimizes build pipelines, monitors production systems, and predicts future resource needs. This ensures smoother rollouts and better infrastructure planning.
Example: Cloud-native companies use AI to reduce deployment failures and improve performance-to-cost ratios in real time.


Why AI Wins in These Areas

Despite the diversity of domains, these success stories share clear commonalities:

  • High process volume: Tasks that are frequent and repetitive gain the most from automation.
  • Structured and semi-structured data: AI performs best where input data is clean or can be normalized.
  • Clear Return On Investment (ROI) levers: The efficiency gains are measurable—reduced cycle time, lowered cost, or increased accuracy.
  • Repeatable workflows: Standardized or rules-based processes allow for predictable automation.

In essence, AI is most effective where complexity meets scale. As more enterprises embed AI into their operations, it is not just making processes faster—it’s reshaping them for quality, agility, and scale in a digital-first world.

Looking ahead, the next wave of AI impact is likely to emerge in areas where unstructured data and human judgment still dominate today. Examples include:

  • Legal and contract management, where AI is starting to support contract drafting, review, and risk flagging.
  • Strategy and decision support, where generative AI can synthesize market trends, customer feedback, and financial data to help leaders shape better strategies.
  • Sustainability tracking, where AI can analyze supply chain and operational data to monitor and reduce environmental impact.

As models become more capable and context-aware, these higher-value and less-structured domains may soon follow the path of automation and augmentation already seen in the 15 areas above.

When Good Intentions Fail – Why Effective Governance Is the Fix

While many organizations focus on technology, data, and capabilities, it’s the governance structures that align strategy with execution, enable informed decision-making, and ensure accountability. Without effective governance, even the most promising digital or AI initiatives risk becoming fragmented, misaligned, or unsustainable.

This article explores how governance typically evolves during transformation, drawing on a framework presented in GAIN by Michael Wade and Amit Joshi (2025). It then outlines best practices and tools for establishing effective governance at every level of transformation—portfolio, program, and project.

The Governance Journey: From Silo to Anchored Agility
Wade and Joshi identify four phases in the evolution of transformation governance:

  • Silo: In this early phase, digital and AI initiatives are isolated within departments. There is little coordination across the organization, leading to duplicated efforts and fragmented progress.
  • Chaos: In reaction to the problems of the siloed approach, companies start putting governance in place—but often not very effectively—leading to a proliferation of processes, tools, and platforms.
  • Bureaucracy: In response to chaos, organizations implement formal governance structures. While this reduces risk and increases control, it can also stifle innovation through over-regulation and sluggish decision-making.
  • Anchored Agility: The desired end-state. Governance becomes a strategic enabler—embedded yet flexible. It ensures alignment and control without constraining innovation. Decision-making is delegated appropriately, while strategic oversight is maintained.

Most organizations go through this journey; understanding where your organization currently stands helps determine which actions are needed and what to improve.

Effective Governance: Moving from Bureaucracy to Anchored Agility
Most successful digital and AI transformations mature into the Bureaucracy and Anchored Agility phases. These are the phases where effective governance must strike a balance between structure and adaptability.

Two proven approaches—PMI and Agile—offer best practices to draw from:

PMI Governance Best Practices

  • Well-defined roles and responsibilities across governance layers
  • Program and project charters to formalize scope, authority, and accountability
  • Clear stage gates, with decision points tied to strategic goals
  • Risk, issue, and change control mechanisms
  • Standard reporting templates to ensure transparency and comparability

PMI’s approach works best in large, complex transformations that require strong coordination, predictable delivery, and control of interdependencies.

Agile Governance Principles

  • Empowered teams with clear decision rights
  • Frequent review cadences (e.g., sprint reviews, retrospectives, and PI planning)
  • Lightweight governance bodies focused on alignment, not control
  • Transparent backlogs and prioritization frameworks
  • Adaptability built into the governance process itself

Agile governance is ideal for fast-evolving digital or AI initiatives where experimentation, speed, and responsiveness are critical.

Moving from Bureaucracy to Anchored Agility does not mean abandoning PMI in favor of Agile governance principles alone. Your portfolio will most likely contain a mix of initiatives that leverage one or both approaches.

Governance Across Levels: Portfolio, Program, Project
A layered governance model helps ensure alignment from strategy to execution:

Portfolio Level

  • Purpose: Strategic alignment, investment decisions, and value realization
  • Key Bodies: Executive Steering Committees, Digital/AI Portfolio Boards
  • Focus Areas: Prioritization, funding, overall risk and performance tracking

Program Level

  • Purpose: Coordinating multiple related projects and initiatives
  • Key Bodies: Program Boards or Program Management Offices
  • Focus Areas: Interdependencies, resource allocation, milestone tracking, issue resolution

Project Level

  • Purpose: Delivering tangible outcomes on time and on budget
  • Key Bodies: Project SteerCos, Agile team ceremonies
  • Focus Areas: Daily execution, scope management, risk and issue tracking, delivery cadence

Connecting the Layers: How Governance Interacts and Cascades
Effective governance requires more than clearly defined levels—it demands a dynamic flow of information and accountability across these layers. Strategic priorities must be translated into executable actions, while insights from execution must feed back into strategic oversight.

  • Top-down alignment: Portfolio governance sets strategic objectives, funding allocations, and key performance indicators. These are cascaded to programs and projects through charters, planning sessions, and KPIs.
  • Bottom-up reporting: Project teams surface risks, status updates, and learnings which are aggregated at the program level and escalated to the portfolio when needed.
  • Horizontal coordination: Programs often interact and depend on each other. Governance forums at program level and joint planning sessions across programs help manage these interdependencies.
  • Decision and escalation pathways: Clear routes for issue resolution and decision-making prevent bottlenecks and ensure agility across layers.

Organizations that master this governance flow operate with greater transparency, speed, and alignment.

Tools and Enablers for Good Governance
Governance is not just about structure—it’s also about enabling practices and tools that make oversight effective and efficient:

  • Terms of Reference (ToR): Define the mandate, decision rights, and meeting cadence for each governance body.
  • Collaboration & Transparency Tools: Use of platforms like Asana, Confluence, Jira, MS Teams for sharing updates, tracking decisions, and managing workflows.
  • Standardized Reporting: Leverage consistent templates for status, risks, and KPIs to create transparency and drive focus.
  • RACI Matrices: Clarify roles and decision-making authority across stakeholders, especially in cross-functional setups.
  • Governance Calendars: Synchronize key reviews, SteerCo meetings, and strategic checkpoints across layers.

Lessons from the Field
From my experience, common governance pitfalls include over-engineering (which stifles agility), under-resourcing (especially at the program level), and slow or unclear decision-making. Successful governance relies on:

  • Aligned executive sponsorship
  • Clear ownership at all levels
  • Integration of risk, value, and resource management
  • Enabling people to act

Conclusion
In digital and AI transformation, effective governance is not about control—it’s about enablement. It provides the structure and transparency needed to drive transformation, align stakeholders, and scale success. As your organization moves toward Anchored Agility, governance becomes less of a bottleneck and more of a backbone.

Where is your organization on the governance journey—and what would it take to reach the next phase?