How AI is Reshaping Human Work, Teams, and Organisational Design

The implications of AI are profound: when individuals can deliver team-level output with AI, organisations must rethink not just productivity, but the very design of work and teams. A recent Harvard Business School and Wharton field experiment titled The Cybernetic Teammate offers one of the clearest demonstrations of this shift. Conducted with 776 professionals at Procter & Gamble, the study compared individuals and teams working on real product-innovation challenges, both with and without access to generative AI.

The results were striking:

  • Individuals using AI performed as well as, or better than, human teams without AI.
  • Teams using AI performed best of all.
  • AI also balanced out disciplinary biases—commercial and technical professionals produced more integrated, higher-quality outputs when assisted by AI.

In short, AI amplified human capability at both the individual and collective level. It became a multiplier of creativity, insight, and balance—reshaping the traditional boundaries of teamwork and expertise.

The Evidence Is Converging

Other large-scale studies reinforce this picture. A Harvard–BCG experiment showed consultants using GPT-4 were 12% more productive, 25% faster, and delivered work rated 40% higher in quality for tasks within the model’s “competence frontier”.


How Work Will Be Done Differently

These findings signal a fundamental redesign in how work is organised. The dominant model—teams collaborating to produce output—is evolving toward individual-with-AI first, followed by team integration and validation.

A typical workflow may now look like this:

AI-assisted ideation → human synthesis → AI refinement → human decision.

Work becomes more iterative, asynchronous, and cognitively distributed. Human collaboration increasingly occurs through the medium of AI: teams co-create ideas, share prompt libraries, and build upon each other’s AI-generated drafts.

The BCG study introduces a useful distinction:

  • Inside the AI frontier: tasks within the model’s competence—ideation, synthesis, summarisation—where AI can take the lead.
  • Outside the AI frontier: tasks requiring novel reasoning, complex judgment, or proprietary context—where human expertise must anchor the process.

Future roles will be defined less by function and more by how individuals navigate that frontier: knowing when to rely on AI and when to override it. Skills like critical reasoning, verification, and synthesis will matter more than rote expertise.


Implications for Large Enterprises

For established organisations, the shift toward AI-augmented work changes the anatomy of structure, leadership, and learning.

1. Flatter, more empowered structures.
AI copilots widen managerial spans by automating coordination and reporting. However, they also increase the need for judgment-based oversight—requiring managers who coach, review, and integrate rather than micromanage.

2. Redefined middle-management roles.
The traditional coordinator role gives way to integrator and quality gatekeeper. Managers become stewards of method and culture rather than traffic controllers.

3. Governance at the “AI frontier.”
Leaders must define clear rules of engagement: what tasks can be automated, which require human review, and what data or models are approved. This “model–method–human” control system ensures both productivity and trust (see the sketch below).

4. A new learning agenda.
Reskilling moves from technical training to cognitive fluency: prompting, evaluating, interpreting, and combining AI insights with business judgment. The AI-literate professional becomes the new organisational backbone.

5. Quality and performance metrics evolve.
Volume and throughput give way to quality, cycle time, rework reduction, and bias detection—metrics aligned with the new blend of human and machine contribution.

In short, AI doesn’t remove management—it redefines it around sense-making, coaching, and cultural cohesion.
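To make point 3 tangible: such rules of engagement can be encoded so they are checkable wherever copilots or agents pick up work. Below is a minimal sketch; the task names, approved-model lists, and policy fields are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of "model-method-human" rules of engagement. Task names,
# approved-model lists, and policy fields are illustrative assumptions.

RULES_OF_ENGAGEMENT = {
    "summarise_meeting_notes": {
        "automation": "full",            # AI may complete the task end-to-end
        "approved_models": ["model-a"],  # hypothetical approved-model list
        "human_review": False,
    },
    "draft_client_proposal": {
        "automation": "assisted",        # AI drafts, a human must review
        "approved_models": ["model-a", "model-b"],
        "human_review": True,
    },
    "final_pricing_decision": {
        "automation": "none",            # outside the frontier: human-owned
        "approved_models": [],
        "human_review": True,
    },
}

def is_allowed(task: str, model: str) -> bool:
    """Check whether a model may be used for a task under the policy."""
    rule = RULES_OF_ENGAGEMENT.get(task)
    return bool(rule) and rule["automation"] != "none" and model in rule["approved_models"]

print(is_allowed("draft_client_proposal", "model-a"))   # True
print(is_allowed("final_pricing_decision", "model-a"))  # False
```

Encoded this way, the policy stops being a slide and becomes something tools and audits can enforce.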


Implications for Startups and Scale-Ups

While enterprises grapple with governance and reskilling, startups are already operating in an AI-native way.

Evidence from recent natural experiments shows that AI-enabled startups raise funding faster and with leaner teams. The cost of experimentation drops, enabling more rapid iteration but also more intense competition.

The typical AI-native startup now runs with a small human core and an AI-agent ecosystem handling customer support, QA, and documentation. The operating model is flatter, more fluid, and relentlessly data-driven.

Yet the advantage is not automatic. As entry barriers fall, differentiation depends on execution, brand, and customer intimacy. Startups that harness AI for learning loops—testing, improving, and scaling through real-time feedback—will dominate the next wave of digital industries.


Leadership Imperatives – Building AI-Enabled Work Systems

For leaders, the challenge is no longer whether to use AI, but how to design work and culture around it. Five imperatives stand out:

  1. Redesign workflows, not just add tools. Map where AI fits within existing processes and where human oversight is non-negotiable.
  2. Build the complements. Create shared prompt libraries, custom GPTs, structured review protocols, and access to verified data.
  3. Run controlled pilots. Test AI augmentation in defined workstreams, measure speed, quality, and engagement, and scale what works.
  4. Empower and educate. Treat AI literacy as a strategic skill—every employee a prompt engineer, every manager a sense-maker.
  5. Lead the culture shift. Encourage experimentation, transparency, and open dialogue about human-machine collaboration.

Closing Thought

AI will not replace humans or teams. But it will transform how humans and teams create value together.

The future belongs to organisations that treat AI not as an external technology, but as an integral part of their work design and learning system. The next generation of high-performing enterprises—large and small—will be those that master this new choreography between human judgment and machine capability.

AI won’t replace teams—but teams that know how to work with AI will outperform those that don’t.

More on this in one of my next newsletters.

The AI Strategy Imperative: Why Act Now

Two weeks ago, I completed IMD’s AI Strategy & Implementation program. It made the “act now” imperative unmistakable. In this newsletter I share the overarching insights I took away; in upcoming issues I’ll go deeper into specific topics and tools we used.


AI is no longer a tooling choice. It’s a shift in distribution, decision-making, and work design that will create new winners and losers. Leaders who move now—anchoring execution in clear problems, strong data foundations, and human–AI teaming—will compound advantage while others get trapped in pilots and platform dependency.


1) Why act now: the competitive reality

Distribution is changing. AI assistants and agentic workflows increasingly mediate buying journeys. If your brand isn’t represented in answers and automations, you forfeit visibility, traffic, and margin. This is a channel economics shift: AI determines which brands are surfaced—and which are invisible.

Platforms are consolidating power. Hyperscalers are embedding AI across their offerings. You’ll benefit from their acceleration, but your defensibility won’t come from platforms your competitors can also buy. The durable moat is your proprietary data, decision logic, and learning loops you control—not a longer vendor list.

Agents are getting real. Think of agents as “an algorithm that applies algorithms.” They decompose work into steps, call tools/APIs, and complete tasks with minimal supervision. Agent architectures will reshape processes, controls, and talent—pushing leaders to design for human–AI teams rather than bolt‑on copilots.


2) The paradox: move fast and build right

The cost of waiting. Competitors pairing people with AI deliver faster at lower cost and start absorbing activities you still outsource. As internal production costs fall faster than coordination costs, vertical integration becomes attractive—accelerated by automation. Late movers face margin pressure and share erosion.

The risk of rushing. Many efforts stall because they “build castles on quicksand”—shiny proofs‑of‑concept on weak data and process foundations. Value doesn’t materialize, trust erodes, and budgets freeze. Urgency must be paired with disciplined follow-up so speed creates compounding learning.


3) A durable path to value: the 5‑Box Implementation Framework

A simple path from strategy deck to shipped value:

  1. Problem. Define a single business problem tied to P&L or experience outcomes. Write the metric up front; make the use case narrow enough to ship quickly.
  2. Data. Map sources, quality, access, and ownership. Decide what you must own versus can borrow; invest early in clean, governed data because it is the most sustainable differentiator.
  3. Tools. Choose the lightest viable model/agent and the minimum integration needed to achieve the outcome; keep it simple.
  4. People. Form cross‑functional teams (domain expertise + data + engineering + change) with one accountable owner. Team design—not individual heroics—drives performance.
  5. Feedback loops. Instrument production to compare predicted vs. actual outcomes. The delta gives valuable insights and becomes new training data.
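To illustrate point 5: a feedback loop can start as simply as logging each prediction next to the realised outcome and reviewing the delta. A minimal sketch, assuming hypothetical field names and a 10% review threshold:

```python
import csv
from datetime import datetime, timezone

# Minimal sketch of a production feedback loop: log predicted vs. actual
# outcomes so the delta can be reviewed and reused as training data.
# File name, fields, and the 10% alert threshold are illustrative assumptions.

LOG_FILE = "prediction_log.csv"

def log_outcome(case_id: str, predicted: float, actual: float) -> float:
    """Record one predicted/actual pair and return the delta."""
    delta = actual - predicted
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), case_id, predicted, actual, delta]
        )
    return delta

# Example: a demand forecast of 120 units against 104 actually sold.
delta = log_outcome("order-4711", predicted=120.0, actual=104.0)
if abs(delta) / 120.0 > 0.10:  # flag deviations above 10% for human review
    print(f"Review order-4711: delta {delta:+.1f} exceeds threshold")
```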

Your defensive moat is data + people + decisions + learning loops, not your vendor list.


4) Moving the Human Workforce to More Complex Tasks

While AI absorbs simple and complicated work (routine tasks, prediction, pattern recognition), the human edge shifts decisively to complex and chaotic problems—where cause and effect are only clear in retrospect or not at all. This economic reality forces immediate investment in people as internal work is increasingly handled by AI–human teams.

The immediate talent pivot. Leaders must signal—and codify—new “complexity competencies”: adaptive problem‑solving, systems thinking, comfort with ambiguity, and AI product‑ownership (defining use cases, data needs, acceptance criteria, and evaluation).

Organizational design for learning.

  • Security: Build psychological safety so smart experiments are rewarded and failures fuel learning, not blame.
  • Convenience: Make adoption of new AI tools easy—frictionless access, clear guidance, and default enablement.
  • Process: A weak human with a tool and a better process will outperform a strong human with a tool and a worse process. Define roles, handoffs, and measurement so teams learn in the loop.

5) Where ROI shows up first

There is much discussion about where AI really delivers benefits. Four areas show consistent reports of measurable gains:

Content. Marketing and knowledge operations see immediate throughput gains and more consistent quality. Treat this as a production system: govern sources, version prompts/flows, and measure impact.

Code. Assistance, testing, and remediation compress cycle time and reduce defects. Success depends on clear guardrails, reproducible evaluation, and tight feedback from production incidents into your patterns.

Customer. Service and sales enablement benefit from faster resolution and personalization at scale. Start with narrow intents, then expand coverage as accuracy and routing improve.

Creative. Design, research, and planning benefit from rapid exploration and option value. Use agentic research assistants with human review to widen the solution space before you converge.


6) Organize to scale without chaos

Govern the reality, not the slide. Shadow AI already exists. Enable it safely with approved toolkits, lightweight guardrails, and clear data rules—so exploration happens inside the tent, not outside it.

CoE vs. federation. Avoid the “cost‑center CoE” trap. Stand up a small enablement core (standards, evaluation, patterns), but push delivery into business‑owned pods that share libraries and reviews. This balances consistency with throughput.

Human + AI teams. Process design beats heroics. Make handoffs explicit, instrument outcomes, and build psychological safety so teams learn in the loop. A weak human with a machine and a better process will outperform a strong human with a machine and a worse process.


What this means for leaders

  • Move talent to handle complexity. Codify new competencies (adaptive problem‑solving, systems thinking, comfort with ambiguity, AI product‑ownership) and design organizational systems that accelerate learning (security, convenience, process).
  • Your moat is data + people + decisions + learning loops. Platforms accelerate you, but they’re available to everyone. Proprietary, well‑governed data feeding instrumented processes is what compounds.
  • Ship value early; strengthen foundations as you scale. Start where ROI is proven (content, code, customer, creative), then use that momentum to fund data quality and governance.
  • Design for agents and teams now. Architect processes assuming agents will do steps of work and humans will supervise, escalate, and improve the system. That’s how you create repeatable outcomes.

Lifelong Learning in the Age of AI – My Playbook

In September 2025, I received two diplomas: IMD’s AI Strategy & Implementation and Nyenrode University’s Corporate Governance for Supervisory Boards. I am proud of both—more importantly, they cap off a period in which I have deliberately rebuilt how I learn.

With AI accelerating change and putting top-tier knowledge at everyone’s fingertips, the edge goes to leaders who learn—and apply—faster than the market moves. In this issue I am not writing theory; I am sharing my learning journey of the past six months—what I did, what worked, and the routine I will keep using. If you are a leader, I hope this helps you design a learning system that fits a busy executive life.


My Learning System – 3 pillars

1) Structured learning

This helped me to gain the required depth:

  • IMD — AI Strategy & Implementation. I connected strategy to execution: where AI creates value across the business, and how to move from pilots to scaled outcomes. In upcoming newsletters, I will share insights on specific topics we went deep on in this course.
  • Nyenrode — Corporate Governance for Supervisory Boards. I deepened my view on board-level oversight—roles and duties, risk/compliance, performance monitoring, and strategic oversight. I authored my final paper on how to close the digital gap in supervisory boards (see also my earlier article).
  • Google/Kaggle’s 5-day Generative AI Intensive. Hands-on labs demystified how large language models work: what is under the hood, why prompt quality matters, where workflows can break, and how to evaluate outputs against business goals. It gave me an understanding of how to improve my use of these models.

2) Curated sources

This extended the breadth of my understanding of the use of AI.

2a. Books

Below I give a few examples; more book summaries and reviews can be found on my website: www.bestofdigitaltransformation.com/digital-ai-insights.

  • Co-Intelligence: a pragmatic mindset for working with AI—experiment, reflect, iterate.
  • Human + Machine: how to redesign processes around human–AI teaming rather than bolt AI onto old workflows.
  • The AI-Savvy Leader: what executives need to know to steer outcomes without needing to code.

2b. Research & articles
I built a personal information base with research from: HBR, MIT, IMD, Gartner, plus selected pieces from McKinsey, BCG, Strategy&, Deloitte, and EY. This keeps me grounded in capability shifts, operating-model implications, and the evolving landscape.

2c. Podcasts & newsletters
Two that stuck: AI Daily Brief and Everyday AI. Short, practical audio overviews with companion newsletters so I can find and revisit sources. They give me a quick daily pulse without drowning in feeds.

3) AI as my tutor

I am using AI to get personalised learning support.

3a. Explain concepts. I use AI to clarify ideas, contrast approaches, and test solutions using examples from my context.
3b. Create learning plans. I ask for step-by-step learning journeys with milestones and practice tasks tailored to current projects.
3c. Drive my understanding. I use different models to create learning content, provide assignments, and quiz me on my understanding.


How my journey unfolded

Here is how it played out.

1) Started experimenting with ChatGPT.
I was not an early adopter; I joined when GPT-4 was already strong. Like many, I did not fully trust it at first. I began with simple questions and asked the model to show how it interpreted my prompts. That built confidence without creating risks/frustration.

2) Built foundations with books.
I read books like Co-Intelligence, Human + Machine, and The AI-Savvy Leader. These created a common understanding of where AI helps (and does not), how to pair humans and machines, and how to organise for impact. For all the books I created reviews, to anchor my learnings and share them on my website.

3) Added research and articles.
I set up a repository with research across HBR/MIT/IMD/Gartner and selected consulting research. This kept me anchored in evidence and applications, and helped me track the operational implications for strategy, data, and governance.

4) Tried additional models (Gemini and Claude).
Rather than picking a “winner,” I used them side by side on real tasks. The value was in contrast—seeing how different models frame the same question, then improving the final answer by combining perspectives. Letting models critique each other surfaced blind spots.

5) Went deep with Google + Kaggle.
The 5-day intensive course clarified what is under the hood: tokens/vectors, why prompts behave the way they do, where workflows tend to break, and how to evaluate outputs beyond “sounds plausible.” The exercises translated directly into better prompt design and started my understanding of how agents work.

6) Used NotebookLM for focused learning.
For my Nyenrode paper, I uploaded the key articles and interacted only with that corpus. NotebookLM generated grounded summaries, surfaced insights I might have missed, and reduced the risk of invented citations (by sticking to the uploaded resources). The auto-generated “podcast” is one of the coolest features I experienced and really helps in learning the content.

7) Added daily podcasts/newsletters to stay current.
The news volume on AI is impossible to track end-to-end. AI Daily Brief and Everyday AI give me a quick scan each morning and links worth saving for later deep dives. This makes the difference between staying aware and constantly feeling behind.

8) Learned new tools and patterns at IMD.

  • DeepSeek helped me debug complex requests by showing how the model with reasoning interpreted my prompt—a fantastic way to unravel complex problems.
  • Agentic models like Manus showed the next step: chaining actions and tools to complete tasks end-to-end.
  • CustomGPTs (within today’s LLMs) let me encode my context, tone, and recurring workflows, boosting consistency and speed across repeated tasks.

Bringing it together with a realistic cadence

Leaders do not need another to-do list; they need a routine that works. Here is the rhythm I am using now:

Daily

  • Skim one high-signal newsletter or listen to a podcast.
  • Capture questions to explore later.
  • Learn by doing with the various tools.

Weekly

  • Learn: read one or more papers/articles on various AI-related topics.
  • Apply: use one idea on a live problem; interact with AI to go deeper.
  • Share: create my weekly newsletter, based on my learnings.

Monthly

  • Pick one learning topic and read a number of primary sources, not just summaries.
  • Draft an experiment with goal, scope, success metric, risks, and data needs, using AI to pressure-test assumptions.
  • Review with thought leaders/colleagues for challenge and alignment.

Quarterly

  • Read at least one book that expands your mental models.
  • Create a summary for my network. Teaching others cements my own understanding.

(Semi-)Annually

  • Add a structured program or certificate to go deep and to benefit from peer debate.

Closing

The AI era compresses the shelf life of knowledge. Waiting for a single course is no longer enough. What works is a learning system: structured learning for depth, curated sources for breadth, and AI as your tutor for speed. That has been my last six months, and it is a routine I will continue.

Consultancy, Rewired: AI’s Impact on Consultancy Firms and What Their Clients Should Expect

The bottom line: consulting is not going away. It is changing—fast. AI removes a lot of manual work and shifts the focus to speed, reusable tools, and results that can be measured. This has consequences for how firms are organised and how clients buy and use consulting.


What HBR says

The main message: AI is reshaping the structure of consulting firms. Tasks that used to occupy many junior people—research, analysis, and first-pass modelling—are now largely automated. Teams get smaller and more focused. Think of a move from a wide pyramid to a slimmer column.

New human roles matter more: people who frame the problem, translate AI insights into decisions, and work with executives to make change happen. HBR also points to a new wave of AI-native boutiques. These firms start lean, build reusable assets, and aim for outcomes rather than volume of slides.

What The Economist says

The emphasis here is on client expectations and firm economics. Clients want proof of impact, not page counts. If AI can automate a lot of the production work, large firms must show where they still create unique value. That means clearer strategies, simpler delivery models, and pricing that links fees to outcomes.

The coverage also suggests this is a structural shift, not a short-term cycle. Big brands will need to combine their access and experience with technology, reusable assets, and strong governance to stay ahead.


What AI can do in consulting — now vs. next (practical view)

Now

  • Discovery & synthesis. AI can sweep through filings, research, transcripts, and internal knowledge bases to cluster themes, extract evidence with citations, and surface red flags. This compresses the preparation phase so teams spend their time framing the problem and its implications.
  • First-pass quantification & modelling. It produces draft market models and sensitivity analyses that consultants then stress-test. The benefit isn’t perfect numbers; it’s cycle-time—from question to a defendable starting point—in hours, not days.
  • Deliverables at speed. From storylines to slide drafts and exhibits, AI enforces structure and house style, handles versioning, and catches inconsistencies. Human effort shifts to message clarity, executive alignment, and implications for decision makers.
  • Program operations & governance. Agents can maintain risk and issue logs, summarize meetings, chase actions, and prepare steering packs. Leaders can use meeting time for choices, not status updates.
  • Knowledge retrieval & reuse. Firm copilots bring up relevant cases, benchmarks, and experts. Reuse becomes normal, improving speed and consistency across engagements.

Next (12–24 months)

  • Agentic due diligence. Multi-agent pipelines will triage vast data sets (news, filings, call transcripts), propose claims with evidence trails, and flag anomalies for partner review—compressing weeks to days while keeping human judgment in the loop.
  • Scenario studios and digital twins. Reusable models (pricing, supply, workforce) will let executives explore “what-if” choices live, improving decision speed and buy-in.
  • Operate / managed AI. Advisory will bundle with run-time AI services (build-run-transfer), priced on SLAs or outcome measures, linking fees to performance after go-live.
  • Scaled change support. Chat-based enablement and role-tailored nudges will help people adopt new behaviors at scale; consultants curate and calibrate content and finetune interventions instead of running endless classroom sessions.

Reality check: enterprise data quality, integration, and model-risk constraints keep humans firmly in the loop. The best designs make this explicit with approvals, audit trails, and guardrails.


Five industry scenarios (2025–2030)

  1. AI-Accelerated Classic. The big firms keep CXO access but run leaner teams; economics rely on IP-based assets, and pricing shifts from hours to outcomes.
  2. Hourglass Market. Strong positions at the top (large integrators) and at the bottom (specialist boutiques). The middle gets squeezed as clients self-serve standard analysis.
  3. Productised & Operate. Advice comes with data, models, and managed services. Contracts include service levels and shared-savings, tying value to real-world results.
  4. Client-First Platforms. Companies build internal AI studios and bring in targeted experts. Firms must plug into client platforms and compete on speed, trust, and distinctive assets.
  5. AI-Native Agencies Rise. New entrants born with automation-first workflows and thin layers scale quickly—resetting expectations of speed, price-performance, and what a “team” looks like.

What clients should ask for (and firms should offer)

  • Ask for assets, not documents. Require reusable data, models, and playbooks that you keep using after the engagement—and specify this in the SOW.
  • Insist on transparency. Demand visibility into data sources, prompt chains, evaluation methods, and guardrails so you can trust, govern, and scale what’s built.
  • Design for capability transfer. Make enablement, documentation, and handover part of the scope with clear acceptance criteria.
  • Outcome-linked pricing where possible. Start with a pilot and clear success metrics; scale with contracts tied to results or service levels.

Close

AI is changing both the shape of consulting firms and the way organisations use them. Smaller teams, reusable assets, and outcome focus will define the winners.

From Org Charts to Work Charts – Designing for Hybrid Human–Agent Organisations

The org chart is no longer the blueprint for how value gets created. As Microsoft’s Asha Sharma puts it, “the org chart needs to become the work chart.” When AI agents begin to own real slices of execution—preparing customer interactions, triaging tickets, validating invoices—structure must follow the flow of work, not the hierarchy of titles. This newsletter lays out what that means for leaders and how to move, decisively, from boxes to flows.


Why this is relevant now

Agents are leaving the lab. The conversation has shifted from “pilot a chatbot” to “re-architect how we deliver outcomes.” Boards and executive teams are pushing beyond experiments toward embedded agents in sales, service, finance, and supply chain. This is not a tooling implementation—it’s an operating-model change.

Hierarchy is flattening. When routine coordination and status reporting are automated, you need fewer layers to move information and make decisions. Roles compress; accountabilities become clearer; cycle times shrink. The management burden doesn’t disappear—it changes. Leaders spend less time collecting updates and more time setting direction, coaching, and owning outcomes.

Enterprises can scale the pattern. AI-native “tiny teams” design around flows—the sequence of steps that create value—rather than traditional functions. Large organizations shouldn’t copy their size; they should copy this unit of design. Work Charts make each flow explicit, assign human and agent owners, and let you govern and scale it across the enterprise.


What is a Work Chart?

A Work Chart is a living map of how value is produced—linking outcomes → end-to-end flows → tasks → handoffs—and explicitly assigning human owners and agent operators at each step. Where an org chart shows who reports to whom, a Work Chart shows:

  • Where the work happens – the flow and its stages
  • Who is accountable – named human owners of record
  • What is automated – agents with charters and boundaries
  • Which systems/data/policies apply – the plumbing and guardrails
  • How performance is measured – SLAs, exceptions, error/rework, latency

A Work Chart is your work graph made explicit—connecting goals, people, and permissions so agents can act with context and leaders can govern outcomes.
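To make this concrete, here is a minimal sketch of a Work Chart as a data structure; the flow, owners, agents, and SLA values are illustrative assumptions, and the charter fields mirror those described under the implementation tips below.

```python
from dataclasses import dataclass, field

# Minimal sketch of a Work Chart as a data structure. The flow, owners,
# agents, and SLA values are illustrative assumptions, not a standard schema.

@dataclass
class AgentCharter:
    name: str
    purpose: str
    boundaries: list[str]        # what the agent may and may not do
    owner: str                   # named human owner of record
    escalation_path: str

@dataclass
class Stage:
    task: str
    human_owner: str
    agent: AgentCharter | None = None   # None = human-only step
    sla_hours: float = 24.0

@dataclass
class Flow:
    outcome: str
    stages: list[Stage] = field(default_factory=list)

# Example flow: validating supplier invoices (hypothetical).
invoice_flow = Flow(
    outcome="Supplier invoices validated and paid on time",
    stages=[
        Stage(
            task="Match invoice lines to purchase order",
            human_owner="AP team lead",
            agent=AgentCharter(
                name="invoice-validator",
                purpose="Flag mismatches between invoice and PO",
                boundaries=["May not approve payments", "May not contact suppliers"],
                owner="AP team lead",
                escalation_path="Finance controller",
            ),
            sla_hours=4.0,
        ),
        Stage(task="Approve flagged exceptions", human_owner="Finance controller"),
    ],
)

print(f"{invoice_flow.outcome}: {len(invoice_flow.stages)} stages")
```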


Transformation at every level

Board / Executive Committee
Set policy for non-human resources (NHRs) just as you do for capital and people. Define decision rights, guardrails, and budgets (compute/tokens). Require blended KPIs—speed, cost, risk, quality—reported for human–agent flows, not just departments. Make Work Charts a standing artifact in performance reviews.

Enterprise / Portfolio
Shift from function-first projects to capability platforms (retrieval, orchestration, evaluation, observability) that any BU can consume. Keep a registry of approved agents and a flow inventory so portfolio decisions always show which flows, agents, and data they affect. Treat major flow changes like product releases—versioned, reversible, and measured.

Business Units / Functions
Turn priority processes into agent-backed services with clear SLAs and a named human owner. Publish inputs/outputs, boundaries (what the agent may and may not do), and escalation paths. You are not “installing AI”; you’re standing up services that can be governed and improved.

Teams
Maintain an Agent Roster (purpose, inputs, outputs, boundaries, logs). Fold Work Chart updates into existing rituals (standups, QBRs). Managers spend less time on status and more on coaching, exception handling, and continuous improvement of the flow.

Individuals
Define personal work charts for each role (the 5–7 recurring flows they own) and the agents they orchestrate. Expect role drift toward judgment, relationships, and stewardship of AI outcomes.


Design principles – what “good” looks like

  1. Outcome-first. Start from customer journeys and Objectives and Key Results (OKRs); redesign flows to meet them.
  2. Agents as first-class actors. Every agent has a charter, a named owner, explicit boundaries, and observability from day one.
  3. Graph your work. Connect people, permissions, and policies so agents operate with context and least-privilege access.
  4. Version the flow. Treat flow changes like product releases—documented, tested, reversible, and measured.
  5. Measure continuously. Track time-to-outcome, error/rework, exception rates, and SLA adherence—reviewed where leadership already looks (business reviews, portfolio forums).

Implementation tips

1) Draw the Work Chart for mission-critical journeys
Pick one customer journey, one financial core process, and one internal productivity flow. Map outcome → stages → tasks → handoffs. Mark where agents operate and where humans remain owners of record. This becomes the executive “single source” for how the work actually gets done.

2) Create a Work Chart Registry
Create a lightweight, searchable registry that lists every flow, human owner, agent(s), SLA, source, and data/permission scope. Keep it in the systems people already use (e.g., your collaboration hub) so it becomes a living reference, not a slide deck.

3) Codify the Agent Charters
For each agent on the Work Chart, publish a one-pager: Name, Purpose, Inputs, Outputs, Boundaries, Owner, Escalation Path, Log Location. Version control these alongside the Work Chart so changes are transparent and auditable.

4) Measure where the work happens
Instrument every node with flow health metrics—latency, error rate, rework, exception volume. Surface them in the tools leaders already use (BI dashboards, exec scorecards). The goal is to manage by flow performance, not anecdotes.
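As a minimal sketch, the four flow health metrics named here can be computed from a simple event log; the log format and sample records are illustrative assumptions.

```python
# Sketch: computing the flow health metrics named above (latency, error
# rate, rework, exception volume) from a simple event log. The log format
# and sample records are illustrative assumptions.

events = [  # one record per completed task at a flow node
    {"node": "validate", "minutes": 12, "error": False, "rework": False, "exception": False},
    {"node": "validate", "minutes": 45, "error": True,  "rework": True,  "exception": False},
    {"node": "validate", "minutes": 15, "error": False, "rework": False, "exception": True},
]

n = len(events)
print(f"avg latency : {sum(e['minutes'] for e in events) / n:.1f} min")
print(f"error rate  : {sum(e['error'] for e in events) / n:.0%}")
print(f"rework rate : {sum(e['rework'] for e in events) / n:.0%}")
print(f"exceptions  : {sum(e['exception'] for e in events)}")
```

The same aggregates can feed the BI dashboards and exec scorecards leaders already use.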

5) Shift budgeting from headcount to flows
Attach compute/SLA budgets to the flows in your Work Chart. Review them at portfolio cadence. Fund increases when there’s demonstrable improvement in speed, quality, or risk. This aligns investment with value creation rather than with org boxes.

6) Communicate the new social contract
Use the Work Chart in town halls and leader roundtables to explain what’s changing, why it matters, and how roles evolve. Show before/after charts for one flow to make the change tangible. Invite feedback; capture exceptions; iterate.


Stop reorganizing boxes – start redesigning flows. Mandate that each executive publishes the first Work Chart for one mission-critical journey—complete with agent charters, SLAs, measurements, and named owners of record. Review it with the same rigor you apply to budget and risk. Organizations that do this won’t just “adopt AI”; they’ll build a living structure that mirrors how value is created—and compounds it.

Closing the Digital Competency Gap in the Boardroom

This article is based on a thesis I have written for the Supervisory Board program (NCC 73) at Nyenrode University, which I will complete this month. I set out to answer a practical question: how can supervisory boards close the digital competency gap so their oversight of digitalization and AI is effective and value-creating?

The research combined literature, practitioner insights, and my own experience leading large-scale digital transformations. The signal is clear: technology, data, and AI are no longer specialist topics—they shape strategy, execution, and resilience. Boards that upgrade their competence change the quality of oversight, the shape of investment, and ultimately the future of the company.


1) Business model transformation

Digital doesn’t just add channels; it rewrites how value is created and captured. The board’s role is to probe how data, platforms, and AI may alter customer problem–solution fit, value generation logic, and ecosystem position over the next 3–5–10 years. Ask management to make the trade-offs explicit: which parts of the current model should we defend, which should we cannibalize, and which new options (platform plays, data partnerships, embedded services) warrant small “option bets” now?

What to look out for: strategies that talk about “going digital” without quantifying how revenue mix, margins, or cash generation will change. Beware dependency risks (platforms, app stores, hyperscalers) that shift bargaining power over time. Leverage scenario planning and clear leading indicators—so the board can see whether the plan is working early enough to pivot or double down.

2) Operational digital transformation

The strongest programs are anchored in outcomes, not output. Boards should ask to see business results expressed in P&L and balance-sheet terms (growth, cost, capital turns), not just “go-live” milestones. Require a credible pathway from pilot to scale: gated tranches that release funding when adoption, value, and risk thresholds are met; and clear “stop/reshape” criteria to avoid sunk costs.

What to look out for: “watermelon” reporting—status that stays green on the outside while progress and adoption lag underneath; vendor-led roadmaps that don’t fit the architecture; and under-resourced change management. As a rule of thumb, ensure 10–15% of major transformation budgets are reserved for change, communications, and training. Ask who owns adoption metrics and how you’ll know—early—that teams are using what’s been built.

3) Organization & culture

Technology succeeds at the speed of behaviour change. The board should examine whether leadership is telling a coherent story (why/what/how/who) and whether middle management has the capacity to translate it into local action. Probe how AI will reshape roles and capabilities, and whether the company has a reskilling plan that is targeted, measurable, and linked to workforce planning.

What to look out for: assuming tools will “sell themselves,” starving change budgets, and running transformations in a shadow lane disconnected from the real business. Look for feedback loops—engagement diagnostics, learning dashboards, peer-to-peer communities—that surface resistance early and help leadership course-correct before adoption stalls.

4) Technology investments

Oversight improves dramatically when the board insists on a North Star architecture that makes trade-offs visible: which data foundations come first, how integration will work, and how security/privacy are designed in. Investments should be staged, with each tranche linked to outcome evidence and risk mitigation, and with conscious decisions about vendor lock-in and exit options.

What to look out for: shiny-tool syndrome, financial engineering that ignores lifetime Total Cost of Ownership (TCO), and weak vendor due diligence. Ask for risk analysis (e.g., cloud and vendor exposure) and continuity plans that are actually tested. Expect architecture reviews by independent experts on mission-critical choices, so the board gets a clear view beyond vendor narratives.

5) Security & compliance

Cyber, privacy, and emerging AI regulation must be treated as enterprise-level risks with clear ownership, KPIs, and tested recovery playbooks. Boards should expect regular exercises and evidence that GDPR, NIS2, and AI governance are embedded in product and process design—not bolted on at the end.

What to look out for: “tick-the-box” compliance that produces documents rather than resilience, infrequent or purely theoretical drills, and untested backups. Probe third-party and supply-chain exposure as seriously as internal controls. The standard is not perfection; it’s informed preparedness, repeated practice, and learning from near-misses.


Seven structural moves that work

  1. Make digital explicit in board profiles. Use a competency matrix that distinguishes business-model, data/AI, technology, and cyber/compliance fluency. Recruit to close gaps or appoint external advisors—don’t hide digital under a generic “technology” label.
  2. Run periodic board maturity assessments. Combine self-assessment with executive feedback to identify capability gaps. Tie development plans to the board calendar (e.g., pre-strategy masterclasses, deep-dives before major investments).
  3. Hard-wire digital/AI into the agenda. Move from ad-hoc updates to a cadence: strategy and scenario sessions, risk and resilience reviews, and portfolio health checks. Make room for bad news early so issues surface before they become expensive.
  4. Adopt a board-level Digital & IT Cockpit. Track six things concisely: run-the-business efficiency, risk posture, innovation enablement, strategy alignment, value creation, and future-proofing (change control, talent, and architecture). Keep trends visible across quarters.
  5. Establish a Digital | AI Committee (where applicable). This complements—not replaces—the Audit Committee. Mandate: opportunities and threats, ethics and risk, investment discipline, and capability building. The committee prepares the ground; the full board takes the decisions.
  6. Use independent expertise by default on critical choices. Commission targeted reviews (architecture, vendor due diligence, cyber resilience) to challenge internal narratives. Independence is not a luxury; it’s how you avoid groupthink and discover blind spots in time.
  7. Onboard and upskill continuously. Provide a digital/AI onboarding for new members; schedule briefings with external experts; and use site visits to see real adoption. Treat learning like risk management: systematic, scheduled, and recorded.

Do you need a separate “Digital Board”?

My reflection: competence helps, but time and attention are the true scarcities. In digitally intensive businesses—where data platforms, AI-enabled operations, and cyber exposure shape enterprise value and are moving fast—a separate advisory or oversight body can deepen challenge and accelerate learning. It creates space for structured debate on architecture, ecosystems, and regulation without crowding out other board duties.

This isn’t a universal prescription. In companies where digital is material but not defining, strengthening the main board with a committee and better rhythms is usually sufficient. But when the operating model’s future rests on technology bets, a dedicated Digital Board (or equivalent advisory council) can bring the needed altitude, continuity, and specialized challenge to help the supervisory board make better, faster calls.


What this means for your next board cycle

The practical message from the thesis is straightforward: digital oversight is a core board responsibility that can be institutionalised. Start by clarifying the capability you need (the competency matrix), then hard-wire the conversation into the board’s rhythms (the agenda and cockpit), and raise the quality of decisions (staged investments, independent challenge, real adoption metrics). Expect a culture shift: from project status to value realization, from tool choice to architecture, from compliance as paperwork to resilience as practice.

Most importantly, treat this as a journey. Boards that improve a little each quarter—on fluency, on the sharpness of their questions, on the discipline of their investment decisions—create compounding advantages. The gap closes not with a single appointment or workshop, but with deliberate governance that learns, adapts, and holds itself to the same standard it asks of management.

Why 95% of AI Pilots Fail (MIT Study) – And How to Beat the Odds

Last week, an MIT study sent shockwaves through the AI and business community: 95% of AI pilots fail to deliver measurable business returns. Headlines spread fast, with investors and executives questioning whether enterprise AI is a bubble.

But behind the headlines lies a more nuanced story. The study doesn’t show that AI lacks potential—it shows that most organizations are not yet equipped to turn AI experiments into real business impact.


Myth vs. Reality: What Other Research Tells Us

While the MIT report highlights execution gaps, other studies paint a more balanced picture:

  • McKinsey (2025): AI adoption is rising fast, with value emerging where firms rewire processes and governance.
  • Stanford AI Index (2025): Investment and adoption continue to accelerate, signaling confidence in the long-term upside.
  • Field studies: Copilots in customer service and software engineering deliver double-digit productivity gains—but only when properly integrated.
  • MIT SMR–BCG: Companies that give individuals tangible benefits from AI—and track the right KPIs—are 6x more likely to see financial impact.

The picture is clear: AI works, but only under the right conditions.


Why AI Projects Fail (The 10 Traps)

1. No learning loop
Many AI pilots are clever demos that never improve once deployed. Without feedback mechanisms and continuous learning, the system remains static—and users quickly revert to old ways of working.

2. Integration gaps
AI may deliver great results in a sandbox, but in production it often fails to connect with core systems like CRM or ERP. Issues with identity management, permissions, and latency kill adoption.

3. Vanity pilots
Executives often prioritize flashy use cases—like marketing campaigns or customer-facing chatbots—while ignoring back-office automations. The result: excitement without measurable cash impact.

4. Build-first reflex
Organizations rush to build their own AI tools, underestimating the complexity of user experience (UX), guardrails, data pipelines, and monitoring. Specialist partners often outperform in speed and quality.

5. Six-month ROI traps
Leadership expects visible returns within half a year. But AI adoption follows a J-curve: disruption comes first, with benefits only materializing once processes and people adapt.

6. Weak KPIs
Too many pilots measure activity—such as number of prompts or usage time—rather than outcomes like error reduction, cycle time improvements, or cost savings. Without the right metrics, it’s impossible to prove value.

7. No product owner
AI projects often sit “between” IT, data, and the business, leaving no single accountable leader. Without an empowered product owner with a P&L target, projects stall in pilot mode.

8. Change ignored
Technology is deployed, but users aren’t engaged. Poor UX, lack of training, and trust concerns mean adoption lags. In response, employees turn to consumer AI tools instead of sanctioned ones.

9. Data & policy drag
Even when the AI works, poor data quality, fragmented sources, and unclear governance delay rollouts. Legal and compliance teams often block scaling because policies are not defined early enough.

10. Wrong first bets
Too many companies start with complex tasks. Early success is more likely in “thin-slice” repetitive processes—like call summarization or contract intake—that can prove value quickly.


How to Beat the Odds (10 Fixes That Work)

1. Design for learning
Build AI systems with memory, feedback capture, and regular improvement cycles. If a tool cannot learn and adapt in production, it should never progress beyond pilot stage.

2. Fix integration before inference
Prioritize robust connections into your CRM, ERP, and ticketing systems. AI without seamless workflow integration is just an isolated chatbot with no business impact.

3. Pick quick-win use cases
Target repetitive, document- and conversation-heavy flows—like claims processing, contract extraction, or helpdesk queries. These areas deliver ROI within 90–120 days and build momentum.

4. Appoint an AI Product Owner
Every use case should have a leader with budget, KPIs, and authority. This person is responsible for hitting targets and driving the project through pilot, limited production, and full scale-up.

5. Measure outcomes, not activity
Define 3–5 hard business KPIs (e.g., −25% contract cycle time, −20% cost per contact) and track adoption leading indicators. Publish a regular value scorecard to make progress visible.
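A value scorecard can start this simple: each KPI compared against its baseline and target. The sketch below reuses the example KPIs above; all numbers are illustrative assumptions.

```python
# Sketch of a value scorecard: actuals vs. baseline and target for a few
# hard KPIs. Names and numbers reuse the examples above and are otherwise
# illustrative assumptions; for both KPIs, lower is better.

kpis = [
    # (name, baseline, target, actual)
    ("Contract cycle time (days)", 20.0, 15.0, 17.0),  # target: -25%
    ("Cost per contact (EUR)",      5.0,  4.0,  4.6),  # target: -20%
]

for name, baseline, target, actual in kpis:
    change = (actual - baseline) / baseline
    status = "on track" if actual <= target else "behind"
    print(f"{name:28} {change:+.0%} vs. baseline -> {status}")
```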

6. Buy speed, build advantage
Use specialist vendors for modular, non-differentiating tasks. Save your in-house resources for proprietary applications where AI can become a true competitive edge.

7. Rebalance your portfolio
Shift investments away from glossy front-office showcases. Focus on back-office operations and service processes where AI can cut costs and generate visible ROI quickly.

8. Make change a deliverable
Adoption doesn’t happen automatically. Co-design solutions with frontline users, train them actively, and make fallback paths obvious. Manage trust as carefully as the technology itself.

9. Educate the board on the J-curve
Set realistic expectations that ROI takes more than six months. Pilot fast, but give production deployments time to stabilize, improve, and demonstrate sustained results.

10. Prove, then scale
Choose two or three use cases, set clear ROI targets up front, and scale only after success is proven. This disciplined sequencing builds credibility and prevents overreach.


The Broader Reflection

The 95% failure rate is not a verdict on AI’s future—it’s a warning about execution risk. Today’s picture is simple: adoption and investment are accelerating, productivity impacts are real, but enterprise-scale returns require a more professional approach.

We’ve seen this pattern before. Just as with earlier waves of digital transformation, leaders tend to overestimate short-term results and underestimate mid- to long-term impact.

Learning with AI – Unlocking Capability at Every Level

AI is Changing How We Learn! We’re entering a new era where learning and AI are deeply intertwined. Whether it’s a university classroom, a manufacturing site, or your own weekend learning project, AI is now part of how we access knowledge, gain new skills, and apply them faster.

The impact is real. In formal education, AI-supported tutors are already showing measurable learning gains. In the workplace, embedded copilots help teams learn in the flow of work. And at the organizational level, smart knowledge systems can reduce onboarding time and improve consistency.

But like any tool, AI’s value depends on how we use it. In this article, I’ll explore four areas where AI is transforming learning — and share some insights from my own recent experiences along the way.


1. Formal Education — From Study Assistant to Writing Coach

AI is showing clear value in helping students and professionals deepen understanding, organize ideas, and communicate more effectively.

In my recent Supervisory Board program, I used NotebookLM to upload course materials and interact with them — asking clarifying questions and summarizing key insights. For my final paper, I turned to ChatGPT and Claude for review and editing — helping me sharpen my arguments and improve readability without losing my voice.

The benefit? More focused learning time, better written output, and higher engagement with the material.

How to get the most from AI in education:

  • Use AI to test understanding, not just provide answers
  • Let it structure thoughts and give feedback — like a sounding board
  • Ensure use remains aligned with academic integrity standards

Recent research supports this approach: Harvard studies show students using structured AI tutors learn more in less time when guardrails guide the interaction toward reasoning — not shortcuts.


2. Learning on the Job — From Static Training to Smart Assistance

In many workplaces, AI is no longer something you log into — it’s embedded directly into your tools, helping you solve problems, write faster, or learn new procedures while working.

Take Siemens, for example. Their industrial engineers now use an AI copilot integrated into their software tools to generate, troubleshoot, and optimize code for production machinery. Instead of searching manuals or waiting for expert support, engineers are guided step-by-step by an assistant that understands both the code and the task.

The benefit? People learn while doing — and become more capable with every task.

How to get the most from AI on the job:

  • Start with tasks that benefit from examples (e.g. writing, code, cases)
  • Let the AI model good practice, then ask the user to adapt or explain
  • Use real-time feedback to reinforce learning and reduce rework

Well-implemented, AI tools don’t replace training — they become the cornerstone of the training.


3. Organizational Learning — Turning Knowledge into an Exchange

As organizations accumulate more policies, procedures, and playbooks, the challenge isn’t just creating knowledge — it’s making it accessible. This is where AI can fundamentally change the game.

PwC is a leading example. They’ve deployed ChatGPT Enterprise to 100,000 employees, combined with internal GPTs trained on company-specific content. This transforms how people access information: instead of digging through files, they ask a question and get a consistent, governed answer — instantly.

The benefit? Faster onboarding, fewer escalations, and more confident decision-making across the board.

How to build this in your organization:

  • Start with high-value content (e.g., SOPs, onboarding, policies)
  • Assign content owners to keep AI knowledge up to date
  • Monitor questions and feedback to identify knowledge gaps

Done right, this turns your organization into a living learning system.


4. Personal Learning — Exploring New Skills with AI as a Guide

Outside of work and formal learning, many people are using AI to explore entirely new topics. Whether it’s a new technology, management concept, or even a language, tools like ChatGPT, Gemini and Claude make it easy to start — and to go deep.

Let’s say you want to learn about cloud architecture. You can ask AI to:

  • Create a 4-week plan tailored to your experience level
  • Suggest reading material and create quick explainers
  • Generate test questions or even simulate an interview

The benefit? Structured, personalized, and frictionless learning — anytime, anywhere.

To make it effective:

  • Be specific: define your goals and time frame
  • Ask for exercises or cases to apply what you learn
  • Use reflection prompts and feedback to deepen understanding

The key is to treat AI as a learning coach, not just a search engine.


Looking Ahead — Opportunities, Risks, and What Leaders Can Do

AI can make learning faster, broader, and more accessible. But like any capability shift, it introduces both upside and new risks:

Opportunities

  • Faster time to skill through real-time, contextual learning
  • Scaling of expert knowledge across global teams
  • Better engagement and confidence among learners at all levels

Risks

  • Over-reliance on AI can lead to shallow understanding
  • Inaccurate or outdated responses risk reinforcing errors
  • Uneven adoption can widen capability gaps inside teams

How to mitigate the risks

  • Introduce guardrails that promote reasoning and reduce blind copying
  • Keep AI tools connected to curated, up-to-date knowledge
  • Build adoption playbooks tailored to roles, not just tools

Final Thought — Treat AI as Part of Your Learning System

The most successful organizations aren’t just giving people access to AI — they’re designing learning systems around it.

That means using AI to model best practice, challenge thinking, and reduce time-to-competence. AI is not just a productivity tool — it’s a capability accelerator.

Those who treat it that way will upskill faster, build smarter teams, and stay more adaptable in the face of constant change.

Agents vs. Automation – How to Choose the Right Tool for the Job

As AI agents storm the market and automation technologies mature, transformation leaders face a critical question: Not just what to automate — but how.

From RPA and low-code platforms to intelligent agents and native automation tools, the choices are expanding fast.

This article offers a practical framework to help you make the right decisions — and build automation that scales with your organization.


A Layered View of the Automation Landscape

Modern automation isn’t a single tool — it means leveraging a full stack. Here are the key layers:

🔹 1. Digital Core Platforms

Systems like SAP, Salesforce, ServiceNow and Workday host your enterprise data and business processes. They often come with native automation tools (e.g., Salesforce Flow, SAP BTP), ideal for automating workflows within the platform.

🔹 2. Integration Platforms (iPaaS)

Tools like MuleSoft, Boomi, and Microsoft Power Platform play a foundational role in enterprise automation. These Integration Platforms as a Service (iPaaS) connect applications, data sources, and services across your IT landscape — allowing automation to function seamlessly across systems rather than in silos.

🔹 3. Automation Tools

  • RPA (e.g., UiPath) automates rule-based, repetitive tasks
  • Workflow Automation manages structured, multi-step business processes
  • Low-/No-Code Platforms (e.g., Power Apps, Mendix) empower teams to build lightweight apps and automations with minimal IT support

🔹 4. AI Agents

Tools and platforms like OpenAI Agents, Microsoft Copilot Studio, Google Vertex AI Agent Builder, and LangChain enable reasoning, adaptability, and orchestration — making them well-suited for knowledge work, decision support, and dynamic task execution.


Choosing the Right Tool for the Job

No single tool is right for every use case. Here’s how to decide:

Scenario → Best fit:

  • Rule-based, repetitive work → RPA
  • Structured, approval-based flows → Workflow Automation
  • Inside one platform (e.g., CRM/ERP) → Native Platform Automation
  • Cross-system data & process flows → Integration Platforms (iPaaS)
  • Lightweight cross-platform apps → Low-/No-Code Platforms
  • Knowledge-driven or dynamic tasks → AI Agents
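For teams that want to operationalise the table, here is a minimal sketch of the same decision logic as a lookup; the scenario labels mirror the table, and anything beyond that is an illustrative assumption.

```python
# Sketch: the selection table above as a simple lookup. The scenario labels
# mirror the table; anything beyond that is an illustrative assumption.

BEST_FIT = {
    "rule-based, repetitive work":       "RPA",
    "structured, approval-based flows":  "Workflow Automation",
    "inside one platform (CRM/ERP)":     "Native Platform Automation",
    "cross-system data & process flows": "Integration Platforms (iPaaS)",
    "lightweight cross-platform apps":   "Low-/No-Code Platforms",
    "knowledge-driven or dynamic tasks": "AI Agents",
}

print(BEST_FIT["knowledge-driven or dynamic tasks"])  # -> AI Agents
```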

The most effective automation strategies are hybrid — combining multiple tools for end-to-end value.


Implementation Roadmaps: One Journey, Many Paths

While all automation projects follow a shared journey — identify, pilot, scale — each tool requires a slightly different approach.


1. Identify the Right Opportunities

  • Native Platform Tools: Start with what’s already built into Salesforce, SAP, etc.
  • iPaaS: Identify silos where data must flow between systems
  • RPA: Use process/task mining to find repeatable, rule-based activities
  • Workflow: Focus on bottlenecks, exceptions, and handoffs
  • Low-/No-Code: Empower teams to surface automation needs and prototype fast
  • AI Agents: Look for unstructured, knowledge-heavy processes

2. Design for Fit and Governance

Each automation type requires a different design mindset — based on scope, user ownership, and risk profile.

  • Native Platform Automation: Stay aligned with vendor architecture and update cycles
  • iPaaS: Build secure, reusable data flows
  • RPA: Design for stability, handle exceptions
  • Workflow: Focus on roles, rules, and user experience
  • Low-/No-Code Platforms: Enable speed, but embed clear guardrails
  • AI Agents: Use iterative prompt design, test for reliability (see the sketch below)
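
On that last point, one practical reliability check is to replay the same prompt several times and measure how often the agent gives the same answer. A minimal sketch, where `ask_agent` is an assumed stand-in for whatever agent endpoint you are evaluating:

```python
# Replay a prompt N times and check answer consistency: a crude but
# useful reliability smoke test for agent pilots.
from collections import Counter

def ask_agent(prompt: str) -> str:
    return "approve"  # stub; replace with a real call to your agent

def consistency(prompt: str, runs: int = 10) -> float:
    answers = Counter(ask_agent(prompt) for _ in range(runs))
    return answers.most_common(1)[0][1] / runs  # share held by the modal answer

score = consistency("Should invoice INV-123 be auto-approved?")
print(f"Agreement across runs: {score:.0%}")  # 100% with the stub above
```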

Key distinction:

  • Native platform automation is ideal for secure, internal process flows.
  • Low-/no-code platforms are better for lightweight, cross-functional solutions — but they need structure to avoid sprawl.

3. Pilot, Learn, and Iterate

  • Platform-native pilots are quick to deploy and low-risk
  • RPA pilots deliver fast ROI but require careful exception handling
  • Workflow Automation pilots start with one process and involve users early to validate flow and adoption
  • Low-/no-code pilots accelerate innovation, especially at the edge
  • iPaaS pilots often work quietly in the background — but are critical for scale
  • AI agent pilots demand close supervision and feedback loops

4. Scale with Structure

To scale automation, focus not just on tools, but on governance:

  • Workflow and Low-Code: Set up federated ownership or Centres of Excellence
  • RPA and iPaaS: Track usage, manage lifecycles, prevent duplication (see the inventory sketch after this list)
  • AI Agents: Monitor for performance, hallucination, and compliance
  • Native Platform Tools: Coordinate with internal admins and platform owners
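
One lightweight way to operationalise the tracking point above is a shared inventory of every automation, its owner, and its review cadence. A minimal sketch, assuming a simple in-memory registry with illustrative field names:

```python
# Illustrative automation inventory for governance at scale.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AutomationRecord:
    name: str
    tool_type: str            # e.g. "RPA", "iPaaS", "AI Agent"
    owner: str
    last_review: date
    risks: list[str] = field(default_factory=list)

registry = [
    AutomationRecord("invoice-bot", "RPA", "finance-ops", date(2024, 3, 1)),
    AutomationRecord("hr-faq-agent", "AI Agent", "people-team", date(2024, 5, 20),
                     risks=["hallucination", "PII exposure"]),
]

# Flag anything not reviewed in the last 180 days to prevent silent drift.
overdue = [r.name for r in registry
           if (date.today() - r.last_review).days > 180]
print("Overdue for review:", overdue)
```

In practice this lives in a CoE tool or the platforms themselves; the point is that every automation has a named owner and a review date.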

The most successful organizations won’t just automate tasks — they’ll design intelligent ecosystems that scale innovation, decision-making, and value creation.


Conclusion: Architect the Ecosystem

Automation isn’t just about efficiency — it’s about scaling intelligence across the enterprise.

  • Use native platform tools when speed, security, and process alignment matter most
  • Use low-/no-code platforms to empower teams and accelerate delivery
  • Use RPA and workflows for high-volume or structured tasks
  • Use AI agents to enhance decision-making and orchestrate knowledge work
  • Use integration platforms to stitch it all together

The winners will be the ones who build coherent, adaptive automation ecosystems — with the right tools, applied the right way, at the right time.

GAINing Clarity – Demystifying and Implementing GenAI

Here is my final summer reading book review in this newsletter series.

GAIN – Demystifying GenAI for Office and Home by Michael Wade and Amit Joshi offers clarity in a world filled with AI hype. Written by two respected IMD professors, this book is an accessible, structured, and balanced guide to Generative AI (GenAI), designed for a broad audience—executives, professionals, and curious individuals alike.

What makes GAIN especially valuable for leaders is its practical approach. It focuses on GenAI’s real-world relevance: what it is, what it can do, where it can go wrong, and how individuals and organizations can integrate it effectively into daily workflows and long-term strategies.

What’s especially nice is that Michael and Amit have invited several other thought and business leaders to contribute their perspectives and examples within the framework. (I especially liked the contribution of Didier Bonnet.)

The GAIN Framework

The book is structured into eight chapters, each forming a step in a logical journey—from understanding GenAI to preparing for its future impact. Below is a summary of each chapter’s key concepts.


Chapter 1 – EXPLAIN: What Makes GenAI Different

This chapter distinguishes GenAI from earlier AI and digital innovations. It highlights GenAI’s ability to generate original content, respond to natural-language prompts, and adapt across tasks with minimal input. Key concepts include zero-shot learning, democratized content creation, and rapid adoption. The authors stress that misunderstanding GenAI’s unique characteristics can undermine effective leadership and strategy.


Chapter 2 – OBTAIN: Unlocking GenAI Value

Wade and Joshi explore how GenAI delivers value at individual, organizational, and societal levels. It’s accessible and doesn’t require deep technical expertise to drive impact. The chapter emphasizes GenAI’s role in boosting productivity, enhancing creativity, and aiding decision-making—especially in domains like marketing, HR, and education—framing it as a powerful augmentation tool.


Chapter 3 – DERAIL: Navigating GenAI’s Risks

This chapter outlines key GenAI risks: hallucinations, privacy breaches, IP misuse, and embedded bias. The authors warn that GenAI systems are inherently probabilistic, and that outputs must be questioned and validated. They introduce the concept of “failure by design,” reminding readers that creativity and unpredictability often go hand in hand.


Chapter 4 – PREVAIL: Creating a Responsible AI Environment

Here, the focus turns to managing risks through responsible use. The authors advocate for transparency, human oversight, and well-structured usage policies. By embedding ethics and review mechanisms into workflows, organizations can scale GenAI while minimizing harm. Ultimately, it’s how GenAI is used—not just the tech itself—that defines its impact.


Chapter 5 – ATTAIN: Scaling with Anchored Agility

This chapter presents “anchored agility” as a strategy to scale GenAI responsibly. It encourages experimentation, but within a framework of clear KPIs and light-touch governance. The authors promote an adaptive, cross-functional approach where teams are empowered, and successful pilots evolve into embedded capabilities.

One of the most actionable frameworks in GAIN is the Digital and AI Transformation Journey, which outlines how organizations typically mature in their use of GenAI:

  • Silo – Individual experimentation, no shared visibility or coordination.
  • Chaos – Widespread, unregulated use. High potential but rising risk.
  • Bureaucracy – Management clamps down. Risk is reduced, but innovation stalls.
  • Anchored Agility – The desired state: innovation at scale, supported by light governance, shared learning, and role clarity.

This model is especially relevant for transformation leaders. It mirrors the organizational reality many face—not only with AI, but with broader digital initiatives. It gives leaders a language to assess their current state and a vision for where to evolve.


Chapter 6 – CONTAIN: Designing for Trust and Capability

Focusing on organizational readiness, this chapter explores structures like AI boards and CoEs. It also addresses workforce trust, re-skilling, and role evolution. Rather than replacing jobs, GenAI changes how work gets done—requiring new hybrid roles and cultural adaptation. Containment is about enabling growth, not restricting it.


Chapter 7 – MAINTAIN: Ensuring Adaptability Over Time

GenAI adoption is not static. This chapter emphasizes the need for feedback loops, continuous learning, and responsive processes. Maintenance involves both technical tasks—like tuning models—and organizational updates to governance and team roles. The authors frame GenAI maturity as an ongoing journey.


Chapter 8 – AWAIT: Preparing for the Future

The book closes with a pragmatic look ahead. It touches on near-term shifts like emerging GenAI roles, evolving regulations, and tool commoditization. Rather than speculate, the authors urge leaders to adopt a posture of informed anticipation: not reactive panic, but intentional readiness. As the GenAI field evolves, so must its players.


What GAIN Teaches Us About Digital Transformation

Beyond the specifics of GenAI, GAIN offers broader lessons that are directly applicable to digital transformation initiatives:

  • Start with shared understanding. Whether you’re launching a transformation program or exploring AI pilots, alignment starts with clarity.
  • Balance risk with opportunity. The GAIN framework models a mature transformation mindset—one that embraces experimentation while putting safeguards in place.
  • Transformation is everyone’s job. GenAI success is not limited to IT or data teams. From HR to marketing to the executive suite, value creation is cross-functional.
  • Governance must be adaptive. Rather than rigid control structures, “anchored agility” provides a model for iterative scaling—one that balances speed with oversight.
  • Keep learning. Like any transformation journey, GenAI is not linear. Feedback loops, upskilling, and cultural evolution are essential to sustaining momentum.

In short, GAIN helps us navigate the now while preparing for what’s next. For leaders steering digital and AI transformation, it’s a practical compass in a noisy, fast-moving world.