AI in 2026: From Experimentation to Implementation

2026 will mark the transition from AI experimentation to pragmatic implementation with significant emphasis on return on investment, governance, and agentic AI systems. The hype bubble has deflated, replaced by hard-nosed business requirements and measurable outcomes. CFOs become AI gatekeepers, speculative pilots get killed, and the discussion moves to “which AI projects drive profit?” In that context, five strategic shifts matter most for boards and executive teams, and seven conditions will separate winners from the rest.


Shift 1 – From Hype to Hard Work: AI Factories in an ROI-Driven World

The first shift is financial discipline. Analysts expect enterprises will defer roughly 25% of planned AI spend into 2027 as CFOs insist on clear value, not proof-of-concept experiments. Only a small minority of organisations can currently point to material EBIT impact from AI, despite wide adoption.

The era of “let’s fund ten pilots and see what sticks” is ending. Funding flows to organisations that behave more like AI factories: they standardise how use cases are sourced, evaluated, industrialised and governed, with shared platforms rather than bespoke experiments.

What this means for leadership in 2026

  • Every AI initiative needs explicit, P&L-linked metrics (revenue, cost, margin) and a timebox for showing impact.
  • Expect your CFO to become a co-owner of the AI portfolio—approving not just spend, but the value logic.
  • The key maturity question is shifting from “Do we use AI?” to “How many AI use cases are scaled, reused and governed?”

Shift 2 – AI Teammates in Every Role: Work Gets Re-Architected

By the end of 2026, around 40% of enterprise applications are expected to embed task-specific AI agents, and a similar share of roles will involve working with those agents. These are not just chatbots; they are digital colleagues handling end-to-end workflows in sales, service, finance, HR and operations.

Research from McKinsey and BCG suggests a simple rule of thumb: successful AI transformations are roughly 10% algorithms, 20% technology and data, and 70% people and processes. High performers are three times more likely to fundamentally redesign workflows than to automate existing ones.

What this means for leadership in 2026

  • Ask less “Which copilot can we roll out?” and more “What would this process look like if we assumed agents from day one?”
  • Measure success in cycle time, error rates and processes eliminated, not just productivity per FTE.
  • Treat “working effectively with agents” as a core competency for managers and professionals.

Shift 3 – New Org Structures: CAIOs, AI CoEs and Agent Ops

As AI moves into the core of the business, organisational design is following. A small but growing share of large companies now appoint a dedicated AI leader (CAIO or equivalent), accountable for turning AI strategy into business outcomes and for managing risk.

The workforce pyramid is shifting as well. Entry-level positions are “quietly disappearing”—not through layoffs, but through non-renewal—while AI-skilled workers command wage premiums of 50% or more in some markets, and those premiums are still rising.

This drives three structural moves:

  • AI Centres of Excellence evolve from advisory teams into delivery engines that provide reference architectures, reusable agents and enablement.
  • “Agent ops” capabilities emerge—teams tasked with monitoring, tuning and governing fleets of agents across the enterprise.
  • Career paths split between traditional functional tracks and “AI orchestrator” tracks.

What this means for leadership in 2026

  • Clarify who owns AI at ExCo level—and whether they have the mandate to say no as well as yes.
  • Ensure your AI CoE is set up to ship and scale, not just write guidelines.
  • Start redesigning roles, spans of control and career paths on the assumption that agents will take over a significant share of routine work.

Shift 4 – Governance and Risk: From Optional to Existential

By the end of 2026, AI governance will be tested in courtrooms and regulators’ offices, not only in internal committees. Analysts expect thousands of AI-related legal claims globally, with organisations facing lawsuits, fines and in some cases leadership changes due to inadequate governance.

At the same time, frameworks like the EU AI Act move to enforcement, particularly in high-risk domains such as healthcare, finance, HR and public services. In parallel, many organisations are introducing “AI-free” assessments to counter concerns about over-reliance and erosion of critical thinking.

What this means for leadership in 2026

  • Treat AI as a formal risk class alongside cyber and financial risk, with explicit classifications, controls and reporting.
  • Expect to demonstrate traceability, explainability and human oversight for consequential use cases.
  • Recognise that governance failures can quickly become CEO- and board-level issues, not just CIO problems.

Shift 5 – The Data Quality Bottleneck

The fifth shift is about the constraint that matters most: data quality. Across multiple sources, “AI-ready data” emerges as the primary bottleneck. Companies that neglect it could see productivity losses of 15% or more, with widespread AI initiatives missing their ROI targets due to poor foundations.

Most companies have data. Few have AI-ready data: unified, well-governed, timely, with clear definitions and ownership.

On the infrastructure side, expect a shift from “cloud-first” to “cloud where appropriate,” with organisations seeking more control over cost, jurisdiction and resilience. On the environmental side, data-centre power consumption is becoming a visible topic in ESG discussions, forcing hard choices about which workloads truly deserve the energy and capital they consume.

What this means for leadership in 2026

  • Treat critical data domains as products with clear owners and SLAs, not as exhaust from processes and applications.
  • Make data readiness a gating criterion for funding AI use cases.
  • Infrastructure and model choices are now strategic bets, not just IT sourcing decisions.

Seven Conditions for Successful AI Implementation in 2026

Pulling these shifts together, here are seven conditions that separate winners from the rest:

FINANCIAL FOUNDATIONS

1. Financial discipline first

  • Tie every AI initiative to specific P&L metrics and realistic value assumptions.
  • Kill or re-scope projects that cannot demonstrate credible impact within 12–18 months.

2. Build an AI factory

  • Standardise how you source, prioritise and industrialise use cases.
  • Focus on a small number of high-value domains and build shared platforms and solution libraries instead of one-off solutions.

OPERATIONAL EXCELLENCE

3. Redesign workflows around agents (the 10–20–70 rule)

  • Assume that only 10% of success is the model and 20% is tech/data; the remaining 70% is people and process.
  • Measure progress in terms of processes simplified or eliminated, not just tasks automated.

4. Treat data as a product

  • Invest in “AI-ready data”: unified, well-governed, timely, with clear definitions and ownership.
  • Make data readiness a gating criterion for funding AI use cases.

5. Governance by design, not retrofit

  • Mandate governance from day one: model inventories, risk classification, human-in-the-loop for high-impact decisions.
  • Build transparency, explainability and audit trails into systems upfront.

ORGANISATIONAL CAPABILITY

6. Organise for AI: leadership, CoEs and agent operations

  • Clarify executive ownership (CAIO or equivalent), empower an AI CoE to execute, and stand up agent-ops capabilities to monitor and steer your digital workforce.

7. Commit to continuous upskilling

  • Assume roughly 44% of current skills will materially change over the next five years; treat AI literacy and orchestration skills as mandatory.
  • Invest more in upskilling existing talent than in recruiting “unicorns.”

The Bottom Line

The defining question for 2026 is no longer “Should we adopt AI?” but “How do we create measurable value from AI while managing its risks?”

The performance gap is widening fast: companies redesigning workflows are pulling three to five times ahead of those merely automating existing processes. By 2027, this gap will be extremely hard to close.

Boards and executive teams that answer this through focused implementation, genuine workflow redesign, responsible governance and continuous workforce development will set the pace for the rest of the decade. Those that continue treating AI as experimentation will find themselves competing against organisations operating at multiples of their productivity, a gap that will be very hard to recover from.


Five AI Breakthroughs From 2025 That Will Show Up in Your P&L

A year ago, if you asked an AI to handle a complex customer refund, it might draft an email for you to send.

As 2025 comes to a close, AI agents in some organisations can now check the order history, verify the policy, process the refund, update several systems, and send the confirmation. That is not just a better copilot; it is a different category of capability.

Throughout 2025, the story has shifted from “we are running pilots” to AI quietly creating real value inside the enterprise: agents that execute multi-step workflows, voice AI that resolves problems end-to-end, multimodal AI that works on the messy mix of enterprise information, sector-specific applications in life sciences and healthcare, industrial and manufacturing, consumer industries and professional services, and more reliable systems that leaders are prepared to trust with high-stakes work.

This newsletter focuses on what is genuinely possible by the end of 2025 that was hard or rare at the end of 2024, and where new value pools are emerging.


1. From copilots to autonomous workflows

At the end of 2024, most enterprise AI lived in copilots and Q&A over knowledge bases. You prompted; the system responded, one step at a time.

By the end of 2025, leading organisations are using AI agents that can run a full workflow: collect inputs, make decisions under constraints, act in multiple systems, and report back to humans at defined checkpoints. They combine memory (what has already been done), tool use (which systems to use), and orchestration (what to do next) in a way that was rare a year ago.
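As a rough illustration, the combination of memory, tool use and orchestration amounts to a simple control loop. This is a hedged sketch, not any vendor's actual API; every name below (`Step`, `run_workflow`, the planner and tools) is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str                 # "call_tool", "done", or "escalate"
    tool: str = ""
    args: dict = field(default_factory=dict)

def run_workflow(goal, tools, decide_next_step, max_steps=10):
    """Drive a workflow until the planner signals completion or a
    human checkpoint is reached; `memory` records every action taken."""
    memory = []
    for _ in range(max_steps):
        step = decide_next_step(goal, memory)          # orchestration: what to do next
        if step.action == "done":
            return {"status": "done", "memory": memory}
        if step.action == "escalate":
            return {"status": "needs_human", "memory": memory}
        result = tools[step.tool](**step.args)         # tool use: act in a system
        memory.append((step.tool, step.args, result))  # memory: what has been done
    return {"status": "step_limit_reached", "memory": memory}
```

The refund example from the opening maps directly onto this shape: the planner sequences “check order, verify policy, process refund, confirm,” each tool call updates a system of record, and the memory gives humans an auditable trail at the defined checkpoints.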

New value pools

  • Life sciences and healthcare: automating start-up administration, safety case intake, and medical information requests so clinical and medical teams focus on judgement, not paperwork.
  • Industrial and manufacturing: agents handling order-to-cash or maintenance workflows end-to-end, from reading emails and work orders to updating ERP and scheduling technicians.
  • Professional services: agents that move proposals, statements of work, and deliverables through review, approval and filing, improving margin discipline and cycle time.

2. Voice AI as a frontline automation channel

At the end of 2024, voice AI mostly meant smarter voice responses: slightly better menus, obvious hand-offs to humans, and limited ability to handle edge cases.

By the end of 2025, voice agents can hold natural two-way conversations, look up context across systems in real time, and execute the simple parts of a process while the customer is still on the line. For a growing part of the call mix, “talking to AI” is now an acceptable – sometimes preferred – experience.

New value pools

  • Consumer industries: automating high-volume inbound queries such as order status, returns, bookings, and loyalty program questions, with seamless escalation for the calls that truly need an expert.
  • Life sciences and healthcare: patient scheduling, pre-visit questionnaires, follow-up reminders, and simple triage flows, integrated with clinical and scheduling systems.
  • Cross-industry internal support: IT and HR helpdesks where a voice agent resolves routine issues, captures clean tickets, and routes only non-standard requests to human staff.

3. Multimodal AI and enterprise information

Most early deployments of generative AI operated in a text-only world. The reality of large organisations, however, is multimodal: PDFs, decks, images, spreadsheets, emails, screenshots, sensor data, and more.

By the end of 2025, leading systems can read, interpret, and act across all of these. They can navigate screens and combine text, tables, and images in a single reasoning chain. On the creation side, they can generate on-brand images and videos with consistent characters and scenes, good enough for many marketing and learning use cases.

New value pools

  • Life sciences and healthcare: preparing regulatory and clinical submission packs by extracting key data and inconsistencies across hundreds of pages of protocols, reports, and correspondence.
  • Industrial and manufacturing: combining images, sensor readings, and maintenance logs to flag quality issues or emerging equipment failures before they hit output.
  • Consumer and professional services: producing localised campaigns, product explainers, and internal training content in multiple languages and formats without linear increases in agency spend.

4. Sector-specific impact in the P&L

In 2024, many sector examples of AI looked impressive on slides but were limited in scope. By the end of 2025, AI is starting to move core economics in several industries.

In life sciences and healthcare, AI-driven protein and molecule modelling shortens early discovery cycles and improves hit rates, while diagnostic support tools help clinicians make better real-time decisions. In industrial and manufacturing businesses, AI is layered onto predictive maintenance, scheduling, and quality control to improve throughput and reduce downtime. Consumer businesses are using AI to personalise offers, content, and service journeys at scale. Professional services firms are using AI for research, drafting, and knowledge reuse.

New value pools

  • Faster innovation and time-to-market: from earlier drug discovery milestones to quicker design and testing cycles for industrial products and consumer propositions.
  • Operational excellence: higher asset uptime, fewer defects, better utilisation of people and equipment across plants, networks, and service operations.
  • Revenue and margin uplift: more profitable micro-segmentation in consumer industries, and higher matter throughput and realisation rates in professional and legal services.

5. When AI became trustworthy enough for high-stakes work

Through 2023 and much of 2024, most organisations treated generative AI as an experiment.

By the end of 2025, two developments make it more realistic to use AI in critical workflows. First, dedicated reasoning models can work step by step on complex problems in code, data, or law, and explain how they arrived at an answer. Second, governance has matured: outputs are checked against source documents, policies are encoded as guardrails, and model risk is treated like any other operational risk.

New value pools

  • Compliance and risk: automated checks of policies, procedures, and documentation, with AI flagging exceptions and assembling evidence packs for human review.
  • Legal and contract operations: first-pass drafting and review of contracts, research memos, and standard documents, with lawyers focusing on negotiation and high-judgement work.
  • Financial and operational oversight: anomaly detection, narrative reporting, and scenario analysis that give CFOs and COOs a clearer view of where to intervene.

What this sets up for 2026

Everything above is the backdrop for 2026 – a year that will be less about experimentation and more about pragmatic implementation under real financial and regulatory scrutiny.

In my next newsletter, I will zoom in on:

  • Five strategic shifts – including the move from hype to “AI factories” with CFOs as gatekeepers, agents embedded in everyday roles, new organisational structures (CAIOs, AI CoEs, agent ops), governance moving from optional to existential, and the data-quality bottleneck that will decide who can actually scale.
  • Seven conditions for success – the financial, operational, and organisational foundations that separate the companies that turn AI into EBIT from those that stay stuck in pilots.

Rather than extend this piece with another checklist, I will leave you with one question as 2025 closes:

Are you treating today’s AI capabilities as isolated experiments – or as the building blocks of the AI factory, governance, data foundations, and workforce that your competitors will be operating in 2026?

In the next edition, we will explore what it takes to answer that question convincingly.

Why 88% of Companies Use AI but Only 6% See Real Results: What McKinsey’s Research Really Tells Us

Over the past year, McKinsey – itself busy reinventing its business model with AI – has published a constant flow of AI research: adoption surveys, sector deep-dives, workforce projections, technology roadmaps. I’ve read these at different moments over the year. For this newsletter, I synthesized 25 of those reports into one overview (leveraging NotebookLM).

The picture that emerges is both clearer and more confronting than any of the individual pieces on their own.

The headline is simple: AI is now everywhere, but real value is highly concentrated. A small group of “AI high performers” is pulling away from the pack—economically, organizationally, and technologically. The gap is about to widen further as we move from today’s generative tools to tomorrow’s agentic, workflow-orchestrating systems.

This isn’t a technology story. It’s a strategy, operating model, and governance story.


AI is everywhere – value is not

McKinsey’s research shows that almost 9 in 10 organizations now use AI somewhere in the business, typically in one function or a handful of use cases. Yet only about a third are truly scaling AI beyond pilots, and just 6% can attribute 5% or more EBIT uplift to AI.

Most organizations are stuck in what I call the “pilot loop”:

  1. Launch a promising proof of concept.
  2. Prove that “AI works” in a narrow setting.
  3. Hit organizational friction – ownership, data, process, risk.
  4. Park the use case and start another pilot.

On paper, these companies look active and innovative. In reality, they are accumulating “AI debt”: a growing gap between what they could achieve and what the real leaders are already realizing in terms of growth, margin, and capability.

The research is clear: tools are no longer a differentiator. Your competitive position is defined by your ability to industrialize AI – to embed it deeply into how work is done, not just where experiments are run.


The 6% success factors: what AI high performers actually do

The small cohort of high performers behaves in systematically different ways. Four contrasts stand out:

  1. They pursue growth, not just efficiency
    Most organizations still frame AI as a cost and productivity story. High performers treat efficiency as table stakes and put equal weight on new revenue, new offerings, and new business models. AI is positioned as a growth engine, not a shared-service optimization tool.
  2. They redesign workflows, not just add tools
    This is the single biggest differentiator. High performers are almost three times more likely to fundamentally redesign workflows around AI. They are willing to change decision rights, process steps, roles, and controls so that AI is embedded at the core of how work flows end-to-end.
  3. They lead from the C-suite
    In high performers, AI is not owned by a digital lab, an innovation team, or a single function. It has visible, direct sponsorship from the CEO or a top-team member, with clear, enterprise-wide mandates. That sponsorship is about more than budget approval; it’s about breaking silos and forcing trade-offs.
  4. They invest at scale and over time
    Over a third of high performers dedicate more than 20% of their digital budgets to AI. Crucially, that spend is not limited to models and tools. It funds data foundations, workflow redesign, change management, and talent.

Taken together, these behaviours show that AI leadership is a management choice, not a technical one. The playbook is available to everyone, but only a few are willing to fully commit.


The workforce is already shifting – and we’re still early

McKinsey’s data also cuts through a lot of speculation about jobs and skills. Three signals are particularly important:

  • Workforce impact is real and rising
    In the past year, a median of 17% of respondents reported workforce reductions in at least one function due to AI. Looking ahead, that number jumps to 30% expecting reductions in the next year as AI scales further.
  • The impact is uneven by function
    The biggest expected declines are in service operations and supply chain management, where processes are structured and outcomes are measurable. In other areas, hiring and reskilling are expected to offset much of the displacement.
  • New roles and skills are emerging fast
    Organizations are already hiring for roles like AI compliance, model risk, and AI ethics, and expect reskilling efforts to ramp up significantly over the next three years.

The message for leaders is not “AI will take all the jobs,” but rather:

If you’re not deliberately designing a human–AI workforce strategy that covers role redesign, reskilling, mobility, and governance implications, it will happen to you by default.


The next wave: from copilots to co-workers

Most of the current adoption story is still about generative tools that assist individual knowledge workers: drafting content, summarizing documents, writing code.

McKinsey’s research points to the next phase: Agentic AI – systems that don’t just respond to prompts but plan, orchestrate, and execute multi-step workflows with limited human input.

Three shifts matter here:

  1. From tasks to workflows
    We move from “AI helps write one email” to “AI manages the full case resolution process”—from intake to investigation, decision, and follow-up.
  2. From copilots to virtual co-workers
    Agents will interact with systems, trigger actions, call APIs, and collaborate with other agents. Humans move further upstream (framing, oversight, escalation) and downstream (relationship, judgement, exception handling).
  3. From generic tools to deep verticalization
    The most impactful agents will be highly tailored to sector and context: claims orchestration in insurance, demand planning in manufacturing, clinical operations in pharma, and so on.

Today, around six in ten organizations are experimenting with AI agents, but fewer than one in ten is scaling them in any function. The gap between high performers and everyone else is set to widen dramatically as agents move from proof of concept to production.


So what should leaders actually do?

The gap between high performers and everyone else is widening now, not in five years. As agentic AI moves from proof of concept to production, the organizations still running pilots will find themselves competing against fundamentally different operating models—ones that are faster, more scalable, and structurally more profitable.

If you sit on an executive committee or board, you might start with these questions:

  1. Ambition – Are we using AI mainly to cut cost, or do we have a clear thesis on how it will create new revenue, offerings, and business models?
  2. Workflow rewiring – For our top 5–10 value pools, have we actually redesigned end-to-end workflows around AI, or are we just bolting tools onto legacy processes?
  3. Ownership – Who on the top team is truly accountable for AI as an enterprise-wide agenda—not just for “experiments,” but for operating model, risk, and value delivery?
  4. Workforce strategy – Do we have a concrete plan for role redesign, reskilling, and new AI governance roles over the next 3–5 years, backed by budget?
  5. Foundations and governance – Are we treating data, infrastructure, and sustainability as strategic assets, with the same rigor as financial capital and cybersecurity?

The era of casual experimentation is over. McKinsey’s research makes one thing brutally clear: the organizations that will dominate the agentic era won’t be those with the most impressive demos or the longest list of pilots, but those willing to answer “yes” to all five questions – and back those answers with real budget, real accountability, and real organizational change.

The 6% are already there. The question is whether you’ll join them—or explain to your board why you didn’t.

How to Use AI Whilst Keeping Your Data Private and Safe

AI can pay off quickly—copilots that accelerate knowledge work, smarter customer operations, and faster software delivery. The risk is not AI itself; it is how you handle data. Look at privacy (what you expose), security (who can access), compliance (what you can prove), and sovereignty (where processing happens) as separate lenses. The playbook is simple: classify the data you’ll touch; choose one of four deployment models; apply a few guardrails—identity, logging, and simple rules people understand; then measure value and incidents. Start “as open as safely possible” with the less sensitive cases for speed, and move to tighter control as sensitivity increases.


What “Private & Safe” actually means

Private and safe AI means using the least amount of sensitive information, tightly controlling who and what AI can access, proving that your handling meets legal and industry obligations, and ensuring processing happens in approved locations. In practice you minimise exposure, authenticate users, encrypt and log activity, and keep a clear record of decisions and data flows so auditors and customers can trust the outcome.

To make this work across the enterprise, bring the right people together around each use case. The CIO and CISO own the platform choices and controls; the CDO curates which data sources are approved; Legal sets lawful use and documentation; business owners define value and success; HR and Works Council get involved where employee data or work patterns change. Run a short, repeatable intake: describe the use case, identify the data, select the deployment model, confirm the controls, and agree how quality and incidents will be monitored.


How to classify “Sensitive Data” – a simple four-tier guide

Not all data is equal. Classifying it upfront tells you how careful you need to be and which setup to use.

Tier 1 – Low sensitivity. Think public information or generic content such as first drafts of marketing copy. Treat this as the training ground for speed: use packaged tools, keep records of usage, and avoid connecting unnecessary internal sources.

Decision check: “Could this appear on our website tomorrow?” Yes = Tier 1

Tier 2 – Internal. Everyday company knowledge—policy summaries, project notes, internal wikis. Allow AI to read from approved internal sources, but restrict access to teams who need it and retain basic logs so you can review what was asked and answered.

Decision check: “Would sharing this externally require approval?” Yes = Tier 2+

Tier 3 – Confidential. Material that would harm you or your customers if leaked—client lists, pricing models, source code. Use controlled company services that you manage, limit which repositories can be searched, keep detailed activity records, and review results for quality and leakage before scaling.

Decision check: “Would leakage breach a contract or NDA?” Yes = Tier 3+

Tier 4 – Restricted or regulated. Legally protected or mission-critical information—patient or financial records, trade secrets, M&A. Run in tightly controlled environments you operate, separate this work from general productivity tools, test thoroughly before go-live, and document decisions for auditors and boards.

Decision check: “Is this regulated or business-critical?” Yes = Tier 4
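The four decision checks above can be expressed as a small routing helper. This is a minimal sketch, assuming someone answers the checks as yes/no inputs (it is not an automatic PII or sensitivity detector), and the function name is purely illustrative:

```python
def classify_tier(public_ok, needs_approval_to_share,
                  breaches_nda_if_leaked, regulated_or_critical):
    """Map the four decision checks to a sensitivity tier (1-4).
    Checks are evaluated from most to least sensitive, so the
    strictest 'yes' wins."""
    if regulated_or_critical:
        return 4  # restricted/regulated: tightly controlled environments
    if breaches_nda_if_leaked:
        return 3  # confidential: controlled company services
    if needs_approval_to_share:
        return 2  # internal: approved sources, basic logging
    if public_ok:
        return 1  # low sensitivity: packaged tools
    return 2      # when unsure, default conservatively to internal
```

Evaluating the strictest check first is the important design choice: a document that is both shareable-with-approval and NDA-covered lands in Tier 3, never Tier 2.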


Common mistakes – and how to fix them

Using personal AI accounts with company data.
This bypasses your protections and creates invisible risk. Make it company accounts only, block personal tools on the network, and provide approved alternatives that people actually want to use.

Assuming “enterprise tier” means safe by default.
Labels vary and settings differ by vendor. Ask for clear terms: your questions and documents are not used to improve public systems, processing locations are under your control, and retention of queries and answers is off unless you choose otherwise.

Building clever assistants without seeing what actually flows.
Teams connect documents and systems, then no one reviews which questions, files, or outputs move through the pipeline. Turn on logging, review usage, and allow only a short list of approved data connections.

Skipping basic training and a simple policy.
People guess what’s allowed, leading to inconsistent—and risky—behaviour. Publish a one-page “how we use AI here,” include it in onboarding, and name owners who check usage and costs.


AI Deployment Models

Model 1 — Secure packaged tools (fastest path to value).
Ready-made apps with business controls—ideal for broad productivity on low-to-moderate sensitivity work such as drafting, summarising, meeting notes, and internal Q&A. Examples: Microsoft Copilot for Microsoft 365, Google Workspace Gemini, Notion AI, Salesforce Einstein Copilot, ServiceNow Now Assist. Use this when speed matters and the content is not highly sensitive; step up to other models for regulated data or deeper system connections.

Model 2 — Enterprise AI services from major providers.
You access powerful models through your company’s account; your inputs aren’t used to train public systems and you can choose where processing happens. Well-suited to building your own assistants and workflows that read approved internal data. Examples: Azure OpenAI, AWS Bedrock, Google Vertex AI, OpenAI Enterprise, Anthropic for Business. Choose this for flexibility without running the underlying software yourself; consider Model 3 if you need stronger control and detailed records.

Model 3 — Managed models running inside your cloud.
The models and search components run within your own cloud environment, giving you stronger control and visibility while the vendor still manages the runtime. A good fit for confidential or regulated work where oversight and location matter. Examples: Bedrock in your AWS account, Vertex AI in your Google Cloud Platform, Azure OpenAI in your subscription, Databricks Mosaic AI, Snowflake Cortex. Use this when you need enterprise-grade control with fewer operational burdens than full self-hosting.

Model 4 — Self-hosted and open-source models.
You operate the models yourself—on-premises or in your cloud. This gives maximum control and sovereignty, at the cost of more engineering, monitoring, and testing. Suits the most sensitive use cases or IP-heavy R&D. Examples: Llama, Mistral, DBRX—supported by platforms such as Databricks, Nvidia NIM, VMware Private AI, Hugging Face, and Red Hat OpenShift AI. Use this when the business case and risk profile justify the investment and you have the talent to run it safely.


Building Blocks and How to Implement (by company size)

Essential Building blocks

A few building blocks change outcomes more than anything else. Connect AI to approved data sources through a standard “search-then-answer” approach—often called Retrieval-Augmented Generation (RAG), where the AI first looks up facts in your trusted sources and only then drafts a response.

This reduces the need to copy data into the AI system and keeps authority with your original records. Add a simple filter to remove personal or secret information before questions are sent. Control access with single sign-on and clear roles. Record questions and answers so you can review quality, fix issues, and evidence compliance. Choose processing regions deliberately and, where possible, manage your own encryption keys. Keep costs in check with team budgets and a monthly review of usage and benefits.
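The search-then-answer pattern with a redaction filter and logging can be sketched in a few lines. This is a hedged illustration under stated assumptions: `search_approved_sources` and `generate` stand in for your retriever and model client, and the single email-address pattern is illustrative, not a complete personal-data filter:

```python
import re
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_usage")

# Illustrative redaction: strip obvious email addresses before the
# question leaves your environment. Real deployments need broader
# personal/secret-data patterns than this single rule.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text):
    return EMAIL.sub("[REDACTED]", text)

def answer_with_rag(question, search_approved_sources, generate):
    """Search-then-answer (RAG): look up facts in approved sources
    first, then draft a response grounded in those passages."""
    safe_question = redact(question)                   # filter before sending
    passages = search_approved_sources(safe_question)  # authority stays with your records
    answer = generate(safe_question, passages)
    # Record Q&A activity so quality and compliance can be reviewed.
    log.info("Q: %s | sources used: %d", safe_question, len(passages))
    return answer
```

The ordering is the point: redact, then retrieve only from the approved list, then generate, then log — so every answer is traceable to a question and a set of trusted sources.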

Large enterprises

Move fastest with a dual approach. Enable packaged tools for day-to-day productivity, and create a central runway based on enterprise AI services for most custom assistants. For sensitive domains, provide managed environments inside your cloud with the standard connection pattern, built-in filtering, and ready-made quality tests. Reserve full self-hosting for the few cases that genuinely need it. Success looks like rapid adoption, measurable improvements in time or quality, and no data-handling incidents.

Mid-market organisations

Get leverage by standardising on one enterprise AI service from your primary cloud, while selectively enabling packaged tools where they clearly save time. Offer a single reusable pattern for connecting to internal data, with logging and simple redaction built in. Keep governance light: a short policy, a quarterly review of model quality and costs, and a named owner for each assistant.

Small and mid-sized companies

Keep it simple. Use packaged tools for daily work and a single enterprise AI service for tasks that need internal data. Turn off retention of questions and answers where available, restrict connections to a small list of approved sources, and keep work inside the company account—no personal tools or copying content out. A one-page “how we use AI here,” plus a monthly check of usage and spend, is usually enough.


What success looks like

Within 90 days, 20–40% of knowledge workers are using AI for routine tasks. Teams report time saved or quality improved on specific workflows. You have zero data-handling incidents and can show auditors your data flows, access controls, and review process. Usage and costs are tracked monthly, and you’ve refined your approved-tools list based on what actually gets adopted.

You don’t need a bespoke platform or a 200-page policy to use AI safely. You need clear choices, a short playbook, and the discipline to apply it.

Where AI Is Creating the Most Value (Q4 2025)

There’s still a value gap—but leaders are breaking away. In the latest BCG work, top performers report around five times more revenue uplift and three times deeper cost reduction from AI than peers. The common thread: they don’t bolt AI onto old processes—they rewire the work. As BCG frames it, the 10-20-70 rule applies: roughly 10% algorithms, 20% technology and data, and 70% people and process change. That’s where most of the value is released.

This article is for leaders deciding where to place AI bets in 2025. If you’re past “should we do AI?” and into “where do we make real money?”, this is your map.


Where the money is (cross-industry)

1) Service operations: cost and speed
AI handles simple, repeatable requests end-to-end and coaches human agents on the rest. The effect: shorter response times, fewer repeat contacts, and more consistent outcomes—without sacrificing customer experience.

2) Supply chain: forecast → plan → move
The gains show up in fewer stockouts, tighter inventories, and faster cycle times. Think demand forecasting, production planning, and dynamic routing that reacts to real-world conditions.

3) Software and engineering: throughput
Developer copilots and automated testing increase release velocity and reduce rework. You ship improvements more often, with fewer defects, and free scarce engineering time for higher-value problems.

4) HR and talent: faster funnels and better onboarding/learning
Screening, scheduling, and candidate communication are compressed from days to hours. Internal assistants support learning and workforce planning. The results: shorter time-to-hire and better conversion through each stage.

5) Marketing and sales: growing revenue
Personalization, next-best-action, and on-the-fly content creation consistently drive incremental sales. This is the most frequently reported area for measurable revenue lift.

Leadership advice: Pick two or three high-volume processes (at least one cost, one revenue). Redesign the workflow rather than bolting AI on top. Set hard metrics (cost per contact, cycle time, revenue per visit) and a 90-day checkpoint. Industrialize what works; kill what doesn’t.


Sector spotlights

Consumer industries (Retail & Consumer Packaged Goods)

Marketing and sales.

  • Personalized recommendations increase conversion and basket size; retail media programs are showing verified incremental sales.
  • AI-generated marketing content reduces production costs and speeds creative iteration across markets and channels. Mondelez reported 30-50% reduction in marketing content production costs using generative AI at scale.
  • Campaign analytics that used to take days are produced automatically, so teams run more “good bets” each quarter.

Supply chain.

  • Demand forecasting sharpens purchasing and reduces waste.
  • Production planning cuts changeovers and work-in-progress.
  • Route optimization lowers distance traveled and fuel, improving on-time delivery.

Customer service.

  • AI agents now resolve a growing share of contacts end-to-end. IKEA’s AI agents already handle 47% of all requests, so service staff can focus on the questions that need a human touch.
  • Agent assist gives human colleagues instant context and suggested next steps.

The result is more issues solved on first contact, shorter wait times, and maintained satisfaction, provided clear hand-offs to humans exist for complex cases.

What to copy: Start with one flagship process in each of the three areas above; set a 90-day target; only then roll it across brands and markets with a standard playbook.


Manufacturing (non-pharma)

Predictive maintenance.
When tied into scheduling and spare-parts planning, predictive maintenance reduces unexpected stoppages and maintenance costs—foundational for higher overall equipment effectiveness (OEE).

Computer-vision quality control.
In-line visual inspection detects defects early, cutting scrap, rework, and warranty exposure. Value compounds as models learn across lines and plants.

Production scheduling.
AI continuously rebalances schedules for constraints, changeovers, and demand shifts—more throughput with fewer bottlenecks. Automotive and electronics manufacturers report 5-15% throughput gains when AI-driven scheduling handles real-time constraints.

Move to scale: Standardize data capture on the line, run one “AI plant playbook” to convergence, then replicate. Treat models as line assets with clear ownership, service levels, and a retraining cadence.


Pharmaceuticals

R&D knowledge work.
AI accelerates three high-friction areas: (1) large evidence reviews, (2) drafting protocols and clinical study reports, and (3) assembling regulatory summaries. You remove weeks from critical paths and redirect scientists to higher-value analysis.

Manufacturing and quality.
Assistants streamline batch record reviews, deviation write-ups, and quality reports. You shorten release cycles and reduce delays. Govern carefully under Good Manufacturing Practice, with humans approving final outputs.

Practical tip: Stand up an “AI for documents” capability (standardized templates, automated redaction, citation checking, audit trails) before you touch lab workflows. It pays back quickly, proves your governance model, and reduces compliance risk when you expand to higher-stakes processes.


Healthcare providers

Augment the professional; automate the routine. Radiology, pathology, and frontline clinicians benefit from AI that drafts first-pass reports, triages cases, and pre-populates documentation. Northwestern Medicine studies show approximately 15.5% average productivity gains (up to 40% in specific workflows) in radiology report completion without accuracy loss. Well-designed oversight maintains quality while reducing burnout.

Non-negotiable guardrail: Clear escalation rules for edge cases and full traceability. If a tool can’t show how it arrived at a suggestion, it shouldn’t touch a clinical decision. Establish explicit human review protocols for any AI-generated clinical content before it reaches patients or medical records.


Financial services

Banking.

  • Service and back-office work: assistants summarize documents, draft responses, and reconcile data. JPMorgan reports approximately 30% fewer servicing calls per account in targeted Consumer and Community Banking segments and 15% lower processing costs in specific workflows.
  • Risk and compliance: earlier risk flags, smarter anti-money-laundering reviews, and cleaner audit trails reduce losses and manual rework.

Insurance.

  • Claims: straight-through processing for simple claims moves from days to hours.
  • Underwriting: AI assembles files and surfaces risk signals so underwriters focus on complex judgment.
  • Back office: finance, procurement, and HR automations deliver steady, compounding savings.

Leadership note: Treat service assistants and claims bots as products with roadmaps and release notes—not projects. That discipline keeps quality high as coverage expands.


Professional services (legal, consulting, accounting)

Document-heavy work is being rebuilt: contract and filing review, research synthesis, proposal generation. Well-scoped processes often see 40–60% time savings. Major law firms report contract review cycles compressed from 8-12 hours to 2-3 hours for standard agreements, with associates redirected to judgment-heavy analysis and client advisory work.

Play to win: Build a governed retrieval layer over prior matters, proposals, and playbooks—your firm’s institutional memory—then give every practitioner an assistant that can reason over it.


Energy and utilities

Grid and renewables.
AI improves demand and renewable forecasting and helps balance the grid in real time. Autonomous inspections (drones plus computer vision) speed asset checks by 60-70% and reduce hazards. Predictive maintenance on critical infrastructure prevents outages—utilities report 20-30% reduction in unplanned downtime when AI is tied into work order systems and cuts truck rolls (field service visits).

How to scale: Start with one corridor or substation, prove inspection cycle time and fault detection, then expand with a standard data schema so models learn from every site.


Next Steps (practical and measurable)

1) Choose three processes—one for cost, one for revenue, one enabler.
Examples:

  • Cost: customer service automation, predictive maintenance, the month-end finance close.
  • Revenue: personalized offers, “next-best-action” in sales, improved online merchandising.
  • Enabler: developer assistants for code and tests, HR screening and scheduling.
    Write a one-line success metric and a quarterly target for each (e.g., “reduce average response time by 30%,” “increase conversion by 2 points,” “ship weekly instead of bi-weekly”).

2) Redesign the work, not just the process map.
Decide explicitly: what moves to the machine, what stays with people, where the hand-off happens, and what the quality gate is. Train for it. Incentivize it.

3) Industrialize fast.
Stand up a small platform team for identity, data access, monitoring, and policy. Establish lightweight model governance. Create a change backbone (playbooks, enablement, internal communications) so each new team ramps faster than the last.

4) Publish a value dashboard.
Measure cash, not demos: cost per contact, cycle time, on-shelf availability, release frequency, time-to-hire, revenue per visit. Baseline these metrics before launch—most teams skip this step and cannot prove impact six months later when challenged. Review monthly. Retire anything that doesn’t move the number.

5) Keep humans in the loop where it matters.
Customer experience, safety, financial risk, and regulatory exposure all require clear human decision points. Automate confidently—but design escalation paths from day one.
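The baseline discipline in step 4 can be made concrete with a tiny sketch: record each metric before launch, then report every month as a delta against that baseline. Metric names and figures are illustrative.

```python
# Minimal sketch of the value-dashboard discipline: capture a baseline
# before launch, report monthly as deltas against it. Illustrative numbers.

baseline = {"cost_per_contact": 6.40, "cycle_time_days": 9.0}  # captured pre-launch

def monthly_report(current: dict) -> dict:
    """Percent change vs. the pre-launch baseline.
    Negative values mean improvement for cost and time metrics."""
    return {metric: round(100 * (current[metric] - baseline[metric]) / baseline[metric], 1)
            for metric in baseline}

print(monthly_report({"cost_per_contact": 5.12, "cycle_time_days": 7.2}))
# → {'cost_per_contact': -20.0, 'cycle_time_days': -20.0}
```

Without the `baseline` captured before launch, there is nothing to subtract from, which is exactly why teams that skip the step cannot prove impact later.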


Final word

In 2025, AI pays where volume is high and rules are clear (service, supply chain, HR, engineering), and where personalization drives spend (marketing and sales). The winners aren’t “using AI.” They are redesigning how the work happens—and they can prove it on the P&L.

From AI-Enabled to AI-Centered – Reimagining How Enterprises Operate

Enterprises around the world are racing to deploy generative AI. Yet most remain stuck in the pilot trap: experimenting with copilots and narrow use cases while legacy operating models, data silos, and governance structures stay intact. The results are incremental: efficiency gains without strategic reinvention.

With rapidly developing context-aware AI, we can chart a different course: making AI not an add-on, but the center of how the enterprise thinks, decides, and operates. This shift, captured powerfully in The AI-Centered Enterprise (ACE) by Ram Bala, Natarajan Balasubramanian, and Amit Joshi (IMD), signals the next evolution in business design: from AI-enabled to AI-centered.

The premise is bold. Instead of humans using AI tools to perform discrete tasks, the enterprise itself becomes an intelligent system, continuously sensing context, understanding intent, and orchestrating action through networks of people and AI agents. This is the next-generation operating model for the age of context-aware intelligence, and it will separate tomorrow’s leaders from those merely experimenting today.


What an AI-Centered Enterprise Is

At its core, an AI-centered enterprise is built around Context-Aware AI (CAI): systems that understand not only content (what is being said) but also intent (why it is being said). These systems operate across three layers:

  • Interaction layer: where humans and AI collaborate through natural conversation, document exchange, or digital workflows (ACE).
  • Execution layer: where tasks and processes are performed by autonomous or semi-autonomous agents.
  • Governance layer: where policies, accountability, and ethical guardrails are embedded into the AI fabric.

The book introduces the idea of the “unshackled enterprise” — one no longer bound by rigid hierarchies and manual coordination. Instead, work flows dynamically through AI-mediated interactions that connect needs with capabilities across the organization. The result is a company that can learn, decide, and act at digital speed — not by scaling headcount, but by scaling intelligence.

This is a profound departure from current “AI-enabled” organizations, which mostly deploy AI as assistants within traditional structures. In an AI-centered enterprise, AI becomes the organizing principle, the invisible infrastructure that drives how value is created, decisions are made, and work is executed.


How It Differs from Today’s Experiments

Today’s enterprise AI landscape is dominated by point pilots and embedded copilots: productivity boosters bolted onto existing processes. They enhance efficiency but rarely transform the logic of value creation.

An AI-centered enterprise, by contrast, rebuilds the transaction system of the organization around intelligence. Key differences include:

  • From tools to infrastructure: AI doesn’t automate isolated tasks; it coordinates entire workflows, from matching expertise to demand, to ensuring compliance, to optimizing outcomes.
  • From structured data to unstructured cognition: Traditional analytics rely on structured databases. AI-centered systems start with unstructured information (emails, documents, chats), extracting relationships and meaning through knowledge graphs and retrieval-augmented reasoning.
  • From pilots to internal marketplaces: Instead of predefined processes, AI mediates dynamic marketplaces where supply and demand for skills, resources, and data meet in real time, guided by the enterprise’s goals and policies.

The result is a shift from human-managed bureaucracy to AI-coordinated agility. Decision speed increases, friction falls, and collaboration scales naturally across boundaries.


What It Takes: The Capability and Governance Stack

The authors of The AI-Centered Enterprise propose a pragmatic framework for this transformation, the 3Cs: Calibrate, Clarify, and Channelize.

  1. Calibrate – Understand the types of AI your business requires. What decisions depend on structured vs. unstructured data? What precision or control is needed? This step ensures technology choices fit business context.
  2. Clarify – Map your value creation network: where do decisions happen, and how could context-aware intelligence change them? This phase surfaces where AI can augment, automate, or orchestrate work for tangible impact.
  3. Channelize – Move from experimentation to scaled execution. Build a repeatable path for deployment, governance, and continuous improvement. Focus on high-readiness, high-impact areas first to build credibility and momentum.

Underneath the 3Cs lies a capability stack that blends data engineering, knowledge representation, model orchestration, and responsible governance.

  • Context capture: unify data, documents, and interactions into a living knowledge graph.
  • Agentic orchestration: deploy systems of task, dialogue, and decision agents that coordinate across domains.
  • Policy and observability: embed transparency, traceability, and human oversight into every layer.

Organizationally, the AI-centered journey requires anchored agility — a balance between central guardrails (architecture, ethics, security) and federated innovation (business-owned use cases). As with digital transformations before it, success depends as much on leadership and learning as on technology.


Comparative Perspectives — and Where the Field Is Heading

The ideas in The AI-Centered Enterprise align with a broader shift seen across leading research and consulting work: a convergence toward AI as the enterprise operating system.

McKinsey: The Rise of the Agentic Organization

McKinsey describes the next evolution as the agentic enterprise: organizations where humans work alongside fleets of intelligent agents embedded throughout workflows. Early adopters are already redesigning decision rights, funding models, and incentives to harness this new form of distributed intelligence.
Their State of AI 2025 shows that firms capturing the most value have moved beyond pilots to process rewiring and AI governance, embedding AI directly into operations, not as a service layer.

BCG: From Pilots to “Future-Built” Firms

BCG’s 2025 research (Sep 2025) finds that only about 5% of companies currently realize sustainable AI value at scale. Those that do are “future-built”, treating AI as a capability, not a project. These leaders productize internal platforms, reuse components across business lines, and dedicate investment to AI agents, which BCG estimates already generate 17% of enterprise AI value, projected to reach nearly 30% by 2028.
This mirrors the book’s view of context-aware intelligence and marketplaces as the next sources of competitive advantage.

Harvard Business Review: Strategy and Human-AI Collaboration

HBR provides the strategic frame. In Competing in the Age of AI, Iansiti and Lakhani show how AI removes the traditional constraints of scale, scope, and learning, allowing organizations to grow exponentially without structural drag. Wilson and Daugherty’s Collaborative Intelligence adds the human dimension, redefining roles so that humans shift from operators to orchestrators of intelligent systems.

Convergence – A New Operating System for the Enterprise

Across these perspectives, the trajectory is clear:

  • AI is moving from standalone tools to an enterprise-wide coordination capability.
  • Work will increasingly flow through context-aware agents that understand intent and execute autonomously.
  • Leadership attention is shifting from proof-of-concept to operating-model redesign: governance, role architecture, and capability building.
  • The competitive gap will widen between firms that use AI to automate tasks and those that rebuild the logic of their enterprise around intelligence.

In short, the AI-centered enterprise is not a future vision — it is the direction of travel for every organization serious about reinvention in the next five years.


The AI-Centered Enterprise – A Refined Summary

The AI-Centered Enterprise (Bala, Balasubramanian & Joshi, 2025) offers one of the clearest playbooks yet for this new organisational architecture. The authors begin by defining the limitations of today’s AI adoption — fragmented pilots, a narrow reliance on structured data, and an overreliance on human intermediaries to bridge data, systems, and decisions.

They introduce Context-Aware AI (CAI) as the breakthrough: AI that understands not just information but the intent and context behind it, enabling meaning to flow seamlessly across functions. CAI underpins an “unshackled enterprise,” where collaboration, decision-making, and execution happen fluidly across digital boundaries.

The book outlines three core principles:

  1. Perceive context: Use knowledge graphs and natural language understanding to derive meaning from unstructured information — the true foundation of enterprise knowledge.
  2. Act with intent: Deploy AI agents that can interpret business objectives, not just execute instructions.
  3. Continuously calibrate: Maintain a human-in-the-loop approach to governance, ensuring AI decisions stay aligned with strategy and ethics.

Implementation follows the 3C framework — Calibrate, Clarify, Channelize — enabling leaders to progress from experimentation to embedded capability.

The authors conclude that the real frontier of AI is not smarter tools but smarter enterprises: organizations designed to sense, reason, and act as coherent systems of intelligence.


Closing Reflection

For executives navigating transformation, The AI-Centered Enterprise reframes the challenge. The question is no longer how to deploy AI efficiently, but how to redesign the enterprise so intelligence becomes its organizing logic.

Those who start now, building context-aware foundations, adopting agentic operating models, and redefining how humans and machines collaborate, will not just harness AI. They will become AI-centered enterprises: adaptive, scalable, and truly intelligent by design.

How AI is Reshaping Human Work, Teams, and Organisational Design

The implications of AI are profound: when individuals can deliver team-level output with AI, organisations must rethink not just productivity, but the very design of work and teams. A recent Harvard Business School and Wharton field experiment titled The Cybernetic Teammate offers one of the clearest demonstrations of this shift. Conducted with 776 professionals at Procter & Gamble, the study compared individuals and teams working on real product-innovation challenges, both with and without access to generative AI.

The results were striking:

  • Individuals using AI performed as well as, or better than, human teams without AI.
  • Teams using AI performed best of all.
  • AI also balanced out disciplinary biases—commercial and technical professionals produced more integrated, higher-quality outputs when assisted by AI.

In short, AI amplified human capability at both the individual and collective level. It became a multiplier of creativity, insight, and balance—reshaping the traditional boundaries of teamwork and expertise.

The Evidence Is Converging

Other large-scale studies reinforce this picture. A Harvard–BCG experiment showed consultants using GPT-4 were 12% more productive, 25% faster, and delivered work rated 40% higher in quality for tasks within the model’s “competence frontier.”


How Work Will Be Done Differently

These findings signal a fundamental redesign in how work is organised. The dominant model—teams collaborating to produce output—is evolving toward individual-with-AI first, followed by team integration and validation.

A typical workflow may now look like this:

AI-assisted ideation → human synthesis → AI refinement → human decision.

Work becomes more iterative, asynchronous, and cognitively distributed. Human collaboration increasingly occurs through the medium of AI: teams co-create ideas, share prompt libraries, and build upon each other’s AI-generated drafts.

The BCG study introduces a useful distinction:

  • Inside the AI frontier: tasks within the model’s competence—ideation, synthesis, summarisation—where AI can take the lead.
  • Outside the AI frontier: tasks requiring novel reasoning, complex judgment, or proprietary context—where human expertise must anchor the process.

Future roles will be defined less by function and more by how individuals navigate that frontier: knowing when to rely on AI and when to override it. Skills like critical reasoning, verification, and synthesis will matter more than rote expertise.


Implications for Large Enterprises

For established organisations, the shift toward AI-augmented work changes the anatomy of structure, leadership, and learning.

1. Flatter, more empowered structures.
AI copilots widen managerial spans by automating coordination and reporting. However, they also increase the need for judgmental oversight—requiring managers who coach, review, and integrate rather than micromanage.

2. Redefined middle-management roles.
The traditional coordinator role gives way to integrator and quality gatekeeper. Managers become stewards of method and culture rather than traffic controllers.

3. Governance at the “AI frontier.”
Leaders must define clear rules of engagement: what tasks can be automated, which require human review, and what data or models are approved. This “model–method–human” control system ensures both productivity and trust.

4. A new learning agenda.
Reskilling moves from technical training to cognitive fluency: prompting, evaluating, interpreting, and combining AI insights with business judgment. The AI-literate professional becomes the new organisational backbone.

5. Quality and performance metrics evolve.
Volume and throughput give way to quality, cycle time, rework reduction, and bias detection—metrics aligned with the new blend of human and machine contribution.

In short, AI doesn’t remove management—it redefines it around sense-making, coaching, and cultural cohesion.


Implications for Startups and Scale-Ups

While enterprises grapple with governance and reskilling, startups are already operating in an AI-native way.

Evidence from recent natural experiments shows that AI-enabled startups raise funding faster and with leaner teams. The cost of experimentation drops, enabling more rapid iteration but also more intense competition.

The typical AI-native startup now runs with a small human core and an AI-agent ecosystem handling customer support, QA, and documentation. The operating model is flatter, more fluid, and relentlessly data-driven.

Yet the advantage is not automatic. As entry barriers fall, differentiation depends on execution, brand, and customer intimacy. Startups that harness AI for learning loops—testing, improving, and scaling through real-time feedback—will dominate the next wave of digital industries.


Leadership Imperatives – Building AI-Enabled Work Systems

For leaders, the challenge is no longer whether to use AI, but how to design work and culture around it. Five imperatives stand out:

  1. Redesign workflows, not just add tools. Map where AI fits within existing processes and where human oversight is non-negotiable.
  2. Build the complements. Create shared prompt libraries, custom GPTs, structured review protocols, and access to verified data.
  3. Run controlled pilots. Test AI augmentation in defined workstreams, measure speed, quality, and engagement, and scale what works.
  4. Empower and educate. Treat AI literacy as a strategic skill—every employee a prompt engineer, every manager a sense-maker.
  5. Lead the culture shift. Encourage experimentation, transparency, and open dialogue about human-machine collaboration.

Closing Thought

AI will not replace humans or teams. But it will transform how humans and teams create value together.

The future belongs to organisations that treat AI not as an external technology, but as an integral part of their work design and learning system. The next generation of high-performing enterprises—large and small—will be those that master this new choreography between human judgment and machine capability.

AI won’t replace teams—but teams that know how to work with AI will outperform those that don’t.

More on this in one of my next newsletters.

The AI Strategy Imperative: Why Act Now

Two weeks ago, I completed IMD’s AI Strategy & Implementation program. It made the “act now” imperative unmistakable. In this newsletter I share the overarching insights I took away; in upcoming issues I’ll go deeper into specific topics and tools we used.


AI is no longer a tooling choice. It’s a shift in distribution, decision-making, and work design that will create new winners and losers. Leaders who move now—anchoring execution in clear problems, strong data foundations, and human–AI teaming—will compound advantage while others get trapped in pilots and platform dependency.


1) Why act now: the competitive reality

Distribution is changing. AI assistants and agentic workflows increasingly mediate buying journeys. If your brand isn’t represented in answers and automations, you forfeit visibility, traffic, and margin. This is a channel economics shift: AI determines which brands are surfaced—and which are invisible.

Platforms are consolidating power. Hyperscalers are embedding AI across their offerings. You’ll benefit from their acceleration, but your defensibility won’t come from platforms your competitors can also buy. The durable moat is your proprietary data, decision logic, and learning loops you control—not a longer vendor list.

Agents are getting real. Think of agents as “an algorithm that applies algorithms.” They decompose work into steps, call tools/APIs, and complete tasks with minimal supervision. Agent architectures will reshape processes, controls, and talent—pushing leaders to design for human–AI teams rather than bolt‑on copilots.
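The “algorithm that applies algorithms” idea can be illustrated with a minimal sketch: a controller that works through a plan and dispatches each step to a registered tool. The tool names and the hard-coded plan are assumptions for illustration; in a real agent a model would generate the plan and the tools would call live systems.

```python
from typing import Callable

# Minimal sketch of an agent loop: a registry of approved tools plus a
# controller that decomposes a goal into tool calls. Hypothetical example,
# not a specific framework's API.

TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a callable the agent is allowed to invoke."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return wrap

@tool("lookup_order")
def lookup_order(order_id: str) -> str:
    # Stand-in for a real order-system API call.
    return f"order {order_id} has shipped and arrives in 2 days"

@tool("draft_reply")
def draft_reply(status: str) -> str:
    # Stand-in for a model call that drafts customer-facing text.
    return f"Hi! Good news: {status}. Anything else we can help with?"

def run_agent(order_id: str) -> str:
    # In a real system a model produces the plan; it is fixed here so the
    # control flow (decompose goal -> call tools -> compose answer) is visible.
    status = TOOLS["lookup_order"](order_id)
    return TOOLS["draft_reply"](status)

print(run_agent("A-1042"))
```

The explicit tool registry is the control point the surrounding text argues for: the agent can only invoke what you have approved, which is where process, controls, and human oversight attach.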


2) The paradox: move fast and build right

The cost of waiting. Competitors pairing people with AI deliver faster at lower cost and start absorbing activities you still outsource. As internal production costs fall faster than coordination costs, vertical integration becomes attractive—accelerated by automation. Late movers face margin pressure and share erosion.

The risk of rushing. Many efforts stall because they “build castles on quicksand”—shiny proofs‑of‑concept on weak data and process foundations. Value doesn’t materialize, trust erodes, and budgets freeze. Urgency must be paired with disciplined follow-through so speed creates compounded learning.


3) A durable path to value: the 5‑Box Implementation Framework

A simple path from strategy deck to shipped value:

  1. Problem. Define a single business problem tied to P&L or experience outcomes. Write the metric up front; make the use case narrow enough to ship quickly.
  2. Data. Map sources, quality, access, and ownership. Decide what you must own versus can borrow; invest early in clean, governed data because it is the most sustainable differentiator.
  3. Tools. Choose the lightest viable model/agent and the minimum integration needed to achieve the outcome; keep it simple.
  4. People. Form cross‑functional teams (domain expertise + data + engineering + change) with one accountable owner. Team design—not individual heroics—drives performance.
  5. Feedback loops. Instrument production to compare predicted vs. actual outcomes. The delta gives valuable insights and becomes new training data.
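Step 5 can be made concrete with a small sketch: log each prediction, attach the actual outcome when it arrives, and treat records where prediction and reality diverge as fresh training data. All field names and the threshold here are illustrative.

```python
from dataclasses import dataclass, field

# Minimal sketch of the feedback loop in box 5: log each prediction, record
# the actual outcome later, and turn the deltas into new training examples.

@dataclass
class PredictionLog:
    records: list = field(default_factory=list)

    def log_prediction(self, case_id: str, features: dict, predicted: float):
        self.records.append({"case_id": case_id, "features": features,
                             "predicted": predicted, "actual": None})

    def log_outcome(self, case_id: str, actual: float):
        for record in self.records:
            if record["case_id"] == case_id:
                record["actual"] = actual

    def training_examples(self, min_error: float = 0.0) -> list:
        """Completed records whose prediction error exceeds the threshold
        become fresh training data for the next model iteration."""
        return [r for r in self.records
                if r["actual"] is not None
                and abs(r["predicted"] - r["actual"]) > min_error]

log = PredictionLog()
log.log_prediction("lead-7", {"segment": "smb"}, predicted=0.80)
log.log_outcome("lead-7", actual=0.35)   # the model was overconfident
print(len(log.training_examples()))      # → 1: this delta feeds retraining
```

Instrumenting production this way is what turns the predicted-vs-actual delta from an anecdote into a data asset the next model version learns from.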

Your defensive moat is data + people + decisions + learning loops, not your vendor list.


4) Moving the Human Workforce to more Complex Tasks

While AI absorbs simple and complicated work (routine tasks, prediction, pattern recognition), the human edge shifts decisively to complex and chaotic problems—where cause and effect are only clear in retrospect or not at all. This economic reality forces immediate investment in people as internal work is increasingly handled by AI–human teams.

The immediate talent pivot. Leaders must signal—and codify—new “complexity competencies”: adaptive problem‑solving, systems thinking, comfort with ambiguity, and AI product‑ownership (defining use cases, data needs, acceptance criteria, and evaluation).

Organizational design for learning.

  • Security: Build psychological safety so smart experiments are rewarded and failures fuel learning, not blame.
  • Convenience: Make adoption of new AI tools easy—frictionless access, clear guidance, and default enablement.
  • Process: A weak human with a tool and a better process will outperform a strong human with a tool and a worse process. Define roles, handoffs, and measurement so teams learn in the loop.

5) Where ROI shows up first

There is much discussion about where AI really delivers benefits. Four areas show consistently reported results:

Content. Marketing and knowledge operations see immediate throughput gains and more consistent quality. Treat this as a production system: govern sources, version prompts/flows, and measure impact.

Code. Assistance, testing, and remediation compress cycle time and reduce defects. Success depends on clear guardrails, reproducible evaluation, and tight feedback from production incidents into your patterns.

Customer. Service and sales enablement benefit from faster resolution and personalization at scale. Start with narrow intents, then expand coverage as accuracy and routing improve.

Creative. Design, research, and planning benefit from rapid exploration and option value. Use agentic research assistants with human review to widen the solution space before you converge.


6) Organize to scale without chaos

Govern the reality, not the slide. Shadow AI already exists. Enable it safely with approved toolkits, lightweight guardrails, and clear data rules—so exploration happens inside the tent, not outside it.

CoE vs. federation. Avoid the “cost‑center CoE” trap. Stand up a small enablement core (standards, evaluation, patterns), but push delivery into business‑owned pods that share libraries and reviews. This balances consistency with throughput.

Human + AI teams. Process design beats heroics. Make handoffs explicit, instrument outcomes, and build psychological safety so teams learn in the loop. A weak human with a machine and a better process will outperform a strong human with a machine and a worse process.


What this means for leaders

  • Move talent to handle complexity. Codify new competencies (adaptive problem‑solving, systems thinking, comfort with ambiguity, AI product‑ownership) and design organizational systems that accelerate learning (security, convenience, process).
  • Your moat is data + people + decisions + learning loops. Platforms accelerate you, but they’re available to everyone. Proprietary, well‑governed data feeding instrumented processes is what compounds.
  • Ship value early; strengthen foundations as you scale. Start where ROI is proven (content, code, customer, creative), then use that momentum to fund data quality and governance.
  • Design for agents and teams now. Architect processes assuming agents will do steps of work and humans will supervise, escalate, and improve the system. That’s how you create repeatable outcomes.

Lifelong Learning in the Age of AI – My Playbook

In September 2025, I received two diplomas: IMD’s AI Strategy & Implementation and Nyenrode University’s Corporate Governance for Supervisory Boards. I am proud of both—more importantly, they cap off a period in which I have deliberately rebuilt how I learn.

With AI accelerating change and putting top-tier knowledge at everyone’s fingertips, the edge goes to leaders who learn—and apply—faster than the market moves. In this issue I am not writing theory; I am sharing my learning journey of the past six months—what I did, what worked, and the routine I will keep using. If you are a leader, I hope this helps you design a learning system that fits a busy executive life.


My Learning System – 3 pillars

1) Structured learning

This helped me to gain the required depth:

  • IMD — AI Strategy & Implementation. I connected strategy to execution: where AI creates value across the business, and how to move from pilots to scaled outcomes. In upcoming newsletters, I will share insights on specific topics we went deep on in this course.
  • Nyenrode — Corporate Governance for Supervisory Boards. I deepened my view on board-level oversight—roles and duties, risk/compliance, performance monitoring, and strategic oversight. I authored my final paper on how to close the digital gap in supervisory boards (see also my earlier article).
  • Google/Kaggle’s 5-day Generative AI Intensive. Hands-on labs demystified how large language models work: what is under the hood, why prompt quality matters, where workflows can break, and how to evaluate outputs against business goals. It gave me an understanding of how to improve the use of these models.

2) Curated sources

These sources extended the breadth of my understanding of AI in practice.

2a. Books

Below I give a few examples; you can find more book summaries and reviews on my website: www.bestofdigitaltransformation.com/digital-ai-insights.

  • Co-Intelligence: a pragmatic mindset for working with AI—experiment, reflect, iterate.
  • Human + Machine: how to redesign processes around human–AI teaming rather than bolt AI onto old workflows.
  • The AI-Savvy Leader: what executives need to know to steer outcomes without needing to code.

2b. Research & articles
I built a personal information base with research from: HBR, MIT, IMD, Gartner, plus selected pieces from McKinsey, BCG, Strategy&, Deloitte, and EY. This keeps me grounded in capability shifts, operating-model implications, and the evolving landscape.

2c. Podcasts & newsletters
Two that stuck: AI Daily Brief and Everyday AI. Short, practical audio overviews with companion newsletters so I can find and revisit sources. They give me a quick daily pulse without drowning in feeds.

3) AI as my tutor

I am using AI to get personalised learning support.

3a. Explain concepts. I use AI to clarify ideas, contrast approaches, and test solutions using examples from my context.
3b. Create learning plans. I ask for step-by-step learning journeys with milestones and practice tasks tailored to current projects.
3c. Drive my understanding. I use different models to create learning content, provide assignments, and quiz me on my understanding.


How my journey unfolded

Here is how it played out.

1) Started experimenting with ChatGPT.
I was not an early adopter; I joined when GPT-4 was already strong. Like many, I did not fully trust it at first. I began with simple questions and asked the model to show how it interpreted my prompts. That built confidence without creating risk or frustration.

2) Built foundations with books.
I read books like Co-Intelligence, Human + Machine, and The AI-Savvy Leader. These created a common understanding of where AI helps (and does not), how to pair humans and machines, and how to organise for impact. For each book I wrote a review to anchor my learnings and share them on my website.

3) Added research and articles.
I set up a repository with research across HBR/MIT/IMD/Gartner and selected consulting research. This kept me anchored in evidence and applications, and helped me track the operational implications for strategy, data, and governance.

4) Tried additional models (Gemini and Claude).
Rather than picking a “winner,” I used them side by side on real tasks. The value was in contrast—seeing how different models frame the same question, then improving the final answer by combining perspectives. Letting models critique each other surfaced blind spots.

5) Went deep with Google + Kaggle.
The 5-day intensive course clarified what is under the hood: tokens/vectors, why prompts behave the way they do, where workflows tend to break, and how to evaluate outputs beyond “sounds plausible.” The exercises translated directly into better prompt design and started my understanding of how agents work.

6) Used NotebookLM for focused learning.
For my Nyenrode paper, I uploaded the key articles and interacted only with that corpus. NotebookLM generated grounded summaries, surfaced insights I might have missed, and reduced the risk of invented citations (by sticking to the uploaded resources). The auto-generated “podcast” is one of the coolest features I experienced and really helps to learn about the content.

7) Added daily podcasts/newsletters to stay current.
The news volume on AI is impossible to track end-to-end. AI Daily Brief and Everyday AI give me a quick scan each morning and links worth saving for later deep dives. This makes the difference between staying aware and constantly feeling behind.

8) Learned new tools and patterns at IMD.

  • DeepSeek helped me debug complex requests by showing how the model with reasoning interpreted my prompt—a fantastic way to unravel complex problems.
  • Agentic models like Manus showed the next step: chaining actions and tools to complete tasks end-to-end.
  • CustomGPTs (within today’s LLMs) let me encode my context, tone, and recurring workflows, boosting consistency and speed across repeated tasks.

Bring it together with a realistic cadence.

Leaders do not need another to-do list; they need a routine that works. Here is the rhythm I am using now:

Daily

  • Skim one high-signal newsletter or listen to a podcast.
  • Capture questions to explore later.
  • Learn by doing with the various tools.

Weekly

  • Learn: read one or more papers/articles on various AI-related topics.
  • Apply: use one idea on a live problem; interact with AI to go deeper.
  • Share: create my weekly newsletter based on my learnings.

Monthly

  • Pick one learning topic and read a number of primary sources, not just summaries.
  • Draft an experiment with goal, scope, success metric, risks, and data needs, using AI to pressure-test assumptions.
  • Review with thought leaders or colleagues for challenge and alignment.

Quarterly

  • Read at least one book that expands your mental models.
  • Create a summary for my network. Teaching others cements my own understanding.

(Semi-)Annually

  • Add a structured program or certificate to go deep and to benefit from peer debate.

Closing

The AI era compresses the shelf life of knowledge. Waiting for a single course is no longer enough. What works is a learning system: structured learning for depth, curated sources for breadth, and AI as your tutor for speed. That has been my last six months, and it is a routine I will continue.

From Org Charts to Work Charts – Designing for Hybrid Human–Agent Organisations

The org chart is no longer the blueprint for how value gets created. As Microsoft’s Asha Sharma puts it, “the org chart needs to become the work chart.” When AI agents begin to own real slices of execution—preparing customer interactions, triaging tickets, validating invoices—structure must follow the flow of work, not the hierarchy of titles. This newsletter lays out what that means for leaders and how to move, decisively, from boxes to flows.


Why this is relevant now

Agents are leaving the lab. The conversation has shifted from “pilot a chatbot” to “re-architect how we deliver outcomes.” Boards and executive teams are pushing beyond experiments toward embedded agents in sales, service, finance, and supply chain. This is not a tooling implementation—it’s an operating-model change.

Hierarchy is flattening. When routine coordination and status reporting are automated, you need fewer layers to move information and make decisions. Roles compress; accountabilities become clearer; cycle times shrink. The management burden doesn’t disappear—it changes. Leaders spend less time collecting updates and more time setting direction, coaching, and owning outcomes.

Enterprises scale. AI-native “tiny teams” design around flows—the sequence of steps that create value—rather than traditional functions. Large organizations shouldn’t copy their size; they should copy this unit of design. Work Charts make each flow explicit, assign human and agent owners, and let you govern and scale it across the enterprise.


What is a Work Chart?

A Work Chart is a living map of how value is produced—linking outcomes → end-to-end flows → tasks → handoffs—and explicitly assigning human owners and agent operators at each step. Where an org chart shows who reports to whom, a Work Chart shows:

  • Where the work happens – the flow and its stages
  • Who is accountable – named human owners of record
  • What is automated – agents with charters and boundaries
  • Which systems/data/policies apply – the plumbing and guardrails
  • How performance is measured – SLAs, exceptions, error/rework, latency

A Work Chart is your work graph made explicit—connecting goals, people, and permissions so agents can act with context and leaders can govern outcomes.
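To make the structure above tangible, here is a minimal sketch of a Work Chart as data—a hypothetical model, not any vendor's schema; the flow, stage names, and SLA fields are all invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Stage:
    name: str
    owner: str                    # named human owner of record
    agent: Optional[str] = None   # agent operator, if this step is automated
    sla_hours: Optional[float] = None

@dataclass
class Flow:
    outcome: str                  # the business outcome this flow produces
    stages: list = field(default_factory=list)

    def automated_stages(self) -> list:
        # Which steps are delegated to agents (humans still own the outcome).
        return [s.name for s in self.stages if s.agent]

invoice_flow = Flow(
    outcome="Supplier invoices validated within 24h",
    stages=[
        Stage("intake", owner="AP lead", agent="invoice-triage-agent", sla_hours=1),
        Stage("validation", owner="AP lead", agent="invoice-validator", sla_hours=4),
        Stage("exception review", owner="Finance controller"),  # human-only step
    ],
)
print(invoice_flow.automated_stages())  # ['intake', 'validation']
```

Even this toy version captures the key distinction from an org chart: every stage has both a human owner of record and, where relevant, an explicit agent operator.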


Transformation at every level

Board / Executive Committee
Set policy for non-human resources (NHRs) just as you do for capital and people. Define decision rights, guardrails, and budgets (compute/tokens). Require blended KPIs—speed, cost, risk, quality—reported for human–agent flows, not just departments. Make Work Charts a standing artifact in performance reviews.

Enterprise / Portfolio
Shift from function-first projects to capability platforms (retrieval, orchestration, evaluation, observability) that any BU can consume. Keep a registry of approved agents and a flow inventory so portfolio decisions always show which flows, agents, and data they affect. Treat major flow changes like product releases—versioned, reversible, and measured.

Business Units / Functions
Turn priority processes into agent-backed services with clear SLAs and a named human owner. Publish inputs/outputs, boundaries (what the agent may and may not do), and escalation paths. You are not “installing AI”; you’re standing up services that can be governed and improved.

Teams
Maintain an Agent Roster (purpose, inputs, outputs, boundaries, logs). Fold Work Chart updates into existing rituals (standups, QBRs). Managers spend less time on status and more on coaching, exception handling, and continuous improvement of the flow.

Individuals
Define personal work charts for each role (the 5–7 recurring flows they own) and the agents they orchestrate. Expect role drift toward judgment, relationships, and stewardship of AI outcomes.


Design principles – what “good” looks like

  1. Outcome-first. Start from customer journeys and Objectives and Key Results (OKRs); redesign flows to meet them.
  2. Agents as first-class actors. Every agent has a charter, a named owner, explicit boundaries, and observability from day one.
  3. Graph your work. Connect people, permissions, and policies so agents operate with context and least-privilege access.
  4. Version the flow. Treat flow changes like product releases—documented, tested, reversible, and measured.
  5. Measure continuously. Track time-to-outcome, error/rework, exception rates, and SLA adherence—reviewed where leadership already looks (business reviews, portfolio forums).

Implementation tips

1) Draw the Work Chart for mission-critical journeys
Pick one customer journey, one financial core process, and one internal productivity flow. Map outcome → stages → tasks → handoffs. Mark where agents operate and where humans remain owners of record. This becomes the executive “single source” for how the work actually gets done.

2) Create a Work Chart Registry
Create a lightweight, searchable registry that lists every flow, human owner, agent(s), SLA, source, and data/permission scope. Keep it in the systems people already use (e.g., your collaboration hub) so it becomes a living reference, not a slide deck.

3) Codify the Agent Charters
For each agent on the Work Chart, publish a one-pager: Name, Purpose, Inputs, Outputs, Boundaries, Owner, Escalation Path, Log Location. Version control these alongside the Work Chart so changes are transparent and auditable.
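Because the charter fields are so regular, they version-control well as structured data. A hypothetical sketch (field names follow the one-pager above; the example agent and values are invented):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentCharter:
    name: str
    purpose: str
    inputs: tuple
    outputs: tuple
    boundaries: tuple      # actions the agent may NOT take
    owner: str             # human owner of record
    escalation_path: str
    log_location: str

    def may_perform(self, action: str) -> bool:
        # Least-privilege check: anything listed in boundaries is forbidden.
        return action not in self.boundaries

charter = AgentCharter(
    name="invoice-validator",
    purpose="Validate supplier invoices against purchase orders",
    inputs=("invoice PDF", "PO record"),
    outputs=("validation verdict", "exception flag"),
    boundaries=("approve payment", "change PO"),
    owner="AP lead",
    escalation_path="Finance controller",
    log_location="logs/invoice-validator/",
)
print(charter.may_perform("approve payment"))  # False
```

Freezing the dataclass mirrors the governance intent: a charter changes through a versioned release, not an in-place edit.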

4) Measure where the work happens
Instrument every node with flow health metrics—latency, error rate, rework, exception volume. Surface them in the tools leaders already use (BI dashboards, exec scorecards). The goal is to manage by flow performance, not anecdotes.
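As an illustration of node-level instrumentation—event fields and values are hypothetical—the four flow health metrics can be aggregated from raw task events like this:

```python
from statistics import mean

def flow_health(events: list) -> dict:
    """Aggregate flow health metrics from per-task events.

    Each event: {"latency_s": float, "error": bool, "rework": bool, "exception": bool}
    """
    n = len(events)
    return {
        "avg_latency_s": round(mean(e["latency_s"] for e in events), 1),
        "error_rate": sum(e["error"] for e in events) / n,
        "rework_rate": sum(e["rework"] for e in events) / n,
        "exception_volume": sum(e["exception"] for e in events),
    }

events = [
    {"latency_s": 30.0, "error": False, "rework": False, "exception": False},
    {"latency_s": 45.0, "error": True,  "rework": True,  "exception": False},
    {"latency_s": 60.0, "error": False, "rework": False, "exception": True},
]
print(flow_health(events)["exception_volume"])  # -> 1
```

Feeding a summary like this into existing BI dashboards is what shifts reviews from anecdotes to flow performance.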

5) Shift budgeting from headcount to flows
Attach compute/SLA budgets to the flows in your Work Chart. Review them at portfolio cadence. Fund increases when there’s demonstrable improvement in speed, quality, or risk. This aligns investment with value creation rather than with org boxes.

6) Communicate the new social contract
Use the Work Chart in town halls and leader roundtables to explain what’s changing, why it matters, and how roles evolve. Show before/after charts for one flow to make the change tangible. Invite feedback; capture exceptions; iterate.


Stop reorganizing boxes – start redesigning flows. Mandate that each executive publishes the first Work Chart for one mission-critical journey—complete with agent charters, SLAs, measurements, and named owners of record. Review it with the same rigor you apply to budget and risk. Organizations that do this won’t just “adopt AI”; they’ll build a living structure that mirrors how value is created—and compounds it.