Why 88% of Companies Use AI but Only 6% See Real Results: What McKinsey’s Research Really Tells Us

Over the past year, McKinsey – itself busy reinventing its business model with AI – has published a constant flow of AI research: adoption surveys, sector deep-dives, workforce projections, technology roadmaps. I’ve read these at different moments in time. For this newsletter, I synthesized 25 of those reports into one overview (leveraging NotebookLM).

The picture that emerges is both clearer and more confronting than any of the individual pieces on their own.

The headline is simple: AI is now everywhere, but real value is highly concentrated. A small group of “AI high performers” is pulling away from the pack—economically, organizationally, and technologically. The gap is about to widen further as we move from today’s generative tools to tomorrow’s agentic, workflow-orchestrating systems.

This isn’t a technology story. It’s a strategy, operating model, and governance story.


AI is everywhere – value is not

McKinsey’s research shows that almost 9 in 10 organizations now use AI somewhere in the business, typically in one function or a handful of use cases. Yet only about a third are truly scaling AI beyond pilots, and just 6% can attribute 5% or more EBIT uplift to AI.

Most organizations are stuck in what I call the “pilot loop”:

  1. Launch a promising proof of concept.
  2. Prove that “AI works” in a narrow setting.
  3. Hit organizational friction – ownership, data, process, risk.
  4. Park the use case and start another pilot.

On paper, these companies look active and innovative. In reality, they are accumulating “AI debt”: a growing gap between what they could achieve and what the real leaders are already realizing in terms of growth, margin, and capability.

The research is clear: tools are no longer a differentiator. Your competitive position is defined by your ability to industrialize AI – to embed it deeply into how work is done, not just where experiments are run.


The 6% success factors: what AI high performers actually do

The small cohort of high performers behaves in systematically different ways. Four contrasts stand out:

  1. They pursue growth, not just efficiency
    Most organizations still frame AI as a cost and productivity story. High performers treat efficiency as table stakes and put equal weight on new revenue, new offerings, and new business models. AI is positioned as a growth engine, not a shared-service optimization tool.
  2. They redesign workflows, not just add tools
    This is the single biggest differentiator. High performers are almost three times more likely to fundamentally redesign workflows around AI. They are willing to change decision rights, process steps, roles, and controls so that AI is embedded at the core of how work flows end-to-end.
  3. They lead from the C-suite
    In high performers, AI is not owned by a digital lab, an innovation team, or a single function. It has visible, direct sponsorship from the CEO or a top-team member, with clear, enterprise-wide mandates. That sponsorship is about more than budget approval; it’s about breaking silos and forcing trade-offs.
  4. They invest at scale and over time
    Over a third of high performers dedicate more than 20% of their digital budgets to AI. Crucially, that spend is not limited to models and tools. It funds data foundations, workflow redesign, change management, and talent.

Taken together, these behaviours show that AI leadership is a management choice, not a technical one. The playbook is available to everyone, but only a few are willing to fully commit.


The workforce is already shifting – and we’re still early

McKinsey’s data also cuts through a lot of speculation about jobs and skills. Three signals are particularly important:

  • Workforce impact is real and rising
    In the past year, 17% of respondents reported workforce reductions in at least one function due to AI. Looking ahead, that figure jumps to 30% expecting reductions in the next year as AI scales further.
  • The impact is uneven by function
    The biggest expected declines are in service operations and supply chain management, where processes are structured and outcomes are measurable. In other areas, hiring and reskilling are expected to offset much of the displacement.
  • New roles and skills are emerging fast
    Organizations are already hiring for roles like AI compliance, model risk, and AI ethics, and expect reskilling efforts to ramp up significantly over the next three years.

The message for leaders is not “AI will take all the jobs,” but rather:

If you’re not deliberately designing a human–AI workforce strategy that covers role redesign, reskilling, mobility, and governance, it will happen to you by default.


The next wave: from copilots to co-workers

Most of the current adoption story is still about generative tools that assist individual knowledge workers: drafting content, summarizing documents, writing code.

McKinsey’s research points to the next phase: Agentic AI – systems that don’t just respond to prompts but plan, orchestrate, and execute multi-step workflows with limited human input.

Three shifts matter here:

  1. From tasks to workflows
    We move from “AI helps write one email” to “AI manages the full case resolution process”—from intake to investigation, decision, and follow-up.
  2. From copilots to virtual co-workers
    Agents will interact with systems, trigger actions, call APIs, and collaborate with other agents. Humans move further upstream (framing, oversight, escalation) and downstream (relationship, judgement, exception handling).
  3. From generic tools to deep verticalization
    The most impactful agents will be highly tailored to sector and context: claims orchestration in insurance, demand planning in manufacturing, clinical operations in pharma, and so on.

Today, around six in ten organizations are experimenting with AI agents, but fewer than one in ten is scaling them in any function. The gap between high performers and everyone else is set to widen dramatically as agents move from proof of concept to production.


So what should leaders actually do?

The gap between high performers and everyone else is widening now, not in five years. As agentic AI moves from proof of concept to production, the organizations still running pilots will find themselves competing against fundamentally different operating models—ones that are faster, more scalable, and structurally more profitable.

If you sit on an executive committee or board, you might start with these questions:

  1. Ambition – Are we using AI mainly to cut cost, or do we have a clear thesis on how it will create new revenue, offerings, and business models?
  2. Workflow rewiring – For our top 5–10 value pools, have we actually redesigned end-to-end workflows around AI, or are we just bolting tools onto legacy processes?
  3. Ownership – Who on the top team is truly accountable for AI as an enterprise-wide agenda—not just for “experiments,” but for operating model, risk, and value delivery?
  4. Workforce strategy – Do we have a concrete plan for role redesign, reskilling, and new AI governance roles over the next 3–5 years, backed by budget?
  5. Foundations and governance – Are we treating data, infrastructure, and sustainability as strategic assets, with the same rigor as financial capital and cybersecurity?

The era of casual experimentation is over. McKinsey’s research makes one thing brutally clear: the organizations that will dominate the agentic era won’t be those with the most impressive demos or the longest list of pilots, but those willing to answer “yes” to all five questions – and back those answers with real budget, real accountability, and real organizational change.

The 6% are already there. The question is whether you’ll join them—or explain to your board why you didn’t.

Where Copilot Actually Saves Time, and How to Make It Happen!

Microsoft 365 Copilot is officially live in many organisations. Licences bought, pilots run, internal comms sent. Yet most employees still open blank Word docs, scroll through endless email threads, and search SharePoint by hand. Leaders are starting to ask: What is the value we get from this investment?

This isn’t a technology problem. It’s a work problem. And we can fix it!

Independent studies and government pilots are already showing roughly 30–40% time savings on first drafts and 20–30 minutes saved per long document when people actually use Copilot properly. The gap is not in the potential. It’s in how we introduce it into everyday work.

This article demystifies where Copilot really creates value, why usage is lagging, and what leaders can do to turn licences into impact.


Why Value Isn’t Showing Up

Four issues usually kill Copilot value:

1. People don’t know what it’s for
Most employees have heard the AI story, but can’t answer a basic question: “When, and how, in my day, should I use Copilot?” Without clear scenarios and simple guidance, the Copilot icon is just another button.

2. Old habits beat new tools
People know how to push through work the old way: write from scratch, forward emails, dig through folders. Some are already comfortable with ChatGPT in a browser and don’t see why they should change.

3. It’s treated as an IT rollout, not a work redesign
Turning Copilot on in Word, Outlook and Teams is easy. Redesigning how your organisation drafts documents, runs meetings and finds information is hard. Too many programmes stop after the feature is turned on.

4. Governance anxiety stalls decisions
Security, legal and compliance teams see real risk: data exposure, poor-quality outputs, regulatory questions. Without clear guardrails, the safest option is to keep Copilot locked in “pilot” mode.

The upside: these are leadership and design issues, not technical limitations. That means they can be solved.


Where Copilot Actually Delivers: Five Everyday Value Zones

The biggest, most reliable gains so far cluster around five very familiar patterns of knowledge work.

1. Kill the blank page: 0 to 60% in minutes

Impact: Fast first drafts for documents, decks and emails.

Copilot shines when you ask it to get you from nothing to a solid starting point:

  • Strategy papers, board packs, proposals, policies in Word
  • First-cut slide decks in PowerPoint from a brief or source document
  • Long or nuanced emails in Outlook

This “let Copilot write the ugly first draft” consistently shows the largest time savings and strong perceived quality improvements.

2. Turn every meeting into instant documentation

Impact: Decisions, actions and risks captured without a human scribe.

In Teams meetings, Copilot can:

  • Produce a structured summary
  • Pull out decisions, risks and action items
  • Answer questions afterwards: “What did we agree about X?”

This use case is easy to explain. Nobody wants to take minutes; everyone benefits from clear follow-up. In early pilots, meeting summarisation is one of the most frequently used and highest-rated features.

3. Find the right document, not just a document

Impact: Reduce time wasted hunting for information across Outlook, Teams and SharePoint.

Knowledge workers spend a serious chunk of their week just looking for things. Microsoft 365 Chat turns Copilot into a cross-suite concierge:

  • “Summarise what we know about client Y.”
  • “Show me the latest approved deck for product X.”
  • “What did we decide last quarter on pricing for Z?”

When your content already lives in Microsoft 365, this “ask before you search” habit cuts through version chaos and gives people back time and focus.

4. Manage email overload

Impact: Faster triage, clearer responses, less mental drag.

Copilot won’t solve email, but it makes it more manageable:

  • Summarising long threads so you can decide quickly what matters
  • Drafting responses and adjusting tone
  • Cleaning up structure and language

The per-email time saving might be modest, but the reduction in cognitive load is real. Copilot helps you get through the noise and focus on the handful of messages that need your judgment.

5. Accelerate light analysis and reporting in Excel

Impact: Quicker insights and recurring reports from structured data.

In Excel, Copilot can:

  • Explain what’s going on in a dataset
  • Suggest ways to slice the data
  • Create charts and narratives
  • Speed up recurring performance or KPI reporting

This is high-value but not plug-and-play. It works best with reasonably clean data and users who understand the business context. Think of it as a force multiplier for analysts and power users, not a magic button.

In short, Copilot’s sweet spot today is writing, summarising and searching across your existing Microsoft estate, plus selected analytical scenarios for more advanced users.


What Successful Organisations Do Differently

Organisations that are getting real value from Copilot have a few things in common.

They start from work, not from the tool
They don’t launch with “we’re rolling out Copilot”. They start with “we want better strategy papers, better client proposals, better governance packs” – and then show how Copilot changes how those artefacts are produced.

They build Copilot into the flow of work
Instead of creating a separate “AI zone”, they embed Copilot where work already happens: inside Teams meetings, in their intranet, alongside existing forms and workflows. People don’t go to Copilot; Copilot meets people in the tools they use all day.

They invest in skills and champions
They replace generic AI awareness sessions with short, scenario-based training: “Here’s how we now write our monthly report with Copilot.” They build champion networks in each function – credible people who share prompts, examples and tips in context.

They create guardrails instead of red tape
Risk, security and legal are involved early. Data access is configured carefully. Simple rules are agreed: always review outputs; don’t paste in external confidential data; use human judgment on important decisions.

Where leaders design Copilot into real work, usage scales. Where they simply procure it, usage stalls.


From Licences to Value: A Practical Plan

The first move is to be selective about where Copilot should create value. Instead of “rolling it out to everyone”, ask: where does knowledge work hurt most today? For most organisations that’s strategy documents and board packs, major client proposals, heavy governance cycles, and monthly reporting. Map those pain points to the five value zones and choose a small set of anchor use cases – for example, first drafts for leadership papers, meeting summaries for key forums, and cross-tenant search for major programmes.

The second move is to design the experience around those use cases. Be concrete: who uses Copilot, in which app, at what moment, and for what output. Replace generic AI briefings with sessions where teams produce real work with Copilot in the loop: a live board paper, a deal review, a performance report. People see their own content, just created differently. At the same time, identify a few credible champions in each area who experiment, refine prompts, and share examples with their colleagues.

The third move is to make experimentation feel safe. Bring risk, security and legal into the conversation early to agree which repositories Copilot can access, where restrictions apply, and a few simple rules: outputs are always reviewed, highly sensitive external information isn’t pasted into prompts, and human judgment remains the final step on important decisions. Communicate this in plain language. Clear boundaries do more for adoption than long policy decks; when people know the rules, they’re much more willing to try new ways of working.

The final move is to measure what matters and iterate. A small set of indicators is enough: time to first draft, time to prepare key meetings, time spent searching, plus self-reported usefulness and quality. Combine those with a few concrete stories – the board pack done in half the time, the proposal turned around in a day, the project review where nobody had to take notes – and you have the basis to decide where to extend licences, where to deepen training, and where to adjust governance. Over a few cycles, Copilot stops being “an AI project” and becomes part of how work gets done.


The winners in the Copilot era won’t be those with the most licences. They’ll be those who embed Copilot into daily work – better drafts, better meetings, better decisions.

Start with three things: pick your use cases, brief your champions, and decide how you’ll measure success.

How to use AI whilst keeping your Data Private and Safe

AI can pay off quickly—copilots that accelerate knowledge work, smarter customer operations, and faster software delivery. The risk is not AI itself; it is how you handle data. Look at privacy (what you expose), security (who can access), compliance (what you can prove), and sovereignty (where processing happens) as separate lenses. The playbook is simple: classify the data you’ll touch; choose one of four deployment models; apply a few guardrails—identity, logging, and simple rules people understand; then measure value and incidents. Start “as open as safely possible” with the less sensitive cases for speed, and move to tighter control as sensitivity increases.


What “Private & Safe” actually means

Private and safe AI means using the least amount of sensitive information, tightly controlling who and what AI can access, proving that your handling meets legal and industry obligations, and ensuring processing happens in approved locations. In practice you minimise exposure, authenticate users, encrypt and log activity, and keep a clear record of decisions and data flows so auditors and customers can trust the outcome.

To make this work across the enterprise, bring the right people together around each use case. The CIO and CISO own the platform choices and controls; the CDO curates which data sources are approved; Legal sets lawful use and documentation; business owners define value and success; HR and Works Council get involved where employee data or work patterns change. Run a short, repeatable intake: describe the use case, identify the data, select the deployment model, confirm the controls, and agree how quality and incidents will be monitored.


How to classify “Sensitive Data” – a simple four-tier guide

Not all data is equal. Classifying it upfront tells you how careful you need to be and which setup to use.

Tier 1 – Low sensitivity. Think public information or generic content such as first drafts of marketing copy. Treat this as the training ground for speed: use packaged tools, keep records of usage, and avoid connecting unnecessary internal sources.

Decision check: “Could this appear on our website tomorrow?” Yes = Tier 1

Tier 2 – Internal. Everyday company knowledge—policy summaries, project notes, internal wikis. Allow AI to read from approved internal sources, but restrict access to teams who need it and retain basic logs so you can review what was asked and answered.

Decision check: “Would sharing this externally require approval?” Yes = Tier 2+

Tier 3 – Confidential. Material that would harm you or your customers if leaked—client lists, pricing models, source code. Use controlled company services that you manage, limit which repositories can be searched, keep detailed activity records, and review results for quality and leakage before scaling.

Decision check: “Would leakage breach a contract or NDA?” Yes = Tier 3+

Tier 4 – Restricted or regulated. Legally protected or mission-critical information—patient or financial records, trade secrets, M&A. Run in tightly controlled environments you operate, separate this work from general productivity tools, test thoroughly before go-live, and document decisions for auditors and boards.

Decision check: “Is this regulated or business-critical?” Yes = Tier 4
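The four decision checks above can be read as a simple triage: ask the strictest question first, and let the highest check that fires set the tier. A minimal sketch, assuming the yes/no answers are already known for a given piece of data (the function name and flags are illustrative, not part of any standard):

```python
# Hypothetical helper mapping the four decision checks to a tier.
# The strictest check that applies wins; when nothing applies clearly,
# default to Tier 2 (internal) rather than Tier 1.

def classify(public_ok: bool, needs_approval_to_share: bool,
             leak_breaches_contract: bool, regulated_or_critical: bool) -> int:
    """Return the data-sensitivity tier (1 = low ... 4 = restricted)."""
    if regulated_or_critical:          # "Is this regulated or business-critical?"
        return 4
    if leak_breaches_contract:         # "Would leakage breach a contract or NDA?"
        return 3
    if needs_approval_to_share:        # "Would sharing this externally require approval?"
        return 2
    if public_ok:                      # "Could this appear on our website tomorrow?"
        return 1
    return 2                           # when in doubt, treat as internal at minimum

# Example: a pricing model whose leakage would breach an NDA lands in Tier 3.
tier = classify(public_ok=False, needs_approval_to_share=True,
                leak_breaches_contract=True, regulated_or_critical=False)
```

The ordering matters: a client list may also be “internal,” but the NDA check dominates, so it gets Tier 3 controls, not Tier 2.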


Common mistakes – and how to fix them

Using personal AI accounts with company data.
This bypasses your protections and creates invisible risk. Make it company accounts only, block personal tools on the network, and provide approved alternatives that people actually want to use.

Assuming “enterprise tier” means safe by default.
Labels vary and settings differ by vendor. Ask for clear terms: your questions and documents are not used to improve public systems, processing locations are under your control, and retention of queries and answers is off unless you choose otherwise.

Building clever assistants without seeing what actually flows.
Teams connect documents and systems, then no one reviews which questions, files, or outputs move through the pipeline. Turn on logging, review usage, and allow only a short list of approved data connections.

Skipping basic training and a simple policy.
People guess what’s allowed, leading to inconsistent—and risky—behaviour. Publish a one-page “how we use AI here,” include it in onboarding, and name owners who check usage and costs.


AI Deployment Models

Model 1 — Secure packaged tools (fastest path to value).
Ready-made apps with business controls—ideal for broad productivity on low-to-moderate sensitivity work such as drafting, summarising, meeting notes, and internal Q&A. Examples: Microsoft Copilot for Microsoft 365, Google Workspace Gemini, Notion AI, Salesforce Einstein Copilot, ServiceNow Now Assist. Use this when speed matters and the content is not highly sensitive; step up to other models for regulated data or deeper system connections.

Model 2 — Enterprise AI services from major providers.
You access powerful models through your company’s account; your inputs aren’t used to train public systems and you can choose where processing happens. Well-suited to building your own assistants and workflows that read approved internal data. Examples: Azure OpenAI, AWS Bedrock, Google Vertex AI, OpenAI Enterprise, Anthropic for Business. Choose this for flexibility without running the underlying software yourself; consider Model 3 if you need stronger control and detailed records.

Model 3 — Managed models running inside your cloud.
The models and search components run within your own cloud environment, giving you stronger control and visibility while the vendor still manages the runtime. A good fit for confidential or regulated work where oversight and location matter. Examples: Bedrock in your AWS account, Vertex AI in your Google Cloud Platform, Azure OpenAI in your subscription, Databricks Mosaic AI, Snowflake Cortex. Use this when you need enterprise-grade control with fewer operational burdens than full self-hosting.

Model 4 — Self-hosted and open-source models.
You operate the models yourself—on-premises or in your cloud. This gives maximum control and sovereignty, at the cost of more engineering, monitoring, and testing. Suits the most sensitive use cases or IP-heavy R&D. Examples: Llama, Mistral, DBRX—supported by platforms such as Databricks, Nvidia NIM, VMware Private AI, Hugging Face, and Red Hat OpenShift AI. Use this when the business case and risk profile justify the investment and you have the talent to run it safely.


Building Blocks and How to Implement (by company size)

Essential Building blocks

A few building blocks change outcomes more than anything else. Connect AI to approved data sources through a standard “search-then-answer” approach—often called Retrieval-Augmented Generation (RAG), where the AI first looks up facts in your trusted sources and only then drafts a response.

This reduces the need to copy data into the AI system and keeps authority with your original records. Add a simple filter to remove personal or secret information before questions are sent. Control access with single sign-on and clear roles. Record questions and answers so you can review quality, fix issues, and evidence compliance. Choose processing regions deliberately and, where possible, manage your own encryption keys. Keep costs in check with team budgets and a monthly review of usage and benefits.

Large enterprises

Move fastest with a dual approach. Enable packaged tools for day-to-day productivity, and create a central runway based on enterprise AI services for most custom assistants. For sensitive domains, provide managed environments inside your cloud with the standard connection pattern, built-in filtering, and ready-made quality tests. Reserve full self-hosting for the few cases that genuinely need it. Success looks like rapid adoption, measurable improvements in time or quality, and no data-handling incidents.

Mid-market organisations

Get leverage by standardising on one enterprise AI service from your primary cloud, while selectively enabling packaged tools where they clearly save time. Offer a single reusable pattern for connecting to internal data, with logging and simple redaction built in. Keep governance light: a short policy, a quarterly review of model quality and costs, and a named owner for each assistant.

Small-Mid sized companies

Keep it simple. Use packaged tools for daily work and a single enterprise AI service for tasks that need internal data. Turn off retention of questions and answers where available, restrict connections to a small list of approved sources, and keep work inside the company account—no personal tools or copying content out. A one-page “how we use AI here,” plus a monthly check of usage and spend, is usually enough.


What success looks like

Within 90 days, 20–40% of knowledge workers are using AI for routine tasks. Teams report time saved or quality improved on specific workflows. You have zero data-handling incidents and can show auditors your data flows, access controls, and review process. Usage and costs are tracked monthly, and you’ve refined your approved-tools list based on what actually gets adopted.

You don’t need a bespoke platform or a 200-page policy to use AI safely. You need clear choices, a short playbook, and the discipline to apply it.

Where AI Is Creating the Most Value (Q4 2025)

There’s still a value gap—but leaders are breaking away. In the latest BCG work, top performers report around five times more revenue uplift and three times deeper cost reduction from AI than peers. The common thread: they don’t bolt AI onto old processes—they rewire the work. As BCG frames it, the 10-20-70 rule applies: roughly 10% technology, 20% data and models, and 70% process and organizational change. That’s where most of the value is released.

This article is for leaders deciding where to place AI bets in 2025. If you’re past “should we do AI?” and into “where do we make real money?”, this is your map.


Where the money is (cross-industry)

1) Service operations: cost and speed
AI handles simple, repeatable requests end-to-end and coaches human agents on the rest. The effect: shorter response times, fewer repeat contacts, and more consistent outcomes—without sacrificing customer experience.

2) Supply chain: forecast → plan → move
The gains show up in fewer stockouts, tighter inventories, and faster cycle times. Think demand forecasting, production planning, and dynamic routing that reacts to real-world conditions.

3) Software and engineering: throughput
Developer copilots and automated testing increase release velocity and reduce rework. You ship improvements more often, with fewer defects, and free scarce engineering time for higher-value problems.

4) HR and talent: faster funnels and better onboarding/learning
Screening, scheduling, and candidate communication are compressed from days to hours. Internal assistants support learning and workforce planning. The results: shorter time-to-hire and better conversion through each stage.

5) Marketing and sales: growing revenue
Personalization, next-best-action, and on-the-fly content creation consistently drive incremental sales. This is the most frequently reported area for measurable revenue lift.

Leadership advice: Pick two or three high-volume processes (at least one focused on cost, one on revenue). Redesign the workflow; don’t just add AI on top. Set hard metrics (cost per contact, cycle time, revenue per visit) and a 90-day checkpoint. Industrialize what works; kill what doesn’t.


Sector spotlights

Consumer industries (Retail & Consumer Packaged Goods)

Marketing and sales.

  • Personalized recommendations increase conversion and basket size; retail media programs are showing verified incremental sales.
  • AI-generated marketing content reduces production costs and speeds creative iteration across markets and channels. Mondelez reported 30-50% reduction in marketing content production costs using generative AI at scale.
  • Campaign analytics that used to take days are produced automatically, so teams run more “good bets” each quarter.

Supply chain.

  • Demand forecasting sharpens purchasing and reduces waste.
  • Production planning cuts changeovers and work-in-progress.
  • Route optimization lowers distance traveled and fuel, improving on-time delivery.

Customer service.

  • AI agents now resolve a growing share of contacts end-to-end. IKEA’s AI agent already handles 47% of all customer-service requests, freeing human colleagues to give more support on the remaining questions.
  • Agent assist gives human colleagues instant context and suggested next steps.
    The result is more issues solved on first contact, shorter wait times, and maintained satisfaction, provided clear hand-offs to humans exist for complex cases.

What to copy: Start with one flagship process in each of the three areas above; set a 90-day target; only then roll it across brands and markets with a standard playbook.


Manufacturing (non-pharma)

Predictive maintenance.
When tied into scheduling and spare-parts planning, predictive maintenance reduces unexpected stoppages and maintenance costs—foundational for higher overall equipment effectiveness (OEE).

Computer-vision quality control.
In-line visual inspection detects defects early, cutting scrap, rework, and warranty exposure. Value compounds as models learn across lines and plants.

Production scheduling.
AI continuously rebalances schedules for constraints, changeovers, and demand shifts—more throughput with fewer bottlenecks. Automotive and electronics manufacturers report 5-15% throughput gains when AI-driven scheduling handles real-time constraints.

Move to scale: Standardize data capture on the line, run one “AI plant playbook” to convergence, then replicate. Treat models as line assets with clear ownership, service levels, and a retraining cadence.


Pharmaceuticals

R&D knowledge work.
AI accelerates three high-friction areas: (1) large evidence reviews, (2) drafting protocols and clinical study reports, and (3) assembling regulatory summaries. You remove weeks from critical paths and redirect scientists to higher-value analysis.

Manufacturing and quality.
Assistants streamline batch record reviews, deviation write-ups, and quality reports. You shorten release cycles and reduce delays. Govern carefully under Good Manufacturing Practice, with humans approving final outputs.

Practical tip: Stand up an “AI for documents” capability (standardized templates, automated redaction, citation checking, audit trails) before you touch lab workflows. It pays back quickly, proves your governance model, and reduces compliance risk when you expand to higher-stakes processes.


Healthcare providers

Augment the professional; automate the routine. Radiology, pathology, and frontline clinicians benefit from AI that drafts first-pass reports, triages cases, and pre-populates documentation. Northwestern Medicine studies show approximately 15.5% average productivity gains (up to 40% in specific workflows) in radiology report completion without accuracy loss. Well-designed oversight maintains quality while reducing burnout.

Non-negotiable guardrail: Clear escalation rules for edge cases and full traceability. If a tool can’t show how it arrived at a suggestion, it shouldn’t touch a clinical decision. Establish explicit human review protocols for any AI-generated clinical content before it reaches patients or medical records.


Financial services

Banking.

  • Service and back-office work: assistants summarize documents, draft responses, and reconcile data. JPMorgan reports approximately 30% fewer servicing calls per account in targeted Consumer and Community Banking segments and 15% lower processing costs in specific workflows.
  • Risk and compliance: earlier risk flags, smarter anti-money-laundering reviews, and cleaner audit trails reduce losses and manual rework.

Insurance.

  • Claims: straight-through processing for simple claims moves from days to hours.
  • Underwriting: AI assembles files and surfaces risk signals so underwriters focus on complex judgment.
  • Back office: finance, procurement, and HR automations deliver steady, compounding savings.

Leadership note: Treat service assistants and claims bots as products with roadmaps and release notes—not projects. That discipline keeps quality high as coverage expands.


Professional services (legal, consulting, accounting)

Document-heavy work is being rebuilt: contract and filing review, research synthesis, proposal generation. Well-scoped processes often see 40–60% time savings. Major law firms report contract review cycles compressed from 8-12 hours to 2-3 hours for standard agreements, with associates redirected to judgment-heavy analysis and client advisory work.

Play to win: Build a governed retrieval layer over prior matters, proposals, and playbooks—your firm’s institutional memory—then give every practitioner an assistant that can reason over it.
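A governed retrieval layer of this kind can be sketched minimally. The toy keyword index and access tags below are illustrative assumptions; a production system would use embeddings and real entitlements:

```python
# Hypothetical sketch of a governed retrieval layer: documents carry access
# tags, and retrieval filters by the practitioner's groups before ranking
# by simple keyword overlap. All names and tags are illustrative.
def retrieve(query, corpus, user_groups):
    scored = []
    for doc in corpus:
        if doc["access"] not in user_groups:
            continue  # governance: never surface documents the user can't see
        overlap = len(set(query.lower().split()) & set(doc["text"].lower().split()))
        if overlap:
            scored.append((overlap, doc["id"]))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

corpus = [
    {"id": "matter-1", "access": "corporate", "text": "merger agreement review checklist"},
    {"id": "matter-2", "access": "restricted", "text": "merger dispute strategy"},
]
print(retrieve("merger agreement", corpus, {"corporate"}))
```

The design choice worth copying is that governance sits inside the retrieval call, not around it: the assistant can only reason over what the user is entitled to see.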


Energy and utilities

Grid and renewables.
AI improves demand and renewable forecasting and helps balance the grid in real time. Autonomous inspections (drones plus computer vision) speed asset checks by 60-70% and reduce hazards. Predictive maintenance on critical infrastructure prevents outages—utilities report 20-30% reduction in unplanned downtime when AI is tied into work order systems and cuts truck rolls (field service visits).

How to scale: Start with one corridor or substation, prove inspection cycle time and fault detection, then expand with a standard data schema so models learn from every site.


Next Steps (practical and measurable)

1) Choose three processes—one for cost, one for revenue, one enabler.
Examples:

  • Cost: customer service automation, predictive maintenance, the month-end finance close.
  • Revenue: personalized offers, “next-best-action” in sales, improved online merchandising.
  • Enabler: developer assistants for code and tests, HR screening and scheduling.
    Write a one-line success metric and a quarterly target for each (e.g., “reduce average response time by 30%,” “increase conversion by 2 points,” “ship weekly instead of bi-weekly”).

2) Redesign the work, not just the process map.
Decide explicitly: what moves to the machine, what stays with people, where the hand-off happens, and what the quality gate is. Train for it. Incentivize it.

3) Industrialize fast.
Stand up a small platform team for identity, data access, monitoring, and policy. Establish lightweight model governance. Create a change backbone (playbooks, enablement, internal communications) so each new team ramps faster than the last.

4) Publish a value dashboard.
Measure cash, not demos: cost per contact, cycle time, on-shelf availability, release frequency, time-to-hire, revenue per visit. Baseline these metrics before launch—most teams skip this step and cannot prove impact six months later when challenged. Review monthly. Retire anything that doesn’t move the number.

5) Keep humans in the loop where it matters.
Customer experience, safety, financial risk, and regulatory exposure all require clear human decision points. Automate confidently—but design escalation paths from day one.
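The baseline discipline in step 4 can be sketched in a few lines of Python; the metric names and figures below are purely illustrative:

```python
# Illustrative sketch: capture a pre-launch baseline for each metric so
# impact can be proven later. Metric names and numbers are made up.
baseline = {"cost_per_contact": 6.40, "cycle_time_days": 12.0, "releases_per_month": 2}
current  = {"cost_per_contact": 4.80, "cycle_time_days": 9.0,  "releases_per_month": 4}

def value_dashboard(baseline, current):
    """Return percent change vs. baseline per metric (negative = reduction)."""
    return {
        metric: round(100 * (current[metric] - baseline[metric]) / baseline[metric], 1)
        for metric in baseline
    }

print(value_dashboard(baseline, current))
```

Without the `baseline` snapshot taken before launch, the percent-change calculation is impossible, which is exactly the trap the step warns about.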


Final word

In 2025, AI pays where volume is high and rules are clear (service, supply chain, HR, engineering), and where personalization drives spend (marketing and sales). The winners aren’t “using AI.” They are re-staging how the work happens—and they can prove it on the P&L.

From AI-Enabled to AI-Centered – Reimagining How Enterprises Operate

Enterprises around the world are racing to deploy generative AI. Yet most remain stuck in the pilot trap: experimenting with copilots and narrow use cases while legacy operating models, data silos, and governance structures stay intact. The results are incremental: efficiency gains without strategic reinvention.

With rapidly developing context-aware AI, we can chart a different course: making AI not an add-on, but the center of how the enterprise thinks, decides, and operates. This shift, captured powerfully in The AI-Centered Enterprise (ACE) by Ram Bala, Natarajan Balasubramanian, and Amit Joshi (IMD), signals the next evolution in business design: from AI-enabled to AI-centered.

The premise is bold. Instead of humans using AI tools to perform discrete tasks, the enterprise itself becomes an intelligent system, continuously sensing context, understanding intent, and orchestrating action through networks of people and AI agents. This is the next-generation operating model for the age of context-aware intelligence, and it will separate tomorrow’s leaders from those merely experimenting today.


What an AI-Centered Enterprise Is

At its core, an AI-centered enterprise is built around Context-Aware AI (CAI): systems that understand not only content (what is being said) but also intent (why it is being said). These systems operate across three layers:

  • Interaction layer: where humans and AI collaborate through natural conversation, document exchange, or digital workflows (ACE).
  • Execution layer: where tasks and processes are performed by autonomous or semi-autonomous agents.
  • Governance layer: where policies, accountability, and ethical guardrails are embedded into the AI fabric.

The book introduces the idea of the “unshackled enterprise” — one no longer bound by rigid hierarchies and manual coordination. Instead, work flows dynamically through AI-mediated interactions that connect needs with capabilities across the organization. The result is a company that can learn, decide, and act at digital speed — not by scaling headcount, but by scaling intelligence.

This is a profound departure from current “AI-enabled” organizations, which mostly deploy AI as assistants within traditional structures. In an AI-centered enterprise, AI becomes the organizing principle, the invisible infrastructure that drives how value is created, decisions are made, and work is executed.


How It Differs from Today’s Experiments

Today’s enterprise AI landscape is dominated by point pilots and embedded copilots: productivity boosters layered onto existing processes. They enhance efficiency but rarely transform the logic of value creation.

An AI-centered enterprise, by contrast, rebuilds the transaction system of the organization around intelligence. Key differences include:

  • From tools to infrastructure: AI doesn’t automate isolated tasks; it coordinates entire workflows, from matching expertise to demand, to ensuring compliance, to optimizing outcomes.
  • From structured data to unstructured cognition: Traditional analytics rely on structured databases. AI-centered systems start with unstructured information (emails, documents, chats), extracting relationships and meaning through knowledge graphs and retrieval-augmented reasoning.
  • From pilots to internal marketplaces: Instead of predefined processes, AI mediates dynamic marketplaces where supply and demand for skills, resources, and data meet in real time, guided by the enterprise’s goals and policies.

The result is a shift from human-managed bureaucracy to AI-coordinated agility. Decision speed increases, friction falls, and collaboration scales naturally across boundaries.


What It Takes: The Capability and Governance Stack

The authors of The AI-Centered Enterprise propose a pragmatic framework for this transformation, the 3Cs: Calibrate, Clarify, and Channelize.

  1. Calibrate – Understand the types of AI your business requires. What decisions depend on structured vs. unstructured data? What precision or control is needed? This step ensures technology choices fit business context.
  2. Clarify – Map your value creation network: where do decisions happen, and how could context-aware intelligence change them? This phase surfaces where AI can augment, automate, or orchestrate work for tangible impact.
  3. Channelize – Move from experimentation to scaled execution. Build a repeatable path for deployment, governance, and continuous improvement. Focus on high-readiness, high-impact areas first to build credibility and momentum.

Underneath the 3Cs lies a capability stack that blends data engineering, knowledge representation, model orchestration, and responsible governance.

  • Context capture: unify data, documents, and interactions into a living knowledge graph.
  • Agentic orchestration: deploy systems of task, dialogue, and decision agents that coordinate across domains.
  • Policy and observability: embed transparency, traceability, and human oversight into every layer.

Organizationally, the AI-centered journey requires anchored agility — a balance between central guardrails (architecture, ethics, security) and federated innovation (business-owned use cases). As with digital transformations before it, success depends as much on leadership and learning as on technology.


Comparative Perspectives — and Where the Field Is Heading

The ideas in The AI-Centered Enterprise align with a broader shift seen across leading research and consulting work: a convergence toward AI as the enterprise operating system.

McKinsey: The Rise of the Agentic Organization

McKinsey describes the next evolution as the agentic enterprise: organizations where humans work alongside fleets of intelligent agents embedded throughout workflows. Early adopters are already redesigning decision rights, funding models, and incentives to harness this new form of distributed intelligence.
Their State of AI 2025 shows that firms capturing the most value have moved beyond pilots to process rewiring and AI governance, embedding AI directly into operations, not as a service layer.

BCG: From Pilots to “Future-Built” Firms

BCG’s 2025 research (Sep 2025) finds that only about 5% of companies currently realize sustainable AI value at scale. Those that do are “future-built”: treating AI as a capability, not a project. These leaders productize internal platforms, reuse components across business lines, and dedicate investment to AI agents, which BCG estimates already generate 17% of enterprise AI value, projected to reach nearly 30% by 2028.
This mirrors the book’s view of context-aware intelligence and marketplaces as the next sources of competitive advantage.

Harvard Business Review: Strategy and Human-AI Collaboration

HBR provides the strategic frame. In Competing in the Age of AI, Iansiti and Lakhani show how AI removes the traditional constraints of scale, scope, and learning, allowing organizations to grow exponentially without structural drag. Wilson and Daugherty’s Collaborative Intelligence adds the human dimension, redefining roles so that humans shift from operators to orchestrators of intelligent systems.

Convergence – A New Operating System for the Enterprise

Across these perspectives, the trajectory is clear:

  • AI is moving from standalone tools to coordination-system capabilities.
  • Work will increasingly flow through context-aware agents that understand intent and execute autonomously.
  • Leadership attention is shifting from proof-of-concept to operating-model redesign: governance, role architecture, and capability building.
  • The competitive gap will widen between firms that use AI to automate tasks and those that rebuild the logic of their enterprise around intelligence.

In short, the AI-centered enterprise is not a future vision — it is the direction of travel for every organization serious about reinvention in the next five years.


The AI-Centered Enterprise – A Refined Summary

The AI-Centered Enterprise (Bala, Balasubramanian & Joshi, 2025) offers one of the clearest playbooks yet for this new organisational architecture. The authors begin by defining the limitations of today’s AI adoption — fragmented pilots, a dependence on structured data, and an overreliance on human intermediaries to bridge data, systems, and decisions.

They introduce Context-Aware AI (CAI) as the breakthrough: AI that understands not just information but the intent and context behind it, enabling meaning to flow seamlessly across functions. CAI underpins an “unshackled enterprise,” where collaboration, decision-making, and execution happen fluidly across digital boundaries.

The book outlines three core principles:

  1. Perceive context: Use knowledge graphs and natural language understanding to derive meaning from unstructured information — the true foundation of enterprise knowledge.
  2. Act with intent: Deploy AI agents that can interpret business objectives, not just execute instructions.
  3. Continuously calibrate: Maintain a human-in-the-loop approach to governance, ensuring AI decisions stay aligned with strategy and ethics.

Implementation follows the 3C framework — Calibrate, Clarify, Channelize — enabling leaders to progress from experimentation to embedded capability.

The authors conclude that the real frontier of AI is not smarter tools but smarter enterprises: organizations designed to sense, reason, and act as coherent systems of intelligence.


Closing Reflection

For executives navigating transformation, The AI-Centered Enterprise reframes the challenge. The question is no longer how to deploy AI efficiently, but how to redesign the enterprise so intelligence becomes its organizing logic.

Those who start now, building context-aware foundations, adopting agentic operating models, and redefining how humans and machines collaborate, will not just harness AI. They will become AI-centered enterprises: adaptive, scalable, and truly intelligent by design.

How AI is Reshaping Human Work, Teams, and Organisational Design

The implications of AI are profound: when individuals can deliver team-level output with AI, organisations must rethink not just productivity, but the very design of work and teams. A recent Harvard Business School and Wharton field experiment titled The Cybernetic Teammate offers one of the clearest demonstrations of this shift. Conducted with 776 professionals at Procter & Gamble, the study compared individuals and teams working on real product-innovation challenges, both with and without access to generative AI.

The results were striking:

  • Individuals using AI performed as well as, or better than, human teams without AI.
  • Teams using AI performed best of all.
  • AI also balanced out disciplinary biases—commercial and technical professionals produced more integrated, higher-quality outputs when assisted by AI.

In short, AI amplified human capability at both the individual and collective level. It became a multiplier of creativity, insight, and balance—reshaping the traditional boundaries of teamwork and expertise.

The Evidence Is Converging

Other large-scale studies reinforce this picture. A Harvard–BCG experiment showed consultants using GPT-4 were 12% more productive, 25% faster, and delivered work rated 40% higher in quality for tasks within the model’s “competence frontier.”


How Work Will Be Done Differently

These findings signal a fundamental redesign in how work is organised. The dominant model—teams collaborating to produce output—is evolving toward individual-with-AI first, followed by team integration and validation.

A typical workflow may now look like this:

AI-assisted ideation → human synthesis → AI refinement → human decision.

Work becomes more iterative, asynchronous, and cognitively distributed. Human collaboration increasingly occurs through the medium of AI: teams co-create ideas, share prompt libraries, and build upon each other’s AI-generated drafts.
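The workflow above can be sketched as a simple pipeline. The stage functions below are placeholders: in practice, the AI stages would call a model and the human stages would route work to a reviewer:

```python
# Illustrative pipeline for: AI-assisted ideation -> human synthesis ->
# AI refinement -> human decision. All stage logic is a stand-in.
def ai_ideation(brief):      return {"brief": brief, "ideas": ["option A", "option B"]}
def human_synthesis(draft):  return {**draft, "chosen": draft["ideas"][0]}
def ai_refinement(draft):    return {**draft, "refined": draft["chosen"] + " (polished)"}
def human_decision(draft):   return {**draft, "approved": True}

def workflow(brief):
    result = brief
    for stage in (ai_ideation, human_synthesis, ai_refinement, human_decision):
        result = stage(result)  # each stage enriches and hands off the work item
    return result

print(workflow("new product concept"))
```

Note that the pipeline alternates machine and human stages rather than batching all AI work up front; that alternation is what makes the work iterative and cognitively distributed.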

The BCG study introduces a useful distinction:

  • Inside the AI frontier: tasks within the model’s competence—ideation, synthesis, summarisation—where AI can take the lead.
  • Outside the AI frontier: tasks requiring novel reasoning, complex judgment, or proprietary context—where human expertise must anchor the process.

Future roles will be defined less by function and more by how individuals navigate that frontier: knowing when to rely on AI and when to override it. Skills like critical reasoning, verification, and synthesis will matter more than rote expertise.


Implications for Large Enterprises

For established organisations, the shift toward AI-augmented work changes the anatomy of structure, leadership, and learning.

1. Flatter, more empowered structures.
AI copilots widen managerial spans by automating coordination and reporting. However, they also increase the need for judgmental oversight—requiring managers who coach, review, and integrate rather than micromanage.

2. Redefined middle-management roles.
The traditional coordinator role gives way to integrator and quality gatekeeper. Managers become stewards of method and culture rather than traffic controllers.

3. Governance at the “AI frontier.”
Leaders must define clear rules of engagement: what tasks can be automated, which require human review, and what data or models are approved. This “model–method–human” control system ensures both productivity and trust.

4. A new learning agenda.
Reskilling moves from technical training to cognitive fluency: prompting, evaluating, interpreting, and combining AI insights with business judgment. The AI-literate professional becomes the new organisational backbone.

5. Quality and performance metrics evolve.
Volume and throughput give way to quality, cycle time, rework reduction, and bias detection—metrics aligned with the new blend of human and machine contribution.

In short, AI doesn’t remove management—it redefines it around sense-making, coaching, and cultural cohesion.


Implications for Startups and Scale-Ups

While enterprises grapple with governance and reskilling, startups are already operating in an AI-native way.

Evidence from recent natural experiments shows that AI-enabled startups raise funding faster and with leaner teams. The cost of experimentation drops, enabling more rapid iteration but also more intense competition.

The typical AI-native startup now runs with a small human core and an AI-agent ecosystem handling customer support, QA, and documentation. The operating model is flatter, more fluid, and relentlessly data-driven.

Yet the advantage is not automatic. As entry barriers fall, differentiation depends on execution, brand, and customer intimacy. Startups that harness AI for learning loops—testing, improving, and scaling through real-time feedback—will dominate the next wave of digital industries.


Leadership Imperatives – Building AI-Enabled Work Systems

For leaders, the challenge is no longer whether to use AI, but how to design work and culture around it. Five imperatives stand out:

  1. Redesign workflows, not just add tools. Map where AI fits within existing processes and where human oversight is non-negotiable.
  2. Build the complements. Create shared prompt libraries, custom GPTs, structured review protocols, and access to verified data.
  3. Run controlled pilots. Test AI augmentation in defined workstreams, measure speed, quality, and engagement, and scale what works.
  4. Empower and educate. Treat AI literacy as a strategic skill—every employee a prompt engineer, every manager a sense-maker.
  5. Lead the culture shift. Encourage experimentation, transparency, and open dialogue about human-machine collaboration.

Closing Thought

AI will not replace humans or teams. But it will transform how humans and teams create value together.

The future belongs to organisations that treat AI not as an external technology, but as an integral part of their work design and learning system. The next generation of high-performing enterprises—large and small—will be those that master this new choreography between human judgment and machine capability.

AI won’t replace teams—but teams that know how to work with AI will outperform those that don’t.

More on this in one of my next newsletters.

The AI Strategy Imperative: Why Act Now

Two weeks ago, I completed IMD’s AI Strategy & Implementation program. It made the “act now” imperative unmistakable. In this newsletter I share the overarching insights I took away; in upcoming issues I’ll go deeper into specific topics and tools we used.


AI is no longer a tooling choice. It’s a shift in distribution, decision-making, and work design that will create new winners and losers. Leaders who move now—anchoring execution in clear problems, strong data foundations, and human–AI teaming—will compound advantage while others get trapped in pilots and platform dependency.


1) Why act now: the competitive reality

Distribution is changing. AI assistants and agentic workflows increasingly mediate buying journeys. If your brand isn’t represented in answers and automations, you forfeit visibility, traffic, and margin. This is a channel economics shift: AI determines which brands are surfaced—and which are invisible.

Platforms are consolidating power. Hyperscalers are embedding AI across their offerings. You’ll benefit from their acceleration, but your defensibility won’t come from platforms your competitors can also buy. The durable moat is your proprietary data, decision logic, and learning loops you control—not a longer vendor list.

Agents are getting real. Think of agents as “an algorithm that applies algorithms.” They decompose work into steps, call tools/APIs, and complete tasks with minimal supervision. Agent architectures will reshape processes, controls, and talent—pushing leaders to design for human–AI teams rather than bolt‑on copilots.


2) The paradox: move fast and build right

The cost of waiting. Competitors pairing people with AI deliver faster at lower cost and start absorbing activities you still outsource. As internal production costs fall faster than coordination costs, vertical integration becomes attractive—accelerated by automation. Late movers face margin pressure and share erosion.

The risk of rushing. Many efforts stall because they “build castles on quicksand”—shiny proofs‑of‑concept on weak data and process foundations. Value doesn’t materialize, trust erodes, and budgets freeze. Urgency must be paired with disciplined follow-up so speed creates compounding learning.


3) A durable path to value: the 5‑Box Implementation Framework

A simple path from strategy deck to shipped value:

  1. Problem. Define a single business problem tied to P&L or experience outcomes. Write the metric up front; make the use case narrow enough to ship quickly.
  2. Data. Map sources, quality, access, and ownership. Decide what you must own versus can borrow; invest early in clean, governed data because it is the most sustainable differentiator.
  3. Tools. Choose the lightest viable model/agent and the minimum integration needed to achieve the outcome; keep it simple.
  4. People. Form cross‑functional teams (domain expertise + data + engineering + change) with one accountable owner. Team design—not individual heroics—drives performance.
  5. Feedback loops. Instrument production to compare predicted vs. actual outcomes. The delta gives valuable insights and becomes new training data.
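Step 5 of the framework can be sketched as follows; the field names and error threshold are illustrative assumptions, not a prescribed implementation:

```python
# Hedged sketch of a feedback loop: log predicted vs. actual outcomes in
# production, measure the delta, and flag large misses as candidates for
# retraining. The 20% relative-error threshold is an assumption.
def feedback_loop(records, error_threshold=0.2):
    """records: list of {'predicted': float, 'actual': float, ...}."""
    retraining_candidates = []
    for r in records:
        delta = abs(r["predicted"] - r["actual"])
        if r["actual"] and delta / abs(r["actual"]) > error_threshold:
            retraining_candidates.append({**r, "delta": delta})  # becomes training data
    return retraining_candidates

production_log = [
    {"predicted": 100.0, "actual": 98.0},   # close enough: no action
    {"predicted": 100.0, "actual": 60.0},   # large miss: feed back into training
]
print(feedback_loop(production_log))
```

The delta is the learning loop: each flagged record tells the team where the model diverges from reality and supplies the example needed to close the gap.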

Your defensive moat is data + people + decisions + learning loops, not your vendor list.


4) Moving the Human Workforce to more Complex Tasks

While AI absorbs simple and complicated work (routine tasks, prediction, pattern recognition), the human edge shifts decisively to complex and chaotic problems—where cause and effect are only clear in retrospect or not at all. This economic reality forces immediate investment in people as internal work is increasingly handled by AI–human teams.

The immediate talent pivot. Leaders must signal—and codify—new “complexity competencies”: adaptive problem‑solving, systems thinking, comfort with ambiguity, and AI product‑ownership (defining use cases, data needs, acceptance criteria, and evaluation).

Organizational design for learning.

  • Security: Build psychological safety so smart experiments are rewarded and failures fuel learning, not blame.
  • Convenience: Make adoption of new AI tools easy—frictionless access, clear guidance, and default enablement.
  • Process: A weak human with a tool and a better process will outperform a strong human with a tool and a worse process. Define roles, handoffs, and measurement so teams learn in the loop.

5) Where ROI shows up first

There is much discussion about where AI really shows its benefits. Four areas see consistently reported gains:

Content. Marketing and knowledge operations see immediate throughput gains and more consistent quality. Treat this as a production system: govern sources, version prompts/flows, and measure impact.

Code. Assistance, testing, and remediation compress cycle time and reduce defects. Success depends on clear guardrails, reproducible evaluation, and tight feedback from production incidents into your patterns.

Customer. Service and sales enablement benefit from faster resolution and personalization at scale. Start with narrow intents, then expand coverage as accuracy and routing improve.

Creative. Design, research, and planning benefit from rapid exploration and option value. Use agentic research assistants with human review to widen the solution space before you converge.


6) Organize to scale without chaos

Govern the reality, not the slide. Shadow AI already exists. Enable it safely with approved toolkits, lightweight guardrails, and clear data rules—so exploration happens inside the tent, not outside it.

CoE vs. federation. Avoid the “cost‑center CoE” trap. Stand up a small enablement core (standards, evaluation, patterns), but push delivery into business‑owned pods that share libraries and reviews. This balances consistency with throughput.

Human + AI teams. Process design beats heroics. Make handoffs explicit, instrument outcomes, and build psychological safety so teams learn in the loop. A weak human with a machine and a better process will outperform a strong human with a machine and a worse process.


What this means for leaders

  • Move talent to handle complexity. Codify new competencies (adaptive problem‑solving, systems thinking, comfort with ambiguity, AI product‑ownership) and design organizational systems that accelerate learning (security, convenience, process).
  • Your moat is data + people + decisions + learning loops. Platforms accelerate you, but they’re available to everyone. Proprietary, well‑governed data feeding instrumented processes is what compounds.
  • Ship value early; strengthen foundations as you scale. Start where ROI is proven (content, code, customer, creative), then use that momentum to fund data quality and governance.
  • Design for agents and teams now. Architect processes assuming agents will do steps of work and humans will supervise, escalate, and improve the system. That’s how you create repeatable outcomes.

Lifelong Learning in the Age of AI – My Playbook

In September 2025, I received two diplomas: IMD’s AI Strategy & Implementation and Nyenrode University’s Corporate Governance for Supervisory Boards. I am proud of both—more importantly, they cap off a period in which I deliberately rebuilt how I learn.

With AI accelerating change and putting top-tier knowledge at everyone’s fingertips, the edge goes to leaders who learn—and apply—faster than the market moves. In this issue I am not writing theory; I am sharing my learning journey of the past six months—what I did, what worked, and the routine I will keep using. If you are a leader, I hope this helps you design a learning system that fits a busy executive life.


My Learning System – 3 pillars

1) Structured learning

This helped me to gain the required depth:

  • IMD — AI Strategy & Implementation. I connected strategy to execution: where AI creates value across the business, and how to move from pilots to scaled outcomes. In upcoming newsletters, I will share insights on specific topics we went deep on in this course.
  • Nyenrode — Corporate Governance for Supervisory Boards. I deepened my view on board-level oversight—roles and duties, risk/compliance, performance monitoring, and strategic oversight. I authored my final paper on how to close the digital gap in supervisory boards (see also my earlier article).
  • Google/Kaggle’s 5-day Generative AI Intensive. Hands-on labs demystified how large language models work: what is under the hood, why prompt quality matters, where workflows can break, and how to evaluate outputs against business goals. It gave me an understanding of how to improve my use of these models.

2) Curated sources

This extended the breadth of my understanding of the use of AI.

2a. Books

Below I give a few examples; more book summaries and reviews can be found on my website: www.bestofdigitaltransformation.com/digital-ai-insights.

  • Co-Intelligence: a pragmatic mindset for working with AI—experiment, reflect, iterate.
  • Human + Machine: how to redesign processes around human–AI teaming rather than bolt AI onto old workflows.
  • The AI-Savvy Leader: what executives need to know to steer outcomes without needing to code.

2b. Research & articles
I built a personal information base with research from: HBR, MIT, IMD, Gartner, plus selected pieces from McKinsey, BCG, Strategy&, Deloitte, and EY. This keeps me grounded in capability shifts, operating-model implications, and the evolving landscape.

2c. Podcasts & newsletters
Two that stuck: AI Daily Brief and Everyday AI. Short, practical audio overviews with companion newsletters so I can find and revisit sources. They give me a quick daily pulse without drowning in feeds.

3) AI as my tutor

I am using AI to get personalised learning support.

3a. Explain concepts. I use AI to clarify ideas, contrast approaches, and test solutions using examples from my context.
3b. Create learning plans. I ask for step-by-step learning journeys with milestones and practice tasks tailored to current projects.
3c. Drive my understanding. I use different models to create learning content, provide assignments, and quiz me on my understanding.


How my journey unfolded

Here is how it played out.

1) Started experimenting with ChatGPT.
I was not an early adopter; I joined when GPT-4 was already strong. Like many, I did not fully trust it at first. I began with simple questions and asked the model to show how it interpreted my prompts. That built confidence without creating risk or frustration.

2) Built foundations with books.
I read books like Co-Intelligence, Human + Machine, and The AI-Savvy Leader. These created a common understanding of where AI helps (and does not), how to pair humans and machines, and how to organise for impact. For each book I wrote a review to anchor my learnings and share them on my website.

3) Added research and articles.
I set up a repository with research across HBR/MIT/IMD/Gartner and selected consulting research. This kept me anchored in evidence and applications, and helped me track the operational implications for strategy, data, and governance.

4) Tried additional models (Gemini and Claude).
Rather than picking a “winner,” I used them side by side on real tasks. The value was in contrast—seeing how different models frame the same question, then improving the final answer by combining perspectives. Letting models critique each other surfaced blind spots.

5) Went deep with Google + Kaggle.
The 5-day intensive course clarified what is under the hood: tokens/vectors, why prompts behave the way they do, where workflows tend to break, and how to evaluate outputs beyond “sounds plausible.” The exercises translated directly into better prompt design and started my understanding of how agents work.

6) Used NotebookLM for focused learning.
For my Nyenrode paper, I uploaded the key articles and interacted only with that corpus. NotebookLM generated grounded summaries, surfaced insights I might have missed, and reduced the risk of invented citations (by sticking to the uploaded resources). The auto-generated “podcast” is one of the coolest features I experienced and really helps to learn about the content.

7) Added daily podcasts/newsletters to stay current.
The news volume on AI is impossible to track end-to-end. AI Daily Brief and Everyday AI give me a quick scan each morning and links worth saving for later deep dives. That is the difference between staying aware and constantly feeling behind.

8) Learned new tools and patterns at IMD.

  • DeepSeek helped me debug complex requests by showing how the reasoning model interpreted my prompt—a fantastic way to unravel complex problems.
  • Agentic models like Manus showed the next step: chaining actions and tools to complete tasks end-to-end.
  • CustomGPTs (within today’s LLMs) let me encode my context, tone, and recurring workflows, boosting consistency and speed across repeated tasks.

Bring it together with a realistic cadence.

Leaders do not need another to-do list; they need a routine that works. Here is the rhythm I am using now:

Daily

  • Skim one high-signal newsletter or listen to a podcast.
  • Capture questions to explore later.
  • Learn by doing with the various tools.

Weekly

  • Learn: read one or more papers/articles on various AI-related topics.
  • Apply: use one idea on a live problem; interact with AI to go deeper.
  • Share: create my weekly newsletter based on my learnings.

Monthly

  • Pick one learning topic and read a number of primary sources, not just summaries.
  • Draft an experiment with goal, scope, success metric, risks, and data needs, using AI to pressure-test assumptions.
  • Review with thought leaders/colleagues for challenge and alignment.

Quarterly

  • Read at least one book that expands my mental models.
  • Create a summary for my network. Teaching others cements my own understanding.

(Semi-)Annually

  • Add a structured program or certificate to go deep and to benefit from peer debate.

Closing

The AI era compresses the shelf life of knowledge. Waiting for a single course is no longer enough. What works is a learning system: structured learning for depth, curated sources for breadth, and AI as your tutor for speed. That has been my last six months, and it is a routine I will continue.

Consultancy, Rewired: AI’s Impact on Consulting Firms and What Their Clients Should Expect

The bottom line: consulting is not going away. It is changing—fast. AI removes a lot of manual work and shifts the focus to speed, reusable tools, and results that can be measured. This has consequences for how firms are organised and how clients buy and use consulting.


What HBR says

The main message: AI is reshaping the structure of consulting firms. Tasks that used to occupy many junior people—research, analysis, and first-pass modelling—are now largely automated. Teams get smaller and more focused. Think of a move from a wide pyramid to a slimmer column.

New human roles matter more: people who frame the problem, translate AI insights into decisions, and work with executives to make change happen. HBR also points to a new wave of AI-native boutiques. These firms start lean, build reusable assets, and aim for outcomes rather than volume of slides.

What The Economist says

The emphasis here is on client expectations and firm economics. Clients want proof of impact, not page counts. If AI can automate a lot of the production work, large firms must show where they still create unique value. That means clearer strategies, simpler delivery models, and pricing that links fees to outcomes.

The coverage also suggests this is a structural shift, not a short-term cycle. Big brands will need to combine their access and experience with technology, reusable assets, and strong governance to stay ahead.


What AI can do in consulting — now vs. next (practical view)

Now

  • Discovery & synthesis. AI can sweep through filings, research, transcripts, and internal knowledge bases to cluster themes, extract evidence with citations, and surface red flags. This compresses the preparation phase, so teams spend their time on framing the problem and its implications.
  • First-pass quantification & modelling. It produces draft market models and sensitivity analyses that consultants then stress-test. The benefit isn’t perfect numbers; it’s cycle-time—from question to a defendable starting point—in hours, not days.
  • Deliverables at speed. From storylines to slide drafts and exhibits, AI enforces structure and house style, handles versioning, and catches inconsistencies. Human effort shifts to message clarity, executive alignment, and implications for decision makers.
  • Program operations & governance. Agents can maintain risk and issue logs, summarize meetings, chase actions, and prepare steering packs. Leaders can use meeting time for choices, not status updates.
  • Knowledge retrieval & reuse. Firm copilots bring up relevant cases, benchmarks, and experts. Reuse becomes normal, improving speed and consistency across engagements.

Next (12–24 months)

  • Agentic due diligence. Multi-agent pipelines will triage vast data sets (news, filings, call transcripts), propose claims with evidence trails, and flag anomalies for partner review—compressing weeks to days while keeping human judgment in the loop.
  • Scenario studios and digital twins. Reusable models (pricing, supply, workforce) will let executives explore “what-if” choices live, improving decision speed and buy-in.
  • Operate / managed AI. Advisory will bundle with run-time AI services (build-run-transfer), priced on SLAs or outcome measures, linking fees to performance after go-live.
  • Scaled change support. Chat-based enablement and role-tailored nudges will help people adopt new behaviors at scale; consultants curate and calibrate content and fine-tune interventions instead of running endless classroom sessions.

Reality check: enterprise data quality, integration, and model-risk constraints keep humans firmly in the loop. The best designs make this explicit with approvals, audit trails, and guardrails.


Five industry scenarios (2025–2030)

  1. AI-Accelerated Classic. The big firms keep CXO access but run leaner teams; economics rely on IP-based assets, and pricing shifts from hours to outcomes.
  2. Hourglass Market. Strong positions at the top (large integrators) and at the bottom (specialist boutiques). The middle gets squeezed as clients self-serve standard analysis.
  3. Productised & Operate. Advice comes with data, models, and managed services. Contracts include service levels and shared-savings, tying value to real-world results.
  4. Client-First Platforms. Companies build internal AI studios and bring in targeted experts. Firms must plug into client platforms and compete on speed, trust, and distinctive assets.
  5. AI-Native Agencies Rise. New entrants born with automation-first workflows and thin layers scale quickly—resetting expectations of speed, price-performance, and what a “team” looks like.

What clients should ask for (and firms should offer)

  • Ask for assets, not documents. Request reusable data, models, and playbooks that you keep using after the engagement—and specify this in the SOW.
  • Insist on transparency. Demand visibility into data sources, prompt chains, evaluation methods, and guardrails so you can trust, govern, and scale what’s built.
  • Design for capability transfer. Make enablement, documentation, and handover part of the scope with clear acceptance criteria.
  • Outcome-linked pricing where possible. Start with a pilot and clear success metrics; scale with contracts tied to results or service levels.

Close

AI is changing both the shape of consulting firms and the way organisations use them. Smaller teams, reusable assets, and outcome focus will define the winners.

From Org Charts to Work Charts – Designing for Hybrid Human–Agent Organisations

The org chart is no longer the blueprint for how value gets created. As Microsoft’s Asha Sharma puts it, “the org chart needs to become the work chart.” When AI agents begin to own real slices of execution—preparing customer interactions, triaging tickets, validating invoices—structure must follow the flow of work, not the hierarchy of titles. This newsletter lays out what that means for leaders and how to move, decisively, from boxes to flows.


Why this is relevant now

Agents are leaving the lab. The conversation has shifted from “pilot a chatbot” to “re-architect how we deliver outcomes.” Boards and executive teams are pushing beyond experiments toward embedded agents in sales, service, finance, and supply chain. This is not a tooling implementation—it’s an operating-model change.

Hierarchy is flattening. When routine coordination and status reporting are automated, you need fewer layers to move information and make decisions. Roles compress; accountabilities become clearer; cycle times shrink. The management burden doesn’t disappear—it changes. Leaders spend less time collecting updates and more time setting direction, coaching, and owning outcomes.

Enterprises scale. AI-native “tiny teams” design around flows—the sequence of steps that create value—rather than traditional functions. Large organizations shouldn’t copy their size; they should copy this unit of design. Work Charts make each flow explicit, assign human and agent owners, and let you govern and scale it across the enterprise.


What is a Work Chart?

A Work Chart is a living map of how value is produced—linking outcomes → end-to-end flows → tasks → handoffs—and explicitly assigning human owners and agent operators at each step. Where an org chart shows who reports to whom, a Work Chart shows:

  • Where the work happens – the flow and its stages
  • Who is accountable – named human owners of record
  • What is automated – agents with charters and boundaries
  • Which systems/data/policies apply – the plumbing and guardrails
  • How performance is measured – SLAs, exceptions, error/rework, latency

A Work Chart is your work graph made explicit—connecting goals, people, and permissions so agents can act with context and leaders can govern outcomes.


Transformation at every level

Board / Executive Committee
Set policy for non-human resources (NHRs) just as you do for capital and people. Define decision rights, guardrails, and budgets (compute/tokens). Require blended KPIs—speed, cost, risk, quality—reported for human–agent flows, not just departments. Make Work Charts a standing artifact in performance reviews.

Enterprise / Portfolio
Shift from function-first projects to capability platforms (retrieval, orchestration, evaluation, observability) that any BU can consume. Keep a registry of approved agents and a flow inventory so portfolio decisions always show which flows, agents, and data they affect. Treat major flow changes like product releases—versioned, reversible, and measured.

Business Units / Functions
Turn priority processes into agent-backed services with clear SLAs and a named human owner. Publish inputs/outputs, boundaries (what the agent may and may not do), and escalation paths. You are not “installing AI”; you’re standing up services that can be governed and improved.

Teams
Maintain an Agent Roster (purpose, inputs, outputs, boundaries, logs). Fold Work Chart updates into existing rituals (standups, QBRs). Managers spend less time on status and more on coaching, exception handling, and continuous improvement of the flow.

Individuals
Define personal work charts for each role (the 5–7 recurring flows they own) and the agents they orchestrate. Expect role drift toward judgment, relationships, and stewardship of AI outcomes.


Design principles – what “good” looks like

  1. Outcome-first. Start from customer journeys and Objectives and Key Results (OKRs); redesign flows to meet them.
  2. Agents as first-class actors. Every agent has a charter, a named owner, explicit boundaries, and observability from day one.
  3. Graph your work. Connect people, permissions, and policies so agents operate with context and least-privilege access.
  4. Version the flow. Treat flow changes like product releases—documented, tested, reversible, and measured.
  5. Measure continuously. Track time-to-outcome, error/rework, exception rates, and SLA adherence—reviewed where leadership already looks (business reviews, portfolio forums).

Implementation tips

1) Draw the Work Chart for mission-critical journeys
Pick one customer journey, one financial core process, and one internal productivity flow. Map outcome → stages → tasks → handoffs. Mark where agents operate and where humans remain owners of record. This becomes the executive “single source” for how the work actually gets done.

2) Create a Work Chart Registry
Create a lightweight, searchable registry that lists every flow, human owner, agent(s), SLA, source, and data/permission scope. Keep it in the systems people already use (e.g., your collaboration hub) so it becomes a living reference, not a slide deck.
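Such a registry can start as a simple structured list; the fields, flow names, and agents below are illustrative assumptions:

```python
# Illustrative Work Chart registry: one record per flow, searchable by agent or owner.
REGISTRY = [
    {"flow": "invoice-validation", "owner": "AP Lead",
     "agents": ["validation-agent"], "sla": "24h",
     "data_scope": "ERP read-only"},
    {"flow": "ticket-triage", "owner": "Support Manager",
     "agents": ["triage-agent"], "sla": "1h",
     "data_scope": "Helpdesk read/write"},
]

def flows_using(agent: str) -> list[str]:
    """Find every flow an agent participates in, e.g. before changing that agent."""
    return [r["flow"] for r in REGISTRY if agent in r["agents"]]

print(flows_using("triage-agent"))  # → ['ticket-triage']
```

The lookup matters more than the storage format: before a portfolio decision touches an agent, you can list exactly which flows it affects.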

3) Codify the Agent Charters
For each agent on the Work Chart, publish a one-pager: Name, Purpose, Inputs, Outputs, Boundaries, Owner, Escalation Path, Log Location. Version control these alongside the Work Chart so changes are transparent and auditable.
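Stored as structured records, charters can also be checked automatically before publication. The field names mirror the one-pager above; the example values, including the log path, are hypothetical:

```python
# Required charter fields, mirroring the one-pager above.
CHARTER_FIELDS = ["name", "purpose", "inputs", "outputs",
                  "boundaries", "owner", "escalation_path", "log_location"]

# Hypothetical example charter (all values illustrative).
validation_agent = {
    "name": "validation-agent",
    "purpose": "Match invoices to POs and flag discrepancies",
    "inputs": ["invoice PDF", "PO record"],
    "outputs": ["match result", "discrepancy flags"],
    "boundaries": "May not approve payments; escalates amounts above threshold",
    "owner": "AP Lead",
    "escalation_path": "Finance Controller",
    "log_location": "s3://example-bucket/agent-logs/validation/",  # hypothetical path
}

def charter_complete(charter: dict) -> bool:
    """A charter is publishable only if every required field is filled in."""
    return all(charter.get(f) for f in CHARTER_FIELDS)

print(charter_complete(validation_agent))  # → True
```

A check like this can run in the same version-control pipeline as the Work Chart itself, so an agent cannot ship without an owner or an escalation path.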

4) Measure where the work happens.
Instrument every node with flow health metrics—latency, error rate, rework, exception volume. Surface them in the tools leaders already use (BI dashboards, exec scorecards). The goal is to manage by flow performance, not anecdotes.
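As a sketch of what "instrumenting every node" could look like, flow-health metrics can be derived from per-task event logs; the event layout and values here are assumptions for illustration:

```python
from statistics import mean

# Illustrative task events: (flow, latency_minutes, had_error, was_rework, was_exception)
events = [
    ("invoice-validation", 12, False, False, False),
    ("invoice-validation", 45, True,  True,  False),
    ("invoice-validation", 30, False, False, True),
    ("invoice-validation", 15, False, False, False),
]

def flow_health(events: list[tuple]) -> dict:
    """Aggregate raw task events into the flow-health metrics named above."""
    n = len(events)
    return {
        "avg_latency_min": mean(e[1] for e in events),
        "error_rate": sum(e[2] for e in events) / n,
        "rework_rate": sum(e[3] for e in events) / n,
        "exception_rate": sum(e[4] for e in events) / n,
    }

health = flow_health(events)
print(health["avg_latency_min"], health["error_rate"])  # → 25.5 0.25
```

Feeding these aggregates into the BI dashboards leaders already use keeps the discussion on flow performance rather than anecdotes.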

5) Shift budgeting from headcount to flows
Attach compute/SLA budgets to the flows in your Work Chart. Review them at portfolio cadence. Fund increases when there’s demonstrable improvement in speed, quality, or risk. This aligns investment with value creation rather than with org boxes.

6) Communicate the new social contract
Use the Work Chart in town halls and leader roundtables to explain what’s changing, why it matters, and how roles evolve. Show before/after charts for one flow to make the change tangible. Invite feedback; capture exceptions; iterate.


Stop reorganizing boxes – start redesigning flows. Mandate that each executive publishes the first Work Chart for one mission-critical journey—complete with agent charters, SLAs, measurements, and named owners of record. Review it with the same rigor you apply to budget and risk. Organizations that do this won’t just “adopt AI”; they’ll build a living structure that mirrors how value is created—and compounds it.