Where AI Is Creating the Most Value (Q4 2025)

There’s still a value gap—but leaders are breaking away. In the latest BCG work, top performers report around five times more revenue uplift and three times deeper cost reduction from AI than peers. The common thread: they don’t bolt AI onto old processes—they rewire the work. As BCG frames it, the 10-20-70 rule applies: roughly 10% technology, 20% data and models, and 70% process and organizational change. That’s where most of the value is released.

This article is for leaders deciding where to place AI bets in 2025. If you’re past “should we do AI?” and into “where do we make real money?”, this is your map.


Where the money is (cross-industry)

1) Service operations: cost and speed
AI handles simple, repeatable requests end-to-end and coaches human agents on the rest. The effect: shorter response times, fewer repeat contacts, and more consistent outcomes—without sacrificing customer experience.

2) Supply chain: forecast → plan → move
The gains show up in fewer stockouts, tighter inventories, and faster cycle times. Think demand forecasting, production planning, and dynamic routing that reacts to real-world conditions.

3) Software and engineering: throughput
Developer copilots and automated testing increase release velocity and reduce rework. You ship improvements more often, with fewer defects, and free scarce engineering time for higher-value problems.

4) HR and talent: faster funnels and better onboarding/learning
Screening, scheduling, and candidate communication are compressed from days to hours. Internal assistants support learning and workforce planning. The results: shorter time-to-hire and better conversion through each stage.

5) Marketing and sales: growing revenue
Personalization, next-best-action, and on-the-fly content creation consistently drive incremental sales. This is the most frequently reported area for measurable revenue lift.

Leadership advice: Pick two or three high-volume processes (at least one cost, one revenue). Redesign the workflow rather than just adding AI on top. Set hard metrics (cost per contact, cycle time, revenue per visit) and a 90-day checkpoint. Industrialize what works; kill what doesn’t.


Sector spotlights

Consumer industries (Retail & Consumer Packaged Goods)

Marketing and sales.

  • Personalized recommendations increase conversion and basket size; retail media programs are showing verified incremental sales.
  • AI-generated marketing content reduces production costs and speeds creative iteration across markets and channels. Mondelez reported 30-50% reduction in marketing content production costs using generative AI at scale.
  • Campaign analytics that used to take days are produced automatically, so teams run more “good bets” each quarter.

Supply chain.

  • Demand forecasting sharpens purchasing and reduces waste.
  • Production planning cuts changeovers and work-in-progress.
  • Route optimization lowers distance traveled and fuel, improving on-time delivery.

Customer service.

  • AI agents now resolve a growing share of contacts end-to-end. IKEA’s AI agents already handle 47% of all requests, so service staff can devote more attention to the remaining questions.
  • Agent assist gives human colleagues instant context and suggested next steps.
    The result is more issues solved on first contact, shorter wait times, and maintained satisfaction, provided clear hand-offs to humans exist for complex cases.

What to copy: Start with one flagship process in each of the three areas above; set a 90-day target; only then roll it across brands and markets with a standard playbook.


Manufacturing (non-pharma)

Predictive maintenance.
When tied into scheduling and spare-parts planning, predictive maintenance reduces unexpected stoppages and maintenance costs—foundational for higher overall equipment effectiveness (OEE).

Computer-vision quality control.
In-line visual inspection detects defects early, cutting scrap, rework, and warranty exposure. Value compounds as models learn across lines and plants.

Production scheduling.
AI continuously rebalances schedules for constraints, changeovers, and demand shifts—more throughput with fewer bottlenecks. Automotive and electronics manufacturers report 5-15% throughput gains when AI-driven scheduling handles real-time constraints.

Move to scale: Standardize data capture on the line, run one “AI plant playbook” to convergence, then replicate. Treat models as line assets with clear ownership, service levels, and a retraining cadence.


Pharmaceuticals

R&D knowledge work.
AI accelerates three high-friction areas: (1) large evidence reviews, (2) drafting protocols and clinical study reports, and (3) assembling regulatory summaries. You remove weeks from critical paths and redirect scientists to higher-value analysis.

Manufacturing and quality.
Assistants streamline batch record reviews, deviation write-ups, and quality reports. You shorten release cycles and reduce delays. Govern carefully under Good Manufacturing Practice, with humans approving final outputs.

Practical tip: Stand up an “AI for documents” capability (standardized templates, automated redaction, citation checking, audit trails) before you touch lab workflows. It pays back quickly, proves your governance model, and reduces compliance risk when you expand to higher-stakes processes.


Healthcare providers

Augment the professional; automate the routine. Radiology, pathology, and frontline clinicians benefit from AI that drafts first-pass reports, triages cases, and pre-populates documentation. Northwestern Medicine studies show approximately 15.5% average productivity gains (up to 40% in specific workflows) in radiology report completion without accuracy loss. Well-designed oversight maintains quality while reducing burnout.

Non-negotiable guardrail: Clear escalation rules for edge cases and full traceability. If a tool can’t show how it arrived at a suggestion, it shouldn’t touch a clinical decision. Establish explicit human review protocols for any AI-generated clinical content before it reaches patients or medical records.


Financial services

Banking.

  • Service and back-office work: assistants summarize documents, draft responses, and reconcile data. JPMorgan reports approximately 30% fewer servicing calls per account in targeted Consumer and Community Banking segments and 15% lower processing costs in specific workflows.
  • Risk and compliance: earlier risk flags, smarter anti-money-laundering reviews, and cleaner audit trails reduce losses and manual rework.

Insurance.

  • Claims: straight-through processing for simple claims moves from days to hours.
  • Underwriting: AI assembles files and surfaces risk signals so underwriters focus on complex judgment.
  • Back office: finance, procurement, and HR automations deliver steady, compounding savings.

Leadership note: Treat service assistants and claims bots as products with roadmaps and release notes—not projects. That discipline keeps quality high as coverage expands.


Professional services (legal, consulting, accounting)

Document-heavy work is being rebuilt: contract and filing review, research synthesis, proposal generation. Well-scoped processes often see 40–60% time savings. Major law firms report contract review cycles compressed from 8–12 hours to 2–3 hours for standard agreements, with associates redirected to judgment-heavy analysis and client advisory work.

Play to win: Build a governed retrieval layer over prior matters, proposals, and playbooks—your firm’s institutional memory—then give every practitioner an assistant that can reason over it.


Energy and utilities

Grid and renewables.
AI improves demand and renewable forecasting and helps balance the grid in real time. Autonomous inspections (drones plus computer vision) speed asset checks by 60-70% and reduce hazards. Predictive maintenance on critical infrastructure prevents outages—utilities report 20-30% reduction in unplanned downtime when AI is tied into work order systems and cuts truck rolls (field service visits).

How to scale: Start with one corridor or substation, prove inspection cycle time and fault detection, then expand with a standard data schema so models learn from every site.


Next Steps (practical and measurable)

1) Choose three processes—one for cost, one for revenue, one enabler.
Examples:

  • Cost: customer service automation, predictive maintenance, the month-end finance close.
  • Revenue: personalized offers, “next-best-action” in sales, improved online merchandising.
  • Enabler: developer assistants for code and tests, HR screening and scheduling.
    Write a one-line success metric and a quarterly target for each (e.g., “reduce average response time by 30%,” “increase conversion by 2 points,” “ship weekly instead of bi-weekly”).

2) Redesign the work, not just the process map.
Decide explicitly: what moves to the machine, what stays with people, where the hand-off happens, and what the quality gate is. Train for it. Incentivize it.

3) Industrialize fast.
Stand up a small platform team for identity, data access, monitoring, and policy. Establish lightweight model governance. Create a change backbone (playbooks, enablement, internal communications) so each new team ramps faster than the last.

4) Publish a value dashboard.
Measure cash, not demos: cost per contact, cycle time, on-shelf availability, release frequency, time-to-hire, revenue per visit. Baseline these metrics before launch—most teams skip this step and cannot prove impact six months later when challenged. Review monthly. Retire anything that doesn’t move the number.
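The baseline-before-launch discipline above can be sketched in a few lines. This is an illustrative example only—the metric names mirror those in the article, but all numbers, targets, and the `Metric` class itself are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """One value-dashboard row with a pre-launch baseline and a quarterly target.

    All names and numbers below are illustrative, not from the article.
    """
    name: str
    baseline: float   # measured BEFORE launch, so impact can be proven later
    current: float
    target: float
    lower_is_better: bool = True

    def delta_pct(self) -> float:
        # Percentage change versus the pre-launch baseline.
        return (self.current - self.baseline) / self.baseline * 100

    def on_track(self) -> bool:
        if self.lower_is_better:
            return self.current <= self.target
        return self.current >= self.target

# Example dashboard rows (hypothetical numbers).
dashboard = [
    Metric("cost_per_contact_eur", baseline=6.80, current=5.10, target=5.50),
    Metric("cycle_time_days", baseline=12.0, current=9.5, target=8.0),
    Metric("revenue_per_visit_eur", baseline=3.20, current=3.45, target=3.40,
           lower_is_better=False),
]

for m in dashboard:
    status = "on track" if m.on_track() else "review / retire"
    print(f"{m.name}: {m.delta_pct():+.1f}% vs baseline -> {status}")
```

The point of the sketch is the monthly review rule made executable: each metric either beats its target or goes on the retire list—no demo counts.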

5) Keep humans in the loop where it matters.
Customer experience, safety, financial risk, and regulatory exposure all require clear human decision points. Automate confidently—but design escalation paths from day one.


Final word

In 2025, AI pays where volume is high and rules are clear (service, supply chain, HR, engineering), and where personalization drives spend (marketing and sales). The winners aren’t “using AI.” They are re-staging how the work happens—and they can prove it on the P&L.

How AI is Reshaping Human Work, Teams, and Organisational Design

The implications of AI are profound: when individuals can deliver team-level output with AI, organisations must rethink not just productivity, but the very design of work and teams. A recent Harvard Business School and Wharton field experiment titled The Cybernetic Teammate offers one of the clearest demonstrations of this shift. Conducted with 776 professionals at Procter & Gamble, the study compared individuals and teams working on real product-innovation challenges, both with and without access to generative AI.

The results were striking:

  • Individuals using AI performed as well as, or better than, human teams without AI.
  • Teams using AI performed best of all.
  • AI also balanced out disciplinary biases—commercial and technical professionals produced more integrated, higher-quality outputs when assisted by AI.

In short, AI amplified human capability at both the individual and collective level. It became a multiplier of creativity, insight, and balance—reshaping the traditional boundaries of teamwork and expertise.

The Evidence Is Converging

Other large-scale studies reinforce this picture. A Harvard–BCG experiment showed consultants using GPT-4 were 12% more productive, 25% faster, and delivered work rated 40% higher in quality for tasks within the model’s “competence frontier.”


How Work Will Be Done Differently

These findings signal a fundamental redesign in how work is organised. The dominant model—teams collaborating to produce output—is evolving toward individual-with-AI first, followed by team integration and validation.

A typical workflow may now look like this:

AI-assisted ideation → human synthesis → AI refinement → human decision.

Work becomes more iterative, asynchronous, and cognitively distributed. Human collaboration increasingly occurs through the medium of AI: teams co-create ideas, share prompt libraries, and build upon each other’s AI-generated drafts.

The BCG study introduces a useful distinction:

  • Inside the AI frontier: tasks within the model’s competence—ideation, synthesis, summarisation—where AI can take the lead.
  • Outside the AI frontier: tasks requiring novel reasoning, complex judgment, or proprietary context—where human expertise must anchor the process.

Future roles will be defined less by function and more by how individuals navigate that frontier: knowing when to rely on AI and when to override it. Skills like critical reasoning, verification, and synthesis will matter more than rote expertise.


Implications for Large Enterprises

For established organisations, the shift toward AI-augmented work changes the anatomy of structure, leadership, and learning.

1. Flatter, more empowered structures.
AI copilots widen managerial spans by automating coordination and reporting. However, they also increase the need for judgmental oversight—requiring managers who coach, review, and integrate rather than micromanage.

2. Redefined middle-management roles.
The traditional coordinator role gives way to integrator and quality gatekeeper. Managers become stewards of method and culture rather than traffic controllers.

3. Governance at the “AI frontier.”
Leaders must define clear rules of engagement: what tasks can be automated, which require human review, and what data or models are approved. This “model–method–human” control system ensures both productivity and trust.

4. A new learning agenda.
Reskilling moves from technical training to cognitive fluency: prompting, evaluating, interpreting, and combining AI insights with business judgment. The AI-literate professional becomes the new organisational backbone.

5. Quality and performance metrics evolve.
Volume and throughput give way to quality, cycle time, rework reduction, and bias detection—metrics aligned with the new blend of human and machine contribution.

In short, AI doesn’t remove management—it redefines it around sense-making, coaching, and cultural cohesion.


Implications for Startups and Scale-Ups

While enterprises grapple with governance and reskilling, startups are already operating in an AI-native way.

Evidence from recent natural experiments shows that AI-enabled startups raise funding faster and with leaner teams. The cost of experimentation drops, enabling more rapid iteration but also more intense competition.

The typical AI-native startup now runs with a small human core and an AI-agent ecosystem handling customer support, QA, and documentation. The operating model is flatter, more fluid, and relentlessly data-driven.

Yet the advantage is not automatic. As entry barriers fall, differentiation depends on execution, brand, and customer intimacy. Startups that harness AI for learning loops—testing, improving, and scaling through real-time feedback—will dominate the next wave of digital industries.


Leadership Imperatives – Building AI-Enabled Work Systems

For leaders, the challenge is no longer whether to use AI, but how to design work and culture around it. Five imperatives stand out:

  1. Redesign workflows, not just add tools. Map where AI fits within existing processes and where human oversight is non-negotiable.
  2. Build the complements. Create shared prompt libraries, custom GPTs, structured review protocols, and access to verified data.
  3. Run controlled pilots. Test AI augmentation in defined workstreams, measure speed, quality, and engagement, and scale what works.
  4. Empower and educate. Treat AI literacy as a strategic skill—every employee a prompt engineer, every manager a sense-maker.
  5. Lead the culture shift. Encourage experimentation, transparency, and open dialogue about human-machine collaboration.

Closing Thought

AI will not replace humans or teams. But it will transform how humans and teams create value together.

The future belongs to organisations that treat AI not as an external technology, but as an integral part of their work design and learning system. The next generation of high-performing enterprises—large and small—will be those that master this new choreography between human judgment and machine capability.

AI won’t replace teams—but teams that know how to work with AI will outperform those that don’t.

More on this in one of my next newsletters.

Consultancy, Rewired: AI’s Impact on consultancy firms and what their clients should expect

The bottom line: consulting is not going away. It is changing—fast. AI removes a lot of manual work and shifts the focus to speed, reusable tools, and results that can be measured. This has consequences for how firms are organised and how clients buy and use consulting.


What HBR says

The main message: AI is reshaping the structure of consulting firms. Tasks that used to occupy many junior people—research, analysis, and first-pass modelling—are now largely automated. Teams get smaller and more focused. Think of a move from a wide pyramid to a slimmer column.

New human roles matter more: people who frame the problem, translate AI insights into decisions, and work with executives to make change happen. HBR also points to a new wave of AI-native boutiques. These firms start lean, build reusable assets, and aim for outcomes rather than volume of slides.

What The Economist says

The emphasis here is on client expectations and firm economics. Clients want proof of impact, not page counts. If AI can automate a lot of the production work, large firms must show where they still create unique value. That means clearer strategies, simpler delivery models, and pricing that links fees to outcomes.

The coverage also suggests this is a structural shift, not a short-term cycle. Big brands will need to combine their access and experience with technology, reusable assets, and strong governance to stay ahead.


What AI can do in consulting — now vs. next (practical view)

Now

  • Discovery & synthesis. AI can sweep through filings, research, transcripts, and internal knowledge bases to cluster themes, extract evidence with citations, and surface red flags. This compresses the preparation phase of understanding so teams spend time on framing the problem and implications.
  • First-pass quantification & modelling. It produces draft market models and sensitivity analyses that consultants then stress-test. The benefit isn’t perfect numbers; it’s cycle-time—from question to a defendable starting point—in hours, not days.
  • Deliverables at speed. From storylines to slide drafts and exhibits, AI enforces structure and house style, handles versioning, and catches inconsistencies. Human effort shifts to message clarity, executive alignment, and implications for decision makers.
  • Program operations & governance. Agents can maintain risk and issue logs, summarize meetings, chase actions, and prepare steering packs. Leaders can use meeting time for choices, not status updates.
  • Knowledge retrieval & reuse. Firm copilots bring up relevant cases, benchmarks, and experts. Reuse becomes normal, improving speed and consistency across engagements.

Next (12–24 months)

  • Agentic due diligence. Multi-agent pipelines will triage vast data sets (news, filings, call transcripts), propose claims with evidence trails, and flag anomalies for partner review—compressing weeks to days while keeping human judgment in the loop.
  • Scenario studios and digital twins. Reusable models (pricing, supply, workforce) will let executives explore “what-if” choices live, improving decision speed and buy-in.
  • Operate / managed AI. Advisory will bundle with run-time AI services (build-run-transfer), priced on SLAs or outcome measures, linking fees to performance after go-live.
  • Scaled change support. Chat-based enablement and role-tailored nudges will help people adopt new behaviors at scale; consultants curate and calibrate content and fine-tune interventions instead of running endless classroom sessions.

Reality check: enterprise data quality, integration, and model-risk constraints keep humans firmly in the loop. The best designs make this explicit with approvals, audit trails, and guardrails.


Five industry scenarios (2025–2030)

  1. AI-Accelerated Classic. The big firms keep CXO access but run leaner teams; economics rely on IP-based assets, and pricing shifts from hours to outcomes.
  2. Hourglass Market. Strong positions at the top (large integrators) and at the bottom (specialist boutiques). The middle gets squeezed as clients self-serve standard analysis.
  3. Productised & Operate. Advice comes with data, models, and managed services. Contracts include service levels and shared-savings, tying value to real-world results.
  4. Client-First Platforms. Companies build internal AI studios and bring in targeted experts. Firms must plug into client platforms and compete on speed, trust, and distinctive assets.
  5. AI-Native Agencies Rise. New entrants born with automation-first workflows and thin layers scale quickly—resetting expectations of speed, price-performance, and what a “team” looks like.

What clients should ask for (and firms should offer)

  • Ask for assets, not documents. Demand reusable data, models, and playbooks that you keep using after the engagement—and specify this in the SOW.
  • Insist on transparency. Demand visibility into data sources, prompt chains, evaluation methods, and guardrails so you can trust, govern, and scale what’s built.
  • Design for capability transfer. Make enablement, documentation, and handover part of the scope with clear acceptance criteria.
  • Outcome-linked pricing where possible. Start with a pilot and clear success metrics; scale with contracts tied to results or service levels.

Close

AI is changing both the shape of consulting firms and the way organisations use them. Smaller teams, reusable assets, and outcome focus will define the winners.

From Org Charts to Work Charts – Designing for Hybrid Human–Agent Organisations

The org chart is no longer the blueprint for how value gets created. As Microsoft’s Asha Sharma puts it, “the org chart needs to become the work chart.” When AI agents begin to own real slices of execution—preparing customer interactions, triaging tickets, validating invoices—structure must follow the flow of work, not the hierarchy of titles. This newsletter lays out what that means for leaders and how to move, decisively, from boxes to flows.


Why this is relevant now

Agents are leaving the lab. The conversation has shifted from “pilot a chatbot” to “re-architect how we deliver outcomes.” Boards and executive teams are pushing beyond experiments toward embedded agents in sales, service, finance, and supply chain. This is not a tooling implementation—it’s an operating-model change.

Hierarchy is flattening. When routine coordination and status reporting are automated, you need fewer layers to move information and make decisions. Roles compress; accountabilities become clearer; cycle times shrink. The management burden doesn’t disappear—it changes. Leaders spend less time collecting updates and more time setting direction, coaching, and owning outcomes.

Enterprises can scale the pattern. AI-native “tiny teams” design around flows—the sequence of steps that create value—rather than traditional functions. Large organizations shouldn’t copy their size; they should copy this unit of design. Work Charts make each flow explicit, assign human and agent owners, and let you govern and scale it across the enterprise.


What is a Work Chart?

A Work Chart is a living map of how value is produced—linking outcomes → end-to-end flows → tasks → handoffs—and explicitly assigning human owners and agent operators at each step. Where an org chart shows who reports to whom, a Work Chart shows:

  • Where the work happens – the flow and its stages
  • Who is accountable – named human owners of record
  • What is automated – agents with charters and boundaries
  • Which systems/data/policies apply – the plumbing and guardrails
  • How performance is measured – SLAs, exceptions, error/rework, latency

A Work Chart is your work graph made explicit—connecting goals, people, and permissions so agents can act with context and leaders can govern outcomes.
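The outcome → flow → task → handoff structure described above can be represented as a small data model. A minimal sketch, assuming a Python-based registry; the flow, step names, and owners are hypothetical examples, not from the article:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Step:
    """One stage in a flow: run by a named human owner, an agent, or both."""
    name: str
    human_owner: Optional[str] = None   # named owner of record
    agent: Optional[str] = None         # agent operating this step, if any
    systems: list = field(default_factory=list)  # systems/data this step touches

@dataclass
class Flow:
    """An end-to-end flow on the Work Chart, anchored to a business outcome."""
    outcome: str
    steps: list

    def automated_share(self) -> float:
        # Fraction of steps with an agent attached—one simple Work Chart metric.
        return sum(1 for s in self.steps if s.agent) / len(self.steps)

# Hypothetical invoice-validation flow.
flow = Flow(
    outcome="Invoices validated within 24h",
    steps=[
        Step("extract_fields", agent="invoice-reader", systems=["ERP"]),
        Step("match_to_po", agent="po-matcher", systems=["ERP"]),
        Step("resolve_exceptions", human_owner="AP team lead"),
        Step("approve_payment", human_owner="Finance controller"),
    ],
)
print(f"{flow.outcome}: {flow.automated_share():.0%} of steps agent-operated")
```

Even a toy model like this makes the governance questions concrete: every step either has a named human owner, an agent with boundaries, or both—and the gaps are immediately visible.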


Transformation at every level

Board / Executive Committee
Set policy for non-human resources (NHRs) just as you do for capital and people. Define decision rights, guardrails, and budgets (compute/tokens). Require blended KPIs—speed, cost, risk, quality—reported for human–agent flows, not just departments. Make Work Charts a standing artifact in performance reviews.

Enterprise / Portfolio
Shift from function-first projects to capability platforms (retrieval, orchestration, evaluation, observability) that any BU can consume. Keep a registry of approved agents and a flow inventory so portfolio decisions always show which flows, agents, and data they affect. Treat major flow changes like product releases—versioned, reversible, and measured.

Business Units / Functions
Turn priority processes into agent-backed services with clear SLAs and a named human owner. Publish inputs/outputs, boundaries (what the agent may and may not do), and escalation paths. You are not “installing AI”; you’re standing up services that can be governed and improved.

Teams
Maintain an Agent Roster (purpose, inputs, outputs, boundaries, logs). Fold Work Chart updates into existing rituals (standups, QBRs). Managers spend less time on status and more on coaching, exception handling, and continuous improvement of the flow.

Individuals
Define personal work charts for each role (the 5–7 recurring flows they own) and the agents they orchestrate. Expect role drift toward judgment, relationships, and stewardship of AI outcomes.


Design principles – what “good” looks like

  1. Outcome-first. Start from customer journeys and Objectives and Key Results (OKRs); redesign flows to meet them.
  2. Agents as first-class actors. Every agent has a charter, a named owner, explicit boundaries, and observability from day one.
  3. Graph your work. Connect people, permissions, and policies so agents operate with context and least-privilege access.
  4. Version the flow. Treat flow changes like product releases—documented, tested, reversible, and measured.
  5. Measure continuously. Track time-to-outcome, error/rework, exception rates, and SLA adherence—reviewed where leadership already looks (business reviews, portfolio forums).

Implementation tips

1) Draw the Work Chart for mission-critical journeys
Pick one customer journey, one financial core process, and one internal productivity flow. Map outcome → stages → tasks → handoffs. Mark where agents operate and where humans remain owners of record. This becomes the executive “single source” for how the work actually gets done.

2) Create a Work Chart Registry
Create a lightweight, searchable registry that lists every flow, human owner, agent(s), SLA, source, and data/permission scope. Keep it in the systems people already use (e.g., your collaboration hub) so it becomes a living reference, not a slide deck.

3) Codify the Agent Charters
For each agent on the Work Chart, publish a one-pager: Name, Purpose, Inputs, Outputs, Boundaries, Owner, Escalation Path, Log Location. Version control these alongside the Work Chart so changes are transparent and auditable.
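The one-pager fields above lend themselves to a version-controlled artifact. A minimal sketch: the field names follow the charter described in the text, while the example agent, owner, and log location are hypothetical:

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class AgentCharter:
    """One-pager for an agent on the Work Chart.

    Field names follow the article's charter; example values are hypothetical.
    """
    name: str
    purpose: str
    inputs: tuple
    outputs: tuple
    boundaries: tuple        # what the agent may NOT do
    owner: str               # named human owner of record
    escalation_path: str
    log_location: str
    version: str = "1.0.0"   # version-controlled alongside the Work Chart

charter = AgentCharter(
    name="invoice-validator",
    purpose="Validate supplier invoices against purchase orders",
    inputs=("invoice PDF", "PO record"),
    outputs=("validation verdict", "exception report"),
    boundaries=("may not approve payments", "may not contact suppliers"),
    owner="AP team lead",
    escalation_path="AP team lead -> Finance controller",
    log_location="s3://finance-logs/agents/invoice-validator/",
)

# Serialize next to the Work Chart so every change is transparent and auditable.
print(json.dumps(asdict(charter), indent=2))
```

Keeping the charter frozen and serialized means a boundary change is a visible, reviewable diff rather than a quiet edit to a slide.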

4) Measure where the work happens.
Instrument every node with flow health metrics—latency, error rate, rework, exception volume. Surface them in the tools leaders already use (BI dashboards, exec scorecards). The goal is to manage by flow performance, not anecdotes.

5) Shift budgeting from headcount to flows
Attach compute/SLA budgets to the flows in your Work Chart. Review them at portfolio cadence. Fund increases when there’s demonstrable improvement in speed, quality, or risk. This aligns investment with value creation rather than with org boxes.

6) Communicate the new social contract
Use the Work Chart in town halls and leader roundtables to explain what’s changing, why it matters, and how roles evolve. Show before/after charts for one flow to make the change tangible. Invite feedback; capture exceptions; iterate.


Stop reorganizing boxes – start redesigning flows. Mandate that each executive publishes the first Work Chart for one mission-critical journey—complete with agent charters, SLAs, measurements, and named owners of record. Review it with the same rigor you apply to budget and risk. Organizations that do this won’t just “adopt AI”; they’ll build a living structure that mirrors how value is created—and compounds it.

Closing the Digital Competency Gap in the Boardroom

This article is based on a thesis I have written for the Supervisory Board program (NCC 73) at Nyenrode University, which I will complete this month. I set out to answer a practical question: how can supervisory boards close the digital competency gap so their oversight of digitalization and AI is effective and value-creating?

The research combined literature, practitioner insights, and my own experience leading large-scale digital transformations. The signal is clear: technology, data, and AI are no longer specialist topics—they shape strategy, execution, and resilience. Boards that upgrade their competence change the quality of oversight, the shape of investment, and ultimately the future of the company.


1) Business model transformation

Digital doesn’t just add channels; it rewrites how value is created and captured. The board’s role is to probe how data, platforms, and AI may alter customer problem–solution fit, value generation logic, and ecosystem position over the next 3–5–10 years. Ask management to make the trade-offs explicit: which parts of the current model should we defend, which should we cannibalize, and which new options (platform plays, data partnerships, embedded services) warrant small “option bets” now?

What to look out for: strategies that talk about “going digital” without quantifying how revenue mix, margins, or cash generation will change. Beware dependency risks (platforms, app stores, hyperscalers) that shift bargaining power over time. Leverage scenario planning and clear leading indicators—so the board can see whether the plan is working early enough to pivot or double down.

2) Operational digital transformation

The strongest programs are anchored in outcomes, not output. Boards should ask to see business results expressed in P&L and balance-sheet terms (growth, cost, capital turns), not just “go-live” milestones. Require a credible pathway from pilot to scale: gated tranches that release funding when adoption, value, and risk thresholds are met; and clear “stop/reshape” criteria to avoid sunk-cost escalation.

What to look out for: “watermelon” reporting—dashboards that stay green on the outside while progress and adoption lag behind; vendor-led roadmaps that don’t fit the architecture; and under-resourced change management. As a rule of thumb, ensure 10–15% of major transformation budgets are reserved for change, communications, and training. Ask who owns adoption metrics and how you’ll know—early—that teams are using what’s been built.

3) Organization & culture

Technology succeeds at the speed of behaviour change. The board should examine whether leadership is telling a coherent story (why/what/how/who) and whether middle management has the capacity to translate it into local action. Probe how AI will reshape roles and capabilities, and whether the company has a reskilling plan that is targeted, measurable, and linked to workforce planning.

What to look out for: assuming tools will “sell themselves,” starving change budgets, and running transformations in a shadow lane disconnected from the real business. Look for feedback loops—engagement diagnostics, learning dashboards, peer-to-peer communities—that surface resistance early and help leadership course-correct before adoption stalls.

4) Technology investments

Oversight improves dramatically when the board insists on a North Star architecture that makes trade-offs visible: which data foundations come first, how integration will work, and how security/privacy are designed in. Investments should be staged, with each tranche linked to outcome evidence and risk mitigation, and with conscious decisions about vendor lock-in and exit options.

What to look out for: shiny-tool syndrome, financial engineering that ignores lifetime Total Cost of Ownership (TCO), and weak vendor due diligence. Ask for risk analysis (e.g., cloud and vendor exposure) and continuity plans that are actually tested. Expect architecture reviews by independent experts on mission-critical choices, so the board gets a clear view beyond vendor narratives.

5) Security & compliance

Cyber, privacy, and emerging AI regulation must be treated as enterprise-level risks with clear ownership, KPIs, and tested recovery playbooks. Boards should expect regular exercises and evidence that GDPR, NIS2, and AI governance are embedded in product and process design—not bolted on at the end.

What to look out for: “tick-the-box” compliance that produces documents rather than resilience, infrequent or purely theoretical drills, and untested backups. Probe third-party and supply-chain exposure as seriously as internal controls. The standard is not perfection; it’s informed preparedness, repeated practice, and learning from near-misses.


Seven structural moves that work

  1. Make digital explicit in board profiles. Use a competency matrix that distinguishes business-model, data/AI, technology, and cyber/compliance fluency. Recruit to close gaps or appoint external advisors—don’t hide digital under a generic “technology” label.
  2. Run periodic board maturity assessments. Combine self-assessment with executive feedback to identify capability gaps. Tie development plans to the board calendar (e.g., pre-strategy masterclasses, deep-dives before major investments).
  3. Hard-wire digital/AI into the agenda. Move from ad-hoc updates to a cadence: strategy and scenario sessions, risk and resilience reviews, and portfolio health checks. Make room for bad news early so issues surface before they become expensive.
  4. Adopt a board-level Digital & IT Cockpit. Track six things concisely: run-the-business efficiency, risk posture, innovation enablement, strategy alignment, value creation, and future-proofing (change control, talent, and architecture). Keep trends visible across quarters.
  5. Establish a Digital | AI Committee (where applicable). This complements—not replaces—the Audit Committee. Mandate: opportunities and threats, ethics and risk, investment discipline, and capability building. The committee prepares the ground; the full board takes the decisions.
  6. Use independent expertise by default on critical choices. Commission targeted reviews (architecture, vendor due diligence, cyber resilience) to challenge internal narratives. Independence is not a luxury; it’s how you avoid groupthink and discover blind spots in time.
  7. Onboard and upskill continuously. Provide a digital/AI onboarding for new members; schedule briefings with external experts; and use site visits to see real adoption. Treat learning like risk management: systematic, scheduled, and recorded.
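The Digital & IT Cockpit in move 4 can start as nothing more elaborate than a structured definition of what is tracked per dimension. Below is a minimal Python sketch: the six dimensions follow the list above, but every indicator name is an invented placeholder, not a prescribed metric set.

```python
# Sketch of a board-level Digital & IT Cockpit as a data structure.
# The six dimensions come from the article; the indicator names are
# hypothetical examples a board might choose, not a standard.

COCKPIT = {
    "run_the_business_efficiency": ["IT cost vs. revenue", "incident rate"],
    "risk_posture": ["open critical vulnerabilities", "audit findings"],
    "innovation_enablement": ["pilots in flight", "time to first release"],
    "strategy_alignment": ["% spend on strategic initiatives"],
    "value_creation": ["realized benefits vs. business case"],
    "future_proofing": ["change-control maturity", "key-role vacancies",
                        "architecture debt index"],
}

def quarterly_view(metrics):
    """Flatten the cockpit into (dimension, indicator) rows for trending."""
    return [(dim, ind) for dim, inds in metrics.items() for ind in inds]

rows = quarterly_view(COCKPIT)
print(len(rows))  # 11 indicator rows across six dimensions
```

The point of the flat view is the instruction in move 4 itself: keep the same rows visible quarter over quarter so trends, not snapshots, drive the discussion.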

Do you need a separate “Digital Board”?

My reflection: competence helps, but time and attention are the true scarcities. In digitally intensive businesses—where data platforms, AI-enabled operations, and cyber exposure shape enterprise value and are moving fast—a separate advisory or oversight body can deepen challenge and accelerate learning. It creates space for structured debate on architecture, ecosystems, and regulation without crowding out other board duties.

This isn’t a universal prescription. In companies where digital is material but not defining, strengthening the main board with a committee and better rhythms is usually sufficient. But when the operating model’s future rests on technology bets, a dedicated Digital Board (or equivalent advisory council) can bring the needed altitude, continuity, and specialized challenge to help the supervisory board make better, faster calls.


What this means for your next board cycle

The practical message from the thesis is straightforward: digital oversight is a core board responsibility that can be institutionalised. Start by clarifying the capability you need (the competency matrix), then hard-wire the conversation into the board’s rhythms (the agenda and cockpit), and raise the quality of decisions (staged investments, independent challenge, real adoption metrics). Expect a culture shift: from project status to value realization, from tool choice to architecture, from compliance as paperwork to resilience as practice.

Most importantly, treat this as a journey. Boards that improve a little each quarter—on fluency, on the sharpness of their questions, on the discipline of their investment decisions—create compounding advantages. The gap closes not with a single appointment or workshop, but with deliberate governance that learns, adapts, and holds itself to the same standard it asks of management.

Why 95% of AI Pilots Fail (MIT Study) – And How to Beat the Odds

Last week, an MIT study sent shockwaves through the AI and business community: 95% of AI pilots fail to deliver measurable business returns. Headlines spread fast, with investors and executives questioning whether enterprise AI is a bubble.

But behind the headlines lies a more nuanced story. The study doesn’t show that AI lacks potential—it shows that most organizations are not yet equipped to turn AI experiments into real business impact.


Myth vs. Reality: What Other Research Tells Us

While the MIT report highlights execution gaps, other studies paint a more balanced picture:

  • McKinsey (2025): AI adoption is rising fast, with value emerging where firms rewire processes and governance.
  • Stanford AI Index (2025): Investment and adoption continue to accelerate, signaling confidence in the long-term upside.
  • Field studies: Copilots in customer service and software engineering deliver double-digit productivity gains—but only when properly integrated.
  • MIT SMR–BCG: Companies that give individuals tangible benefits from AI—and track the right KPIs—are 6x more likely to see financial impact.

The picture is clear: AI works, but only under the right conditions.


Why AI Projects Fail (The 10 Traps)

1. No learning loop
Many AI pilots are clever demos that never improve once deployed. Without feedback mechanisms and continuous learning, the system remains static—and users quickly revert to old ways of working.

2. Integration gaps
AI may deliver great results in a sandbox, but in production it often fails to connect with core systems like CRM or ERP. Issues with identity management, permissions, and latency kill adoption.

3. Vanity pilots
Executives often prioritize flashy use cases—like marketing campaigns or customer-facing chatbots—while ignoring back-office automations. The result: excitement without measurable cash impact.

4. Build-first reflex
Organizations rush to build their own AI tools, underestimating the complexity of user experience (UX), guardrails, data pipelines, and monitoring. Specialist partners often outperform in speed and quality.

5. Six-month ROI traps
Leadership expects visible returns within half a year. But AI adoption follows a J-curve: disruption comes first, with benefits only materializing once processes and people adapt.

6. Weak KPIs
Too many pilots measure activity—such as number of prompts or usage time—rather than outcomes like error reduction, cycle time improvements, or cost savings. Without the right metrics, it’s impossible to prove value.

7. No product owner
AI projects often sit “between” IT, data, and the business, leaving no single accountable leader. Without an empowered product owner with a P&L target, projects stall in pilot mode.

8. Change ignored
Technology is deployed, but users aren’t engaged. Poor UX, lack of training, and trust concerns mean adoption lags. In response, employees turn to consumer AI tools instead of sanctioned ones.

9. Data & policy drag
Even when the AI works, poor data quality, fragmented sources, and unclear governance delay rollouts. Legal and compliance teams often block scaling because policies are not defined early enough.

10. Wrong first bets
Too many companies start with complex tasks. Early success is more likely in “thin-slice” repetitive processes—like call summarization or contract intake—that can prove value quickly.


How to Beat the Odds (10 Fixes That Work)

1. Design for learning
Build AI systems with memory, feedback capture, and regular improvement cycles. If a tool cannot learn and adapt in production, it should never progress beyond pilot stage.

2. Fix integration before inference
Prioritize robust connections into your CRM, ERP, and ticketing systems. AI without seamless workflow integration is just an isolated chatbot with no business impact.

3. Pick quick-win use cases
Target repetitive, document- and conversation-heavy flows—like claims processing, contract extraction, or helpdesk queries. These areas deliver ROI within 90–120 days and build momentum.

4. Appoint an AI Product Owner
Every use case should have a leader with budget, KPIs, and authority. This person is responsible for hitting targets and driving the project through pilot, limited production, and full scale-up.

5. Measure outcomes, not activity
Define 3–5 hard business KPIs (e.g., −25% contract cycle time, −20% cost per contact) and track adoption leading indicators. Publish a regular value scorecard to make progress visible.
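To make “outcomes, not activity” concrete, a value scorecard can compare each KPI’s baseline against its current value and a target change. The sketch below is illustrative only: the helper function, KPI names, and figures are invented, not from the study or any standard tool.

```python
# Hypothetical value-scorecard helper. The KPI examples and numbers
# are invented for illustration; only the principle (track outcome
# deltas against hard targets) comes from the article.

def kpi_progress(baseline, current, target_pct_change):
    """Return (actual % change, on_track) for one KPI.

    target_pct_change is negative for reduction targets,
    e.g. -25 means 'reduce by 25%'.
    """
    actual = (current - baseline) / baseline * 100
    if target_pct_change < 0:
        on_track = actual <= target_pct_change  # reduced enough?
    else:
        on_track = actual >= target_pct_change  # grown enough?
    return round(actual, 1), on_track

# Contract cycle time: target -25%; baseline 20 days, now 14 days
print(kpi_progress(20, 14, -25))    # (-30.0, True)

# Cost per contact: target -20%; baseline 5.00, now 4.50
print(kpi_progress(5.0, 4.5, -20))  # (-10.0, False)
```

Publishing rows like these on a regular cadence is what turns a pilot report from “usage is up” into “the target is (not) being hit.”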

6. Buy speed, build advantage
Use specialist vendors for modular, non-differentiating tasks. Save your in-house resources for proprietary applications where AI can become a true competitive edge.

7. Rebalance your portfolio
Shift investments away from glossy front-office showcases. Focus on back-office operations and service processes where AI can cut costs and generate visible ROI quickly.

8. Make change a deliverable
Adoption doesn’t happen automatically. Co-design solutions with frontline users, train them actively, and make fallback paths obvious. Manage trust as carefully as the technology itself.

9. Educate the board on the J-curve
Set realistic expectations that ROI takes more than six months. Pilot fast, but give production deployments time to stabilize, improve, and demonstrate sustained results.

10. Prove, then scale
Choose two or three use cases, set clear ROI targets up front, and scale only after success is proven. This disciplined sequencing builds credibility and prevents overreach.


The Broader Reflection

The 95% failure rate is not a verdict on AI’s future—it’s a warning about execution risk. Today’s picture is simple: adoption and investment are accelerating, productivity impacts are real, but enterprise-scale returns require a more professional approach.

We’ve seen this pattern before. Just as with earlier waves of digital transformation, leaders tend to overestimate short-term results and underestimate mid- to long-term impact.

Agents vs. Automation – How to Choose the Right Tool for the Job

As AI agents storm the market and automation technologies mature, transformation leaders face a critical question: Not just what to automate — but how.

From RPA and low-code platforms to intelligent agents and native automation tools, the choices are expanding fast.

This article offers a practical framework to help you make the right decisions — and build automation that scales with your organization.


A Layered View of the Automation Landscape

Modern automation isn’t a single tool — it’s a full stack. Here are the key layers:

🔹 1. Digital Core Platforms

Systems like SAP, Salesforce, ServiceNow and Workday host your enterprise data and business processes. They often come with native automation tools (e.g., Salesforce Flow, SAP BTP), ideal for automating workflows within the platform.

🔹 2. Integration Platforms (iPaaS)

Tools like MuleSoft, Boomi, and Microsoft Power Platform play a foundational role in enterprise automation. These Integration Platforms as a Service (iPaaS) connect applications, data sources, and services across your IT landscape — allowing automation to function seamlessly across systems rather than in silos.

🔹 3. Automation Tools

  • RPA (e.g., UiPath) automates rule-based, repetitive tasks
  • Workflow Automation manages structured, multi-step business processes
  • Low-/No-Code Platforms (e.g., Power Apps, Mendix) empower teams to build lightweight apps and automations with minimal IT support

🔹 4. AI Agents

Tools and platforms like OpenAI Agents, Microsoft Copilot Studio, Google Vertex AI Agent Builder, and LangChain enable reasoning, adaptability, and orchestration — making them well-suited for knowledge work, decision support, and dynamic task execution.


Choosing the Right Tool for the Job

No single tool is right for every use case. Here’s how to decide:

Scenario → Best fit:

  • Rule-based, repetitive work → RPA
  • Structured, approval-based flows → Workflow Automation
  • Inside one platform (e.g., CRM/ERP) → Native Platform Automation
  • Cross-system data & process flows → Integration Platforms (iPaaS)
  • Lightweight cross-platform apps → Low-/No-Code Platforms
  • Knowledge-driven or dynamic tasks → AI Agents
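The mapping above can be captured as a simple lookup. The Python helper below is a hypothetical sketch (the scenario labels and tool names follow the table; the function itself is not part of any product API):

```python
# Illustrative scenario-to-tool lookup based on the table above.
# The mapping content comes from the article; the helper function
# is a hypothetical sketch, not a real API.

TOOL_FOR_SCENARIO = {
    "rule-based, repetitive work": "RPA",
    "structured, approval-based flows": "Workflow Automation",
    "inside one platform (e.g., crm/erp)": "Native Platform Automation",
    "cross-system data & process flows": "Integration Platforms (iPaaS)",
    "lightweight cross-platform apps": "Low-/No-Code Platforms",
    "knowledge-driven or dynamic tasks": "AI Agents",
}

def best_fit(scenario: str) -> str:
    """Suggest an automation layer; default to a hybrid of layers."""
    return TOOL_FOR_SCENARIO.get(scenario.lower(), "Hybrid (combine layers)")

print(best_fit("Rule-based, repetitive work"))  # RPA
```

The default branch reflects the point that follows: most real processes don’t fit one row, so the effective strategy is hybrid.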

The most effective automation strategies are hybrid — combining multiple tools for end-to-end value.


Implementation Roadmaps: One Journey, Many Paths

While all automation projects follow a shared journey — identify, pilot, scale — each tool requires a slightly different approach.


1. Identify the Right Opportunities

  • Native Platform Tools: Start with what’s already built into Salesforce, SAP, etc.
  • iPaaS: Identify silos where data must flow between systems
  • RPA: Use process/task mining to find repeatable, rule-based activities
  • Workflow: Focus on bottlenecks, exceptions, and handoffs
  • Low-/No-Code: Empower teams to surface automation needs and prototype fast
  • AI Agents: Look for unstructured, knowledge-heavy processes

2. Design for Fit and Governance

Each automation type requires a different design mindset — based on scope, user ownership, and risk profile.

  • Native Platform Automation: Stay aligned with vendor architecture and update cycles
  • iPaaS: Build secure, reusable data flows
  • RPA: Design for stability, handle exceptions
  • Workflow: Focus on roles, rules, and user experience
  • Low-/No-Code Platforms: Enable speed, but embed clear guardrails
  • AI Agents: Use iterative prompt design, test for reliability

Key distinction:

  • Native platform automation is ideal for secure, internal process flows.
  • Low-/no-code platforms are better for lightweight, cross-functional solutions — but they need structure to avoid sprawl.

3. Pilot, Learn, and Iterate

  • Platform-native pilots are quick to deploy and low-risk
  • RPA pilots deliver fast ROI but require careful exception handling
  • Workflow Automation pilots start with one process and involve users early to validate flow and adoption
  • Low-/no-code pilots accelerate innovation, especially at the edge
  • iPaaS pilots often work quietly in the background — but are critical for scale
  • AI agent pilots demand close supervision and feedback loops

4. Scale with Structure

To scale automation, focus not just on tools, but on governance:

  • Workflow and Low-Code: Set up federated ownership or Centres of Excellence
  • RPA and iPaaS: Track usage, manage lifecycles, prevent duplication
  • AI Agents: Monitor for performance, hallucination, and compliance
  • Native Platform Tools: Coordinate with internal admins and platform owners

The most successful organizations won’t just automate tasks — they’ll design intelligent ecosystems that scale innovation, decision-making, and value creation.


Conclusion: Architect the Ecosystem

Automation isn’t just about efficiency — it’s about scaling intelligence across the enterprise.

  • Use native platform tools when speed, security, and process alignment matter most
  • Use low-/no-code platforms to empower teams and accelerate delivery
  • Use RPA and workflows for high-volume or structured tasks
  • Use AI agents to enhance decision-making and orchestrate knowledge work
  • Use integration platforms to stitch it all together

The winners will be the ones who build coherent, adaptive automation ecosystems — with the right tools, applied the right way, at the right time.

GAINing Clarity – Demystifying and Implementing GenAI

Herewith my final summer reading book review as part of my newsletter series.
GAIN – Demystifying GenAI for Office and Home by Michael Wade and Amit Joshi offers clarity in a world filled with AI hype. Written by two respected IMD professors, this book is an accessible, structured, and balanced guide to Generative AI (GenAI), designed for a broad audience—executives, professionals, and curious individuals alike.

What makes GAIN especially valuable for leaders is its practical approach. It focuses on GenAI’s real-world relevance: what it is, what it can do, where it can go wrong, and how individuals and organizations can integrate it effectively into daily workflows and long-term strategies.

What’s especially nice is that Michael and Amit have invited several other thought and business leaders to contribute their perspectives and examples to the framework provided. (I especially liked the contribution of Didier Bonnet.)

The GAIN Framework

The book is structured into eight chapters, each forming a step in a logical journey—from understanding GenAI to preparing for its future impact. Below is a summary of each chapter’s key concepts.


Chapter 1 – EXPLAIN: What Makes GenAI Different

This chapter distinguishes GenAI from earlier AI and digital innovations. It highlights GenAI’s ability to generate original content, respond to natural-language prompts, and adapt across tasks with minimal input. Key concepts include zero-shot learning, democratized content creation, and rapid adoption. The authors stress that misunderstanding GenAI’s unique characteristics can undermine effective leadership and strategy.


Chapter 2 – OBTAIN: Unlocking GenAI Value

Wade and Joshi explore how GenAI delivers value at individual, organizational, and societal levels. It’s accessible and doesn’t require deep technical expertise to drive impact. The chapter emphasizes GenAI’s role in boosting productivity, enhancing creativity, and aiding decision-making—especially in domains like marketing, HR, and education—framing it as a powerful augmentation tool.


Chapter 3 – DERAIL: Navigating GenAI’s Risks

This chapter outlines key GenAI risks: hallucinations, privacy breaches, IP misuse, and embedded bias. The authors warn that GenAI systems are inherently probabilistic, and that outputs must be questioned and validated. They introduce the concept of “failure by design,” reminding readers that creativity and unpredictability often go hand in hand.


Chapter 4 – PREVAIL: Creating a Responsible AI Environment

Here, the focus turns to managing risks through responsible use. The authors advocate for transparency, human oversight, and well-structured usage policies. By embedding ethics and review mechanisms into workflows, organizations can scale GenAI while minimizing harm. Ultimately, it’s how GenAI is used—not just the tech itself—that defines its impact.


Chapter 5 – ATTAIN: Scaling with Anchored Agility

This chapter presents “anchored agility” as a strategy to scale GenAI responsibly. It encourages experimentation, but within a framework of clear KPIs and light-touch governance. The authors promote an adaptive, cross-functional approach where teams are empowered, and successful pilots evolve into embedded capabilities.

One of the most actionable frameworks in GAIN is the Digital and AI Transformation Journey, which outlines how organizations typically mature in their use of GenAI:

  • Silo – Individual experimentation, no shared visibility or coordination.
  • Chaos – Widespread, unregulated use. High potential but rising risk.
  • Bureaucracy – Management clamps down. Risk is reduced, but innovation stalls.
  • Anchored Agility – The desired state: innovation at scale, supported by light governance, shared learning, and role clarity.

This model is especially relevant for transformation leaders. It mirrors the organizational reality many face—not only with AI, but with broader digital initiatives. It gives leaders a language to assess their current state and a vision for where to evolve.


Chapter 6 – CONTAIN: Designing for Trust and Capability

Focusing on organizational readiness, this chapter explores structures like AI boards and CoEs. It also addresses workforce trust, re-skilling, and role evolution. Rather than replacing jobs, GenAI changes how work gets done—requiring new hybrid roles and cultural adaptation. Containment is about enabling growth, not restricting it.


Chapter 7 – MAINTAIN: Ensuring Adaptability Over Time

GenAI adoption is not static. This chapter emphasizes the need for feedback loops, continuous learning, and responsive processes. Maintenance involves both technical tasks—like tuning models—and organizational updates to governance and team roles. The authors frame GenAI maturity as an ongoing journey.


Chapter 8 – AWAIT: Preparing for the Future

The book closes with a pragmatic look ahead. It touches on near-term shifts like emerging GenAI roles, evolving regulations, and tool commoditization. Rather than speculate, the authors urge leaders to adopt a posture of informed anticipation: not reactive panic, but intentional readiness. As the GenAI field evolves, so must its players.


What GAIN Teaches Us About Digital Transformation

Beyond the specifics of GenAI, GAIN offers broader lessons that are directly applicable to digital transformation initiatives:

  • Start with shared understanding. Whether you’re launching a transformation program or exploring AI pilots, alignment starts with clarity.
  • Balance risk with opportunity. The GAIN framework models a mature transformation mindset—one that embraces experimentation while putting safeguards in place.
  • Transformation is everyone’s job. GenAI success is not limited to IT or data teams. From HR to marketing to the executive suite, value creation is cross-functional.
  • Governance must be adaptive. Rather than rigid control structures, “anchored agility” provides a model for iterative scaling—one that balances speed with oversight.
  • Keep learning. Like any transformation journey, GenAI is not linear. Feedback loops, upskilling, and cultural evolution are essential to sustaining momentum.

In short, GAIN helps us navigate the now, while preparing for what’s next. For leaders navigating digital and AI transformation, it’s a practical compass in a noisy, fast-moving world.

Fusion Strategy – How Real-Time Data and AI Will Power the Industrial Future

This book by Vijay Govindarajan and Venkat Venkatraman gives excellent insights on how industrial companies can become leaders in this Data and AI-driven age.

Rather than discarding legacy strengths, the book shows how to fuse physical assets with digital intelligence to create new value, drive outcomes, and redefine business models. It gives a compelling and well-structured roadmap for industrial companies to get ready and lead through this digital transformation.


From Pipeline to Fusion: A New Strategic Paradigm

Traditional industrial firms have long operated with a pipeline mindset – designing, building, and selling physical products through linear value chains. But in a world where customer needs change in real time, and where data flows continuously from connected devices, this model is no longer sufficient.

Fusion Strategy introduces a new playbook: combine your physical strengths with digital capabilities to compete on adaptability, outcomes, and ecosystem value. It’s about integrating the trust and scale of industrial operations with the intelligence and speed of digital platforms.


Competing in the Four Fusion Battlegrounds

At the core of the book is a powerful matrix: four battlegrounds where industrial firms must compete – and four strategic levers to win in each: Architect, Organize, Accelerate, and Monetize.

Fusion Products – Embedding intelligence into physical products

This battleground focuses on evolving the traditional product into a smart, connected version that delivers value through both physical functionality and digital enhancements. It shifts the value proposition from one-time transactions to continuous value creation.

  • Architect: Build connected products with embedded sensors and software.
  • Organize: Create cross-functional product-data-software teams.
  • Accelerate: Use real-world usage data to improve iterations and performance.
  • Monetize: Shift to usage-based pricing, subscription models, or data-informed upgrades.

Example: John Deere integrates GPS, sensors, and machine learning into its agricultural equipment, enabling precision farming and monetizing through subscription-based services.

Fusion Services – Creating new layers of customer value

This battleground addresses the transformation from product-centric to outcome-centric offerings. Services become digitally enabled and proactively delivered, increasing customer stickiness and long-term revenue potential.

  • Architect: Design service layers that improve uptime, efficiency, or experience.
  • Organize: Stand up service delivery and customer success capabilities.
  • Accelerate: Leverage AI to scale and automate service interactions.
  • Monetize: Offer predictive maintenance, remote diagnostics, or outcomes-as-a-service.

Example: Caterpillar offers remote monitoring and predictive maintenance for its heavy equipment fleet, increasing operational uptime and generating recurring service revenues.

Fusion Systems – Transforming internal operations

This battleground focuses on using data and AI to reengineer internal processes, improve agility, and reduce cost-to-serve. Real-time operational intelligence becomes a source of competitive advantage.

  • Architect: Digitize plants, supply chains, and operations with real-time visibility.
  • Organize: Break down functional silos; design around data flows.
  • Accelerate: Use AI to optimize scheduling, energy use, or resource allocation.
  • Monetize: Drive efficiency gains and free up capital for reinvestment.

Example: Schneider Electric uses digital twins and data-driven energy management to optimize operations and reduce downtime in its global manufacturing network.

Fusion Solutions – Building platforms and ecosystems

This battleground is about building broader solutions that integrate products, services, and partners. It opens new avenues for value creation through platforms, data sharing, and co-innovation.

  • Architect: Offer modular solutions with open APIs and partner integration.
  • Organize: Orchestrate partner ecosystems that create mutual value.
  • Accelerate: Foster external innovation through developer communities.
  • Monetize: Sell analytics, data products, or platform access.

Example: Tesla is reimagining mobility not just as a product (cars) but as an integrated solution combining electric vehicles, software, energy management, autonomous driving, insurance and charging/energy infrastructure.


The Role of Data Graphs in Fusion Strategy

One of the foundational concepts emphasized throughout Fusion Strategy is the importance of data graphs. These are strategic tools that connect data across silos and enable intelligent, real-time insights.

A data graph is a semantic structure that maps relationships between entities—machines, sensors, people, processes, and locations—into a flexible and navigable format. In fusion strategy, data graphs link physical and digital domains, enabling smarter operations and decisions.

How to build a data graph:

  1. Collect data from operational systems – sensors, ERP and CRM systems, etc.
  2. Define key entities and relationships – focus on what matters most.
  3. Create semantic linkages – use metadata and business context.
  4. Ensure real-time updates – to maintain situational awareness.
  5. Enable access – for both humans and AI systems.
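The five steps above can be sketched with a minimal graph structure: entities as nodes, typed relationships as edges, and metadata for semantic context. The Python below is an illustrative toy (the class, entity names, and relation types are invented for this example, not from the book or any product):

```python
# Minimal data-graph sketch: entities as nodes with attributes,
# typed relationships as edges. All names here are hypothetical
# illustrations of steps 1-5 above.

class DataGraph:
    def __init__(self):
        self.nodes = {}   # entity id -> attribute dict (semantic context)
        self.edges = []   # (source, relation, target) triples

    def add_entity(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def relate(self, source, relation, target):
        self.edges.append((source, relation, target))

    def neighbors(self, node_id, relation=None):
        """Navigate the graph: entities linked from node_id."""
        return [t for (s, r, t) in self.edges
                if s == node_id and (relation is None or r == relation)]

g = DataGraph()
g.add_entity("pump-17", type="machine", site="plant-A")          # step 1-2
g.add_entity("sensor-9", type="vibration-sensor", unit="mm/s")
g.add_entity("work-order-42", type="maintenance", status="open")
g.relate("pump-17", "monitored_by", "sensor-9")                  # step 3
g.relate("pump-17", "has_open_order", "work-order-42")

print(g.neighbors("pump-17"))                  # both linked entities
print(g.neighbors("pump-17", "monitored_by"))  # ['sensor-9']
```

In practice steps 4 and 5 (real-time updates and human/AI access) are what production graph platforms add on top of this basic node-edge-metadata shape.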

Why data graphs matter:

  • Provide context for AI and analytics.
  • Enable real-time visibility across assets and systems.
  • Power predictive services, digital twins, and platform innovation.

According to the authors, data graphs are essential for scaling fusion strategies. Without them, it’s difficult to unify insights, drive automation, or deliver integrated digital experiences.


Why This Book Stands Out

This book does not start from the successful digital-native companies; it takes the point of view of the industrial-age leaders, describing how they can become leaders in the digital age.

The structure is what makes it so useful:

  • It gives executives a language to discuss digital opportunities in operational and financial terms.
  • It balances the long-term vision with near-term execution levers.
  • It connects customer value, technology, organization, and monetization in one integrated model.

It’s a strategy-led, boardroom-level guide to competing in the AI era.


My Reflections

  • Applying Fusion Strategy means re-architecting your products and your business. It requires rewiring how you create, deliver, and capture value.
  • You don’t need to become a tech company. You need to become a fusion company – one that blends operational excellence with digital innovation.
  • Winning in Fusion means rethinking strategy, governance, talent, and incentives – all at once; in other words, it demands full transformation.

Fusion Strategy is essential reading for any industrial executive seeking to lead their company through this era of accelerated transformation. It’s not about jumping on the latest AI trend – it’s about designing a future-ready business, grounded in strategy.

The battlegrounds are clear. The tools are available. The time is now.

If AI Is So Smart, Why Are We Struggling to Use It?

The human-side barriers to AI adoption — and how to overcome them

In my previous newsletter, “Where AI is Already Making a Significant Impact on Business Process Execution – 15 Areas Explained,” we explored how AI is streamlining tasks from claims processing to customer segmentation. But despite these breakthroughs, one question keeps surfacing:

If AI is delivering so much value… why are so many organizations struggling to actually adopt it?

The answer isn’t technical — it’s human.

In this edition, I explore ten people-related reasons AI initiatives stall or underdeliver. Each barrier is followed by a practical example and suggestions for how to overcome it.


1. Fear of Job Loss and Role Redundancy

Employees fear AI will replace them, leading to resistance or disengagement. This is especially prevalent in operational roles and shared services.

Example: An EY survey found 75% of US workers worry about AI replacing their jobs. In several large organizations, process experts quietly slow-roll automation to protect their roles.

How to mitigate: Communicate early and often. Frame AI as augmentation, not replacement. Highlight opportunities for upskilling and create pathways for digitally enabled roles.


2. Loss of Meaning and Professional Identity

Even if employees accept AI won’t replace them, they may fear it will erode the craftsmanship and meaning of their work.

Example: In legal and editorial teams, professionals report reluctance to use generative AI tools because they feel it “cheapens” their contribution or downplays their expertise.

How to mitigate: Position AI as a creative partner, not a substitute. Focus on use cases that enhance quality and amplify human strengths.


3. Low AI Literacy and Confidence

Many knowledge workers don’t feel equipped to understand or apply AI tools. This leads to underutilization or misuse.

Example: I’ve seen this firsthand: employees hesitate to rely on AI tools and, out of discomfort or a lack of clarity, default to old ways of working.

How to mitigate: Launch AI literacy programs tailored to roles. Give people space to experiment, and build a shared language for AI in the organization.


4. Skills Gap: Applying AI to Domain Work

Beyond literacy, many employees lack the applied skills needed to integrate AI into their actual workflows. They may know what AI can do — but not how to adapt it to their role.

Example: In a global supply chain function, team members were aware of AI’s capabilities but struggled to translate models into usable scenarios like demand sensing or inventory risk prediction.

How to mitigate: Invest in practical upskilling: scenario-based training, role-specific accelerators, and coaching. Empower cross-functional “AI translators” to bridge tech and business.


5. Trust and Explainability Concerns

Employees and managers hesitate to rely on AI if they don’t understand “how” it reached its output — especially in decision-making contexts.

Example: A global logistics firm paused the rollout of AI-based demand forecasting after regional leaders questioned unexplained fluctuations in output.

How to mitigate: Prioritize transparency for critical use cases. Use interpretable models where possible, and combine AI output with human judgment.


6. Middle Management Resistance

Mid-level managers may perceive AI as a threat to their control or relevance. They can become blockers, slowing momentum.

Example: In a consumer goods company, digital leaders struggled to scale AI pilots because local managers didn’t support or prioritize the initiatives.

How to mitigate: Involve middle managers in co-creation. Tie their success metrics to AI-enabled outcomes and make them champions of transformation.


7. Change Fatigue and Initiative Overload

Teams already dealing with hybrid work, restructurings, or system rollouts may see AI as just another corporate initiative on top of their daily work.

Example: A pharmaceutical company with multiple digital programs saw frontline disengagement with AI pilots due to burnout and lack of clear value.

How to mitigate: Embed AI within existing transformation goals. Focus on a few high-impact use cases, and consistently communicate their benefit to teams.


8. Lack of Inclusion in Design and Rollout

When AI tools are developed in technical silos, end users often feel the solutions don’t reflect their workflows or needs.

Example: A banking chatbot failed in deployment because call center staff hadn’t been involved in the design phase — leading to confusion and distrust.

How to mitigate: Involve users early and often. Use participatory design approaches and validate tools in real working environments.


9. Ethical Concerns and Mistrust

Some employees worry AI may reinforce bias, lack fairness, or be used inappropriately — especially in sensitive areas like HR, compliance, or performance assessment.

Example: A tech firm withdrew an AI-based resume screener, even before public rollout, after internal concerns surfaced about gender and ethnicity bias.

How to mitigate: Establish clear ethical guidelines for AI. Be transparent about data usage, and create safe channels for feedback and concerns.


10. Peer Friction: “They Let the AI Do Their Job”

Even when AI is used effectively, friction can arise when colleagues feel others are “outsourcing their thinking” or bypassing effort by relying on AI tools.

Example: In a shared services team, tension grew when some employees drafted client reports with AI in minutes — while others insisted on traditional methods, feeling their contributions were undervalued.

How to mitigate: Create shared norms around responsible AI use. Recognize outcomes, not effort alone, and encourage knowledge sharing across teams.


Final Thought: It’s Not the Tech — It’s the Trust

Successful AI adoption isn’t about algorithms or infrastructure — it’s about mindsets, motivation, and meaning.

If we want people to embrace AI, we must:

  • Empower them with knowledge, skills, and confidence
  • Engage them as co-creators in the journey
  • Ensure they see personal and professional value in change

Human-centered adoption isn’t the soft side of transformation — it’s the hard edge of success. Let’s create our transformation plans with that in mind.